Gibbs Sampler

The MCMC algorithm for the generation of "randomly picked" admissible configurations of the hard-core model (see Section 3.3.1) is a special case of a so-called *Gibbs sampler* for the simulation of discrete (high-dimensional) random vectors.

- Let $V = \{1, \ldots, d\}$ be a finite (nonempty) index set and let $\mathbf{X} = (X_1, \ldots, X_d)$ be a discrete random vector taking values in the finite state space $E$, where we assume that for every pair $\mathbf{x}, \mathbf{x}' \in E$ there is a finite sequence of states $\mathbf{z}_0 = \mathbf{x}, \, \mathbf{z}_1, \ldots, \mathbf{z}_m = \mathbf{x}'$ in $E$ such that consecutive states $\mathbf{z}_{j-1}$ and $\mathbf{z}_j$ differ in at most one component. $\qquad$ (34)

- Let $\boldsymbol{\pi} = (\pi(\mathbf{x}), \ \mathbf{x} \in E)$ be the probability function of the random vector $\mathbf{X}$ with $\pi(\mathbf{x}) = P(\mathbf{X} = \mathbf{x}) > 0$ for all $\mathbf{x} \in E$, and for all $v \in V$ and $\mathbf{x} = (x_1, \ldots, x_d) \in E$ let

$$\pi_v(x_v \mid \mathbf{x}_{-v}) = \frac{\pi(\mathbf{x})}{\displaystyle\sum_{x' :\, (x_1, \ldots, x_{v-1}, x', x_{v+1}, \ldots, x_d) \in E} \pi(x_1, \ldots, x_{v-1}, x', x_{v+1}, \ldots, x_d)} \qquad (35)$$

- denote the conditional probability that the $v$-th component of $\mathbf{X}$ has the value $x_v$, given that the vector $\mathbf{x}_{-v} = (x_1, \ldots, x_{v-1}, x_{v+1}, \ldots, x_d)$ of the other components is held fixed; note that the denominator in (35) is positive because $\pi(\mathbf{x}) > 0$ for all $\mathbf{x} \in E$.
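As a concrete illustration, the conditional probabilities can be computed directly from a joint probability table. The following sketch (the two-component state space and the numerical values of the joint probabilities are assumptions for illustration) evaluates the analogue of (35) by dividing the joint probability by the sum over all states that agree in the remaining components.

```python
# Toy joint probability function pi on the state space E = {0,1}^2
# (the numbers are made up for illustration).
pi = {
    (0, 0): 0.4,
    (0, 1): 0.1,
    (1, 0): 0.2,
    (1, 1): 0.3,
}

def cond(i, x):
    """Conditional probability pi_i(x_i | x_{-i}), cf. (35): the joint
    probability of x divided by the sum of the joint probabilities over
    all states that agree with x in every component except i."""
    denom = sum(p for y, p in pi.items()
                if all(y[j] == x[j] for j in range(len(x)) if j != i))
    return pi[x] / denom

# e.g. pi_0(0 | x_1 = 0) = 0.4 / (0.4 + 0.2)
print(cond(0, (0, 0)))
```

Since the denominator always contains the term $\pi(\mathbf{x})$ itself, the conditional probabilities are well defined whenever $\pi$ is positive on $E$.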

**MCMC Simulation Algorithm**

- Similar to Section 3.3.1 we construct a Markov chain $\mathbf{X}_0, \mathbf{X}_1, \ldots$ with state space $E$ and an (appropriately chosen) irreducible and aperiodic transition matrix $\mathbf{P}$, such that $\boldsymbol{\pi}$ is the ergodic limit distribution of $\{\mathbf{X}_n\}$.

- Then we generate a "path" $\mathbf{x}_0, \mathbf{x}_1, \ldots$ of the Markov chain by the recursive construction discussed in Section 2.1.3:

  - Pick an initial state $\mathbf{x}_0 \in E$.
  - Given $\mathbf{x}_n = (x_{n,1}, \ldots, x_{n,d})$, pick a component $v \in V$ according to a given probability function $\mathbf{q} = (q_1, \ldots, q_d)$ such that $q_v > 0$ for all $v \in V$.
  - Generate the update $x_{n+1,v}$ of the $v$-th component according to the (conditional) probability function $\pi_v(\,\cdot \mid \mathbf{x}_{n,-v})$.
  - The values of all other components are not changed, i.e. $x_{n+1,w} = x_{n,w}$ for all $w \neq v$.
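The steps above can be sketched for the hard-core model of Section 3.3.1 (the grid graph, its size, and the uniform selection probabilities $q_v = 1/|V|$ are assumptions for illustration). For the hard-core model, the conditional distribution of a vertex puts probability $1/2$ on each of $0$ and $1$ if all its neighbors are vacant, and probability $1$ on $0$ otherwise.

```python
import random

def grid_neighbors(n):
    """Neighbors N(v) of each vertex v = (i, j) in an n x n grid graph."""
    nbrs = {}
    for i in range(n):
        for j in range(n):
            nbrs[(i, j)] = [(i + di, j + dj)
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= i + di < n and 0 <= j + dj < n]
    return nbrs

def gibbs_hard_core(n, steps, seed=0):
    """Random-scan Gibbs sampler for the hard-core model on an n x n grid."""
    rng = random.Random(seed)
    nbrs = grid_neighbors(n)
    vertices = list(nbrs)
    x = {v: 0 for v in vertices}          # admissible initial state x_0
    for _ in range(steps):
        v = rng.choice(vertices)          # pick component v with prob. q_v = 1/|V|
        if any(x[w] == 1 for w in nbrs[v]):
            x[v] = 0                      # conditional probability of 1 is 0
        else:
            x[v] = rng.randint(0, 1)      # conditional probability of 1 is 1/2
    return x

x = gibbs_hard_core(4, 1000)
print(sum(x.values()), "occupied vertices")
```

By construction every visited configuration is admissible: a vertex is only set to $1$ when all of its neighbors are $0$.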

**Theorem 3.11** Let $\{\mathbf{X}_n\}$ be a Markov chain with state space $E$ and the transition matrix $\mathbf{P} = (p(\mathbf{x}, \mathbf{x}'))_{\mathbf{x}, \mathbf{x}' \in E}$ given by

$$p(\mathbf{x}, \mathbf{x}') = \begin{cases} q_v \, \pi_v(x'_v \mid \mathbf{x}_{-v}) & \text{if } \mathbf{x}'_{-v} = \mathbf{x}_{-v} \text{ and } x'_v \neq x_v \text{ for some } v \in V, \\[4pt] \sum_{v \in V} q_v \, \pi_v(x_v \mid \mathbf{x}_{-v}) & \text{if } \mathbf{x}' = \mathbf{x}, \\[4pt] 0 & \text{otherwise}, \end{cases} \qquad (36)$$

where the conditional probabilities $\pi_v(\,\cdot \mid \cdot\,)$ are defined in (35). Then $\mathbf{P}$ is irreducible and aperiodic and the pair $(\mathbf{P}, \boldsymbol{\pi})$ is reversible.

**Proof**
The assertion can be proved similarly to the proof of Theorem 3.10.
- In order to see that $\mathbf{P}$ is aperiodic it suffices to notice that, by (36),

$$p(\mathbf{x}, \mathbf{x}) = \sum_{v \in V} q_v \, \pi_v(x_v \mid \mathbf{x}_{-v}) > 0 \quad \text{for all } \mathbf{x} \in E,$$

- and hence all diagonal elements of $\mathbf{P}$ are positive.

- The following considerations show that $\mathbf{P}$ is irreducible.
- For arbitrary but fixed $\mathbf{x}, \mathbf{x}' \in E$ let $k$ be the number of components $v \in V$ such that $x_v \neq x'_v$.
- For $k = 0$, i.e. $\mathbf{x} = \mathbf{x}'$, we already showed while proving the aperiodicity that $p(\mathbf{x}, \mathbf{x}) > 0$.
- Let now $k \geq 1$. Without loss of generality we may assume that the components are linearly ordered and that the first $k$ components of $\mathbf{x}$ and $\mathbf{x}'$ differ.
- By hypothesis (34) the state space $E$ contains a finite sequence of states $\mathbf{z}_0, \mathbf{z}_1, \ldots, \mathbf{z}_m$ such that $\mathbf{z}_0 = \mathbf{x}$, $\mathbf{z}_m = \mathbf{x}'$, and $\mathbf{z}_{j-1}$ and $\mathbf{z}_j$ differ in at most one component.
- Moreover, for each $j = 1, \ldots, m$ formula (36) yields $p(\mathbf{z}_{j-1}, \mathbf{z}_j) > 0$, and thus $p^{(m)}(\mathbf{x}, \mathbf{x}') \geq p(\mathbf{z}_0, \mathbf{z}_1) \cdots p(\mathbf{z}_{m-1}, \mathbf{z}_m) > 0$.

- It is left to show that the detailed balance equation (2.85) holds, i.e. that

$$\pi(\mathbf{x}) \, p(\mathbf{x}, \mathbf{x}') = \pi(\mathbf{x}') \, p(\mathbf{x}', \mathbf{x}) \quad \text{for all } \mathbf{x}, \mathbf{x}' \in E. \qquad (38)$$

Let $\{\mathbf{X}_n\}$ be a Markov chain with state space $E$ and the transition matrix $\mathbf{P}$ given by (36). As a consequence of Theorem 3.11 we get that in this case

$$\lim_{n \to \infty} \boldsymbol{\alpha}_n = \boldsymbol{\pi}$$

for any initial distribution $\boldsymbol{\alpha}_0$, where $\boldsymbol{\alpha}_n$ denotes the distribution of $\mathbf{X}_n$. Furthermore, the Gibbs sampler satisfies the detailed balance equation (38), which is proved next.
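Both the convergence of the distributions $\boldsymbol{\alpha}_n$ to $\boldsymbol{\pi}$ and the detailed balance property can be checked numerically on a toy example (the two binary components, the values of $\boldsymbol{\pi}$, and the choice $\mathbf{q} = (1/2, 1/2)$ below are assumptions for illustration): build the transition matrix from the conditional probabilities as in (35) and (36) and verify both properties with a few matrix operations.

```python
import numpy as np

states = [(0, 0), (0, 1), (1, 0), (1, 1)]   # state space E = {0,1}^2
pi = np.array([0.4, 0.1, 0.2, 0.3])         # toy probability function
q = [0.5, 0.5]                              # component-selection probabilities

def cond(i, x):
    """pi_i(x_i | x_{-i}) computed from the joint probabilities, cf. (35)."""
    idx = [k for k, y in enumerate(states)
           if all(y[j] == x[j] for j in range(2) if j != i)]
    return pi[states.index(x)] / pi[idx].sum()

# Transition matrix of the random-scan Gibbs sampler, cf. (36):
# a transition x -> y is possible when y agrees with x off some component i.
P = np.zeros((4, 4))
for a, x in enumerate(states):
    for b, y in enumerate(states):
        for i in range(2):
            if all(x[j] == y[j] for j in range(2) if j != i):
                P[a, b] += q[i] * cond(i, y)

assert np.allclose(P.sum(axis=1), 1)                      # P is stochastic
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)  # detailed balance
alpha = np.array([1.0, 0, 0, 0]) @ np.linalg.matrix_power(P, 200)
assert np.allclose(alpha, pi)                             # ergodic limit
```

The symmetry of the matrix with entries $\pi(\mathbf{x})\,p(\mathbf{x},\mathbf{x}')$ is exactly the detailed balance equation, and iterating $\mathbf{P}$ from a point mass reproduces the ergodic limit behavior stated above.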

**Proof**

- For arbitrary $\mathbf{x}, \mathbf{x}' \in E$ and $v \in V$ with $\mathbf{x}_{-v} = \mathbf{x}'_{-v}$, formula (35) implies

$$\pi(\mathbf{x}) \, \pi_v(x'_v \mid \mathbf{x}_{-v}) = \frac{\pi(\mathbf{x}) \, \pi(\mathbf{x}')}{\displaystyle\sum_{x'' :\, (x_1, \ldots, x_{v-1}, x'', x_{v+1}, \ldots, x_d) \in E} \pi(x_1, \ldots, x_{v-1}, x'', x_{v+1}, \ldots, x_d)} = \pi(\mathbf{x}') \, \pi_v(x_v \mid \mathbf{x}'_{-v}).$$

- Using this and the definition (36) of the transition matrix $\mathbf{P}$ we obtain, for $\mathbf{x} \neq \mathbf{x}'$ differing exactly in the component $v$,

$$\pi(\mathbf{x}) \, p(\mathbf{x}, \mathbf{x}') = q_v \, \pi(\mathbf{x}) \, \pi_v(x'_v \mid \mathbf{x}_{-v}) = q_v \, \pi(\mathbf{x}') \, \pi_v(x_v \mid \mathbf{x}'_{-v}) = \pi(\mathbf{x}') \, p(\mathbf{x}', \mathbf{x}),$$

- while for $\mathbf{x} = \mathbf{x}'$ and for states differing in more than one component the detailed balance equation (38) holds trivially. $\Box$
**Remarks**

- A modified version of the Gibbs sampler that was considered in this section is the so-called *cyclic Gibbs sampler*, which uses a different procedure for picking the component that will be updated.
- Namely, it is *not* chosen according to a (given) probability function $\mathbf{q} = (q_1, \ldots, q_d)$ with $q_v > 0$ for all $v \in V$, but the components are sorted linearly and chosen one after another according to this order. The selection of the update candidates thus becomes a deterministic procedure.
- If $V = \{1, \ldots, d\}$ for some integer $d \geq 1$ and $v \in V$, then the matrix $\mathbf{P}_v = (p_v(\mathbf{x}, \mathbf{x}'))_{\mathbf{x}, \mathbf{x}' \in E}$ of the transition probabilities in step $v$ is given as

$$p_v(\mathbf{x}, \mathbf{x}') = \begin{cases} \pi_v(x'_v \mid \mathbf{x}_{-v}) & \text{if } \mathbf{x}'_{-v} = \mathbf{x}_{-v}, \\[4pt] 0 & \text{otherwise}. \end{cases} \qquad (42)$$

- For an entire (scan) cycle, updating each component exactly once, one obtains the following transition matrix:

$$\mathbf{P} = \mathbf{P}_1 \mathbf{P}_2 \cdots \mathbf{P}_d. \qquad (43)$$

- It is easy to show that the matrix $\mathbf{P}$ given by (42) and (43) is irreducible and aperiodic.
- The pair $(\mathbf{P}, \boldsymbol{\pi})$ is in general not reversible. However, in Section 2.3.4 we showed that the pair $(\widetilde{\mathbf{P}}, \boldsymbol{\pi})$ is reversible, where $\widetilde{\mathbf{P}} = \mathbf{P} \mathbf{P}^*$ with

$$p^*(\mathbf{x}, \mathbf{x}') = \frac{\pi(\mathbf{x}') \, p(\mathbf{x}', \mathbf{x})}{\pi(\mathbf{x})}, \qquad \mathbf{x}, \mathbf{x}' \in E, \qquad (44)$$

denotes the multiplicative reversible version of $\mathbf{P}$.

- For the cyclic Gibbs sampler the multiplicative reversible version has the form

$$\widetilde{\mathbf{P}} = \mathbf{P}_1 \cdots \mathbf{P}_d \, \mathbf{P}_d \cdots \mathbf{P}_1,$$

i.e., the multiplicative reversible version of the "forward-scan matrix" $\mathbf{P}_1 \cdots \mathbf{P}_d$ coincides with the "forward-backward scan matrix".

**Proof**

- It suffices to show that $\mathbf{P}_v^* = \mathbf{P}_v$ for every $v \in V$, where $\mathbf{P}_v^*$ is defined by (44), since then $\widetilde{\mathbf{P}} = \mathbf{P}_1 \cdots \mathbf{P}_d \, (\mathbf{P}_1 \cdots \mathbf{P}_d)^* = \mathbf{P}_1 \cdots \mathbf{P}_d \, \mathbf{P}_d^* \cdots \mathbf{P}_1^* = \mathbf{P}_1 \cdots \mathbf{P}_d \, \mathbf{P}_d \cdots \mathbf{P}_1$.
- Formulae (42)-(44) imply for arbitrary $\mathbf{x}, \mathbf{x}' \in E$

$$p_v^*(\mathbf{x}, \mathbf{x}') = \frac{\pi(\mathbf{x}') \, p_v(\mathbf{x}', \mathbf{x})}{\pi(\mathbf{x})} = \begin{cases} \dfrac{\pi(\mathbf{x}') \, \pi_v(x_v \mid \mathbf{x}'_{-v})}{\pi(\mathbf{x})} & \text{if } \mathbf{x}'_{-v} = \mathbf{x}_{-v}, \\[6pt] 0 & \text{otherwise}. \end{cases}$$

- This and (35) yield (similar to the proof of (38)) that $\pi(\mathbf{x}') \, \pi_v(x_v \mid \mathbf{x}'_{-v}) = \pi(\mathbf{x}) \, \pi_v(x'_v \mid \mathbf{x}_{-v})$ whenever $\mathbf{x}'_{-v} = \mathbf{x}_{-v}$, and hence $p_v^*(\mathbf{x}, \mathbf{x}') = p_v(\mathbf{x}, \mathbf{x}')$. $\Box$
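This claim can be checked numerically on a toy example (the two binary components and the values of $\boldsymbol{\pi}$ below are assumptions for illustration): constructing the per-step matrices as in (42), each single step turns out reversible with respect to $\boldsymbol{\pi}$, the forward-scan cycle of (43) is not, but the forward-backward product is.

```python
import numpy as np

states = [(0, 0), (0, 1), (1, 0), (1, 1)]   # state space E = {0,1}^2
pi = np.array([0.4, 0.1, 0.2, 0.3])         # toy probability function

def step_matrix(i):
    """P_i of (42): update component i from its conditional distribution,
    leaving the other component unchanged."""
    P = np.zeros((4, 4))
    for a, x in enumerate(states):
        for b, y in enumerate(states):
            if all(x[j] == y[j] for j in range(2) if j != i):
                idx = [k for k, z in enumerate(states)
                       if all(z[j] == y[j] for j in range(2) if j != i)]
                P[a, b] = pi[b] / pi[idx].sum()
    return P

P1, P2 = step_matrix(0), step_matrix(1)
fwd = P1 @ P2                  # forward-scan cycle, cf. (43)
fb = P1 @ P2 @ P2 @ P1         # forward-backward scan

def reversible(P):
    """Detailed balance: the matrix (pi(x) p(x, x')) is symmetric."""
    return np.allclose(pi[:, None] * P, (pi[:, None] * P).T)

assert np.allclose(pi @ fwd, pi) and np.allclose(pi @ fb, pi)  # pi stationary
assert reversible(P1) and reversible(P2)   # each single step is reversible
assert not reversible(fwd)                 # the forward scan is not
assert reversible(fb)                      # the forward-backward scan is
```

That the forward-backward product is reversible follows exactly as in the proof above: since each $\mathbf{P}_v$ is self-adjoint with respect to $\boldsymbol{\pi}$, the adjoint of $\mathbf{P}_1 \mathbf{P}_2$ is $\mathbf{P}_2 \mathbf{P}_1$.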

**Remarks**

- If Gibbs samplers are used in practice it is always assumed that the conditional probabilities (35) of a component $v$ depend only on the components in a (small) neighborhood $N(v)$ of $v$.
- The family $\{N(v), \ v \in V\}$ of subsets of $V$ is called a *system of neighborhoods* if for arbitrary $v, v' \in V$
  - (a) $v \notin N(v)$,
  - (b) $v' \in N(v)$ implies $v \in N(v')$.
- For the hard-core model from Section 3.3.1, $N(v)$ is the set of those vertices that are directly connected to $v$ by an edge.