        +-                     -+
        | P11  P12  P13 ... P1N |
        | P21  P22  P23 ... P2N |
    P = | ..   ..   ..      ..  |
        | PN1  PN2  PN3 ... PNN |
        +-                     -+
Every row of P sums to 1 (from any state, the chain must move to some state):

    P11 + P12 + P13 + ... + P1N = 1
    P21 + P22 + P23 + ... + P2N = 1
    ..
    PN1 + PN2 + PN3 + ... + PNN = 1
    Prob[ X(t+1) = Cheerful | X(t) = Cheerful ] = P11 = 0.6
    Prob[ X(t+1) = So-so    | X(t) = Cheerful ] = P12 = 0.2
    Prob[ X(t+1) = Sad      | X(t) = Cheerful ] = P13 = 0.2
One-step transition matrix:
        +-             -+
        | 0.6  0.2  0.2 |
    P = | 0.3  0.4  0.3 |
        | 0.0  0.3  0.7 |
        +-             -+
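As a quick sanity check, the row sums can be verified in Maple. This is only a sketch, using the classic linalg syntax that the sessions below also use (with(linalg) loads matrix and evalm):

> with(linalg):
> P := matrix(3,3, [0.6, 0.2, 0.2,
                    0.3, 0.4, 0.3,
                    0.0, 0.3, 0.7]):
> seq( add(P[i,j], j=1..3), i=1..3 );     # each row must sum to 1
                           1.0, 1.0, 1.0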
    P(2)ij = Prob[ X(t+2) = j | X(t) = i ]
    P(2)ij = Pi1 × P1j + Pi2 × P2j + ... + PiN × PNj
Note: this is exactly the formula used to multiply 2 matrices !!!
    P(2) = P × P
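To convince yourself that the summation formula and the matrix product agree, here is a small Maple check (a sketch, reusing the 3-state matrix P defined in the check above); it computes entry (1,1) of P(2) both ways:

> add( P[1,k]*P[k,1], k=1..3 );   # sum formula: P11*P11 + P12*P21 + P13*P31
                             0.42
> P2 := evalm(P &* P):            # full matrix product
> P2[1,1];
                             0.42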
    P(N) = P × P × ... × P    (N factors)
> P := matrix(3,3, [0.6, 0.2, 0.2,
                    0.3, 0.4, 0.3,
                    0.0, 0.3, 0.7]);

P:
> evalm(P);
                [0.6    0.2    0.2]
                [0.3    0.4    0.3]
                [0.     0.3    0.7]

P(2):
> evalm( P &* P );
                [0.42    0.26    0.32]
                [0.30    0.31    0.39]
                [0.09    0.33    0.58]

P(3):
> evalm( P &* P &* P );
                [0.330    0.284    0.386]
                [0.273    0.301    0.426]
                [0.153    0.324    0.523]

P(6):
> evalm( P &* P &* P &* P &* P &* P );
                [0.245490    0.304268    0.450242]
                [0.237441    0.306157    0.456402]
                [0.218961    0.310428    0.470611]

P(10):
> evalm( P &* P &* P &* P &* P &* P &* P &* P &* P &* P );
                [0.2313863532    0.3075488830    0.4610647637]
                [0.2310495033    0.3076271722    0.4613233244]
                [0.2302738212    0.3078074437    0.4619187352]

P(20):
> evalm( P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P );
                [0.2307712768    0.3076918322    0.4615368910]
                [0.2307701600    0.3076920917    0.4615377482]
                [0.2307675883    0.3076926893    0.4615397223]
It converges !!! Notice that all rows of P(n) approach one and the same vector: (0.2308, 0.3077, 0.4615).
Therefore, the probability that the Markov chain is in state 1 after one step is equal to:
    π1(1) = π1(0) P11 + π2(0) P21 + π3(0) P31
Similarly, for all three states:

    π1(1) = π1(0) P11 + π2(0) P21 + π3(0) P31
    π2(1) = π1(0) P12 + π2(0) P22 + π3(0) P32
    π3(1) = π1(0) P13 + π2(0) P23 + π3(0) P33
Notice: this is a vector-matrix multiplication, π(1) = π(0) × P, with π written as a row vector (rather than a matrix-vector multiplication).
2 steps:

    π1(2) = π1(1) P11 + π2(1) P21 + π3(1) P31
    π2(2) = π1(1) P12 + π2(1) P22 + π3(1) P32
    π3(2) = π1(1) P13 + π2(1) P23 + π3(1) P33

Or:

    π(2) = π(1) × P = (π(0) × P) × P = π(0) × (P × P) = π(0) × P(2)
(Maple)

> x := vector([1,0,0]);
> P := matrix(3,3, [0.6, 0.2, 0.2,
                    0.3, 0.4, 0.3,
                    0.0, 0.3, 0.7]);

x(1):
> evalm( x &* P );
        [0.6, 0.2, 0.2]

x(2):
> evalm( x &* P &* P );
        [0.42, 0.26, 0.32]

x(4):
> evalm( x &* P &* P &* P &* P );
        [0.2832, 0.2954, 0.4214]

x(8):
> evalm( x &* P &* P &* P &* P &* P &* P &* P &* P );
        [0.23490798, 0.30673034, 0.45836168]

x(16):
> evalm( x &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P );
        [0.2307951062, 0.3076862941, 0.4615185997]

x(24):
> evalm( x &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P );
        [0.2307693926, 0.3076922701, 0.4615383374]
It converges !!! And it converges to the same vector that the rows of P(n) approached above.
    π(∞) = lim (k → ∞) π(k)
The stationary state is also called the steady state.
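The steady state can also be found by brute-force iteration. The sketch below reuses the 3-state mood matrix P defined earlier (the loop bound 50 is arbitrary, chosen simply to be large enough for convergence) and repeatedly applies π(k+1) = π(k) × P:

> v := vector([1, 0, 0]):      # any initial distribution works
> for k from 1 to 50 do
>     v := evalm(v &* P):      # one step: v(k+1) = v(k) * P
> od:
> evalm(v);
        [0.2307692308, 0.3076923077, 0.4615384615]

These are the decimal expansions of 3/13, 4/13 and 6/13, as the equilibrium-equation computation at the end of this section confirms.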
> P := matrix(4,4, [0.6, 0.2, 0.0, 0.2,
                    0.6, 0.4, 0.0, 0.0,
                    0.0, 0.3, 0.4, 0.3,
                    0.3, 0.0, 0.0, 0.7] );

P(2):
> evalm( P &* P );
        [0.54    0.20    0.      0.26]
        [0.60    0.28    0.      0.12]
        [0.27    0.24    0.16    0.33]
        [0.39    0.06    0.      0.55]

P(4):
> evalm( P &* P &* P &* P );
        [0.5130    0.1796    0.        0.3074]
        [0.5388    0.2056    0.        0.2556]
        [0.4617    0.1794    0.0256    0.3333]
        [0.4611    0.1278    0.        0.4111]

P(8):
> evalm( P &* P &* P &* P &* P &* P &* P &* P );
        [0.50167962    0.16834628    0.            0.32997410]
        [0.50503884    0.17170552    0.            0.32325564]
        [0.49901697    0.16699434    0.00065536    0.33333333]
        [0.49496115    0.16162782    0.            0.34341103]

P(16):
> evalm( P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P &* P );
        [0.5000282111    0.1666948777    0.                 0.3332769112]
        [0.5000846332    0.1667513000    0.                 0.3331640668]
        [0.4999993559    0.1666668815    0.4294967296e-6    0.3333333333]
        [0.4999153666    0.1665820333    0.                 0.3335025999]
Note: the third column of P(n) tends to 0, i.e., the probability of finding this chain in state 3 vanishes as n grows.
States 3 and 5 are transient states: once the Markov chain leaves these states, it never returns to them.
Example:
Consider the Markov chain that cycles deterministically through its three states: 1 → 2 → 3 → 1. The period of this Markov process is 3:

    p(1)11 = 0   (from state 1, in one step, we reach state 2)
    p(2)11 = 0   (from state 1, in two steps, we reach state 3)
    p(3)11 = 1   (from state 1, in three steps, we return to state 1)
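One can check the period in Maple by taking powers of this chain's 0/1 transition matrix (a sketch; C below is the transition matrix of the cycle 1 → 2 → 3 → 1 just described):

> C := matrix(3,3, [0, 1, 0,
                    0, 0, 1,
                    1, 0, 0]):
> evalm( C &* C );           # C^2: still no chance of being back at the start
        [0    0    1]
        [1    0    0]
        [0    1    0]
> evalm( C &* C &* C );      # C^3 = identity: every state returns after 3 steps
        [1    0    0]
        [0    1    0]
        [0    0    1]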
Mathematically speaking, we must find this limit:
    lim (n → ∞) πj(n)
Lemma 1: if a finite Markov chain has no transient states and is aperiodic, then this limit exists, is the same for every initial state, and is the unique steady state.

No proof; but you have seen a few examples above...
One-step probability matrix:
        +-             -+
        | 0.6  0.2  0.2 |
    P = | 0.3  0.4  0.3 |
        | 0.0  0.3  0.7 |
        +-             -+
According to Lemma 1, the Markov chain has a steady state, and the steady state is reached from any initial state.
The steady-state probabilities satisfy:
    π1 = 0.6 π1 + 0.3 π2 + 0.0 π3    .... (1)
    π2 = 0.2 π1 + 0.4 π2 + 0.3 π3    .... (2)
    π3 = 0.2 π1 + 0.3 π2 + 0.7 π3    .... (3)
Drop equation (3) (the three equations are linearly dependent), add the normalization condition π1 + π2 + π3 = 1 as equation (4), and re-write in canonical form:
    -0.4 π1 + 0.3 π2 + 0.0 π3 = 0    .... (1)
     0.2 π1 - 0.6 π2 + 0.3 π3 = 0    .... (2)
         π1 +     π2 +     π3 = 1    .... (4)
Solution using Maple:
> eq1 := -0.4*x1 + 0.3*x2 + 0.0*x3 = 0;
                 eq1 := -0.4 x1 + 0.3 x2 = 0

> eq2 := 0.2*x1 - 0.6*x2 + 0.3*x3 = 0;
                 eq2 := 0.2 x1 - 0.6 x2 + 0.3 x3 = 0

> eq3 := x1 + x2 + x3 = 1;
                 eq3 := x1 + x2 + x3 = 1
Better: use exact rational coefficients, so that Maple works with exact fractions instead of floating-point numbers:
> eq1 := -4/10*x1 + 3/10*x2 = 0;
                          2 x1   3 x2
                 eq1 := - ---- + ---- = 0
                           5      10

> eq2 := 2/10*x1 - 6/10*x2 + 3/10*x3 = 0;
                         x1   3 x2   3 x3
                 eq2 := ---- - ---- + ---- = 0
                         5     5      10

> eq3 := x1 + x2 + x3 = 1;
                 eq3 := x1 + x2 + x3 = 1
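With eq1, eq2 and eq3 entered, Maple's solve command finishes the computation. A minimal sketch (the exact fractions shown are what the rational version of the equations yields):

> solve( {eq1, eq2, eq3}, {x1, x2, x3} );
                 {x1 = 3/13, x2 = 4/13, x3 = 6/13}

So π1 = 3/13 ≈ 0.2308, π2 = 4/13 ≈ 0.3077 and π3 = 6/13 ≈ 0.4615, which matches the limits found numerically above.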
This gives rise to the following equilibrium equation for state 1:

    π1 = 0.6 × π1 + 0.3 × π2 + 0.0 × π3

This gives rise to the following equilibrium equation for state 2:

    π2 = 0.2 × π1 + 0.4 × π2 + 0.3 × π3

This gives rise to the following equilibrium equation for state 3:

    π3 = 0.2 × π1 + 0.3 × π2 + 0.7 × π3
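As a final sanity check (a sketch; pi1, pi2, pi3 are just illustrative names), substitute the solution π = (3/13, 4/13, 6/13) back into these three equilibrium equations:

> pi1 := 3/13:  pi2 := 4/13:  pi3 := 6/13:
> 6/10*pi1 + 3/10*pi2 + 0*pi3;        # should equal pi1
                             3/13
> 2/10*pi1 + 4/10*pi2 + 3/10*pi3;     # should equal pi2
                             4/13
> 2/10*pi1 + 3/10*pi2 + 7/10*pi3;     # should equal pi3
                             6/13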