Note to other teachers and users of these slides. Andrew would be delighted if you found
this source material useful in giving your own lectures. Feel free to use these slides
verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If
you make use of a significant portion of these slides in your own lecture, please include this
message, or the following link to the source repository of Andrew’s tutorials:
http://www.cs.cmu.edu/~awm/tutorials. Comments and corrections gratefully received.
Hidden Markov Models
Andrew W. Moore
Professor
School of Computer Science
Carnegie Mellon University
www.cs.cmu.edu/~awm
awm@cs.cmu.edu
412-268-7599
Copyright © Andrew W. Moore
A Markov System

Has N states, called s_1, s_2, ..., s_N.

There are discrete timesteps, t = 0, t = 1, ...

[Diagram: three states s_1, s_2, s_3; N = 3, t = 0]
On the t'th timestep the system is in exactly one of the available states, the current state. Call it q_t. Note: q_t ∈ {s_1, s_2, ..., s_N}.

[Diagram: at t = 0 the current state is q_t = q_0 = s_3]
Between each timestep, the next state is chosen randomly.

[Diagram: at t = 1 the current state is q_t = q_1 = s_2]
The current state determines the probability distribution for the next state. For the N = 3 example:

P(q_{t+1} = s_1 | q_t = s_1) = 0      P(q_{t+1} = s_2 | q_t = s_1) = 0      P(q_{t+1} = s_3 | q_t = s_1) = 1
P(q_{t+1} = s_1 | q_t = s_2) = 1/2    P(q_{t+1} = s_2 | q_t = s_2) = 1/2    P(q_{t+1} = s_3 | q_t = s_2) = 0
P(q_{t+1} = s_1 | q_t = s_3) = 1/3    P(q_{t+1} = s_2 | q_t = s_3) = 2/3    P(q_{t+1} = s_3 | q_t = s_3) = 0
Often notated with arcs between states:

[Diagram: arcs s_1 -> s_3 labelled 1, s_2 -> s_1 labelled 1/2, s_2 -> s_2 labelled 1/2, s_3 -> s_1 labelled 1/3, s_3 -> s_2 labelled 2/3]
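To make the random-next-state step concrete, here is a minimal Python sketch (not from the slides) that simulates the three-state example; the state names and transition probabilities come from the diagram above, while the function name simulate and the use of random.choices are illustrative choices.

import random

# Transition probabilities from the example: next_state_dist[s_i][s_j] = P(q_{t+1}=s_j | q_t=s_i)
next_state_dist = {
    "s1": {"s1": 0.0, "s2": 0.0, "s3": 1.0},
    "s2": {"s1": 0.5, "s2": 0.5, "s3": 0.0},
    "s3": {"s1": 1/3, "s2": 2/3, "s3": 0.0},
}

def simulate(start_state, num_steps, rng=random):
    """Run the Markov system forward, returning the state sequence q_0, q_1, ..., q_T."""
    sequence = [start_state]
    for _ in range(num_steps):
        dist = next_state_dist[sequence[-1]]
        states, probs = zip(*dist.items())
        sequence.append(rng.choices(states, weights=probs, k=1)[0])
    return sequence

print(simulate("s3", 5))   # e.g. ['s3', 's2', 's1', 's3', 's2', 's2']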
Markov Property

q_{t+1} is conditionally independent of {q_{t-1}, q_{t-2}, ..., q_1, q_0} given q_t.

In other words:

P(q_{t+1} = s_j | q_t = s_i) = P(q_{t+1} = s_j | q_t = s_i, any earlier history)

Question: what would be the best Bayes Net structure to represent the Joint Distribution of (q_0, q_1, q_2, q_3, q_4)?
Answer: a chain q_0 -> q_1 -> q_2 -> q_3 -> q_4, in which each node q_{t+1} has q_t as its only parent.
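Under this chain structure the joint distribution factors as P(q_0, q_1, ..., q_4) = P(q_0) P(q_1 | q_0) P(q_2 | q_1) P(q_3 | q_2) P(q_4 | q_3). Below is a small Python sketch of that factorization, reusing the example's transition probabilities; the uniform initial distribution P(q_0) is an assumption added here for illustration, since the slides do not specify one.

# Markov-property factorization: P(q_0,...,q_T) = P(q_0) * prod_t P(q_{t+1} | q_t).
P_next = {
    "s1": {"s1": 0.0, "s2": 0.0, "s3": 1.0},
    "s2": {"s1": 0.5, "s2": 0.5, "s3": 0.0},
    "s3": {"s1": 1/3, "s2": 2/3, "s3": 0.0},
}
P_initial = {"s1": 1/3, "s2": 1/3, "s3": 1/3}  # assumed uniform, not given on the slides

def sequence_probability(states):
    """Probability of observing the exact state sequence q_0, q_1, ..., q_T."""
    p = P_initial[states[0]]
    for q_t, q_next in zip(states, states[1:]):
        p *= P_next[q_t][q_next]
    return p

print(sequence_probability(["s3", "s2", "s1", "s3"]))  # (1/3) * (2/3) * (1/2) * 1 = 1/9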
Each node q_{t+1} in the chain carries a conditional probability table whose row i lists P(q_{t+1} = s_1 | q_t = s_i), P(q_{t+1} = s_2 | q_t = s_i), ..., P(q_{t+1} = s_j | q_t = s_i), ..., P(q_{t+1} = s_N | q_t = s_i):

    a_11  a_12  ...  a_1j  ...  a_1N
    a_21  a_22  ...  a_2j  ...  a_2N
    a_31  a_32  ...  a_3j  ...  a_3N
     :     :         :          :
    a_i1  a_i2  ...  a_ij  ...  a_iN
     :     :         :          :
    a_N1  a_N2  ...  a_Nj  ...  a_NN

Each of these probability tables is identical.

Notation:  a_ij = P(q_{t+1} = s_j | q_t = s_i)
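Collecting the a_ij entries for the three-state example gives a row-stochastic matrix: each row is a probability distribution over the next state, so it sums to 1. A minimal sketch using the same example probabilities; the variable name A and the 0-indexed layout are illustrative choices.

# Transition matrix for the 3-state example, using the slide's notation
# a_ij = P(q_{t+1} = s_j | q_t = s_i).  A[i-1][j-1] holds a_ij (Python is 0-indexed).
A = [
    [0.0, 0.0, 1.0],   # from s_1: a_11, a_12, a_13
    [0.5, 0.5, 0.0],   # from s_2: a_21, a_22, a_23
    [1/3, 2/3, 0.0],   # from s_3: a_31, a_32, a_33
]

# Every row is a distribution over the next state, so each row sums to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in A)

# Example lookup: a_32 = P(q_{t+1} = s_2 | q_t = s_3)
print(A[2][1])   # 0.666...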
A Blind Robot

A human and a robot wander around randomly on a grid...

[Diagram: grid with R marking the robot's cell and H marking the human's cell]

STATE q = (Location of Robot, Location of Human)

Note: N (number of states) = 18 * 18 = 324
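As a sanity check on that count, the joint state space can be enumerated directly. A minimal Python sketch, assuming the 18 grid cells are simply labelled 0..17 (the slide does not specify a labelling).

from itertools import product

# Hypothetical labelling of the 18 grid cells as 0..17.
cells = range(18)

# A state is the pair (robot location, human location).
states = list(product(cells, cells))

print(len(states))   # 324 = 18 * 18, matching the slide's count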