From the course: Fundamentals of Dynamic Programming
Breaking down state inference into subproblems: The Viterbi algorithm
- [Tutor] If we have a Hidden Markov Model and some observations, then to predict the most probable sequence of hidden states that led to those observations, we can use the Viterbi algorithm. The idea behind the Viterbi algorithm is to calculate the best path that ends in each and every state. Then, in the next iteration, we move ahead and decide which of the paths from the last iteration we want to continue. More formally, let's define a function V of T and S, which is the maximized probability of ending up at state S at time T if we look at just the first T plus one observations. The function doesn't tell us which state we should end up in; it just tells us, for each state, how likely it would be to end up in that given state. So how do we actually define the function? Let's start with the initial step, where T equals zero. There is a probability pi of S of starting at state S, and then a probability B of…
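The recurrence described above can be sketched in Python. This is a minimal illustration, not the course's own implementation; the function name `viterbi` and the parameter names `pi` (initial probabilities), `A` (transition matrix), and `B` (emission matrix) are assumptions chosen for this example.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Return the most probable hidden-state path for a list of observations.

    pi: initial state probabilities, shape (S,)
    A:  transition probabilities, A[i, j] = P(next state j | current state i)
    B:  emission probabilities, B[s, o] = P(observation o | state s)
    """
    S = len(pi)
    T = len(obs)
    V = np.zeros((T, S))                 # V[t, s]: best path probability ending in s at time t
    back = np.zeros((T, S), dtype=int)   # back[t, s]: predecessor state on that best path

    # Base case (t = 0): start in state s, then emit the first observation
    V[0] = pi * B[:, obs[0]]

    # Recurrence: extend the best path that ends in each previous state
    for t in range(1, T):
        for s in range(S):
            scores = V[t - 1] * A[:, s] * B[s, obs[t]]
            back[t, s] = np.argmax(scores)
            V[t, s] = scores[back[t, s]]

    # Backtrack from the most probable final state
    path = [int(np.argmax(V[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a small two-state example (states 0 and 1, three possible observations), the function recovers the familiar dynamic-programming behavior: each column of `V` reuses the best results from the previous column rather than enumerating all state sequences.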
Contents
- What is a Hidden Markov Model? (2m 21s)
- Modeling a Hidden Markov Model in Python (3m 20s)
- Inferring the most probable state sequence (1m 54s)
- Breaking down state inference into subproblems: The Viterbi algorithm (3m 36s)
- Implementing the Viterbi algorithm in Python (4m 26s)
- More applications of Hidden Markov Models (2m 29s)
- Training Hidden Markov Models (2m 28s)