In the last unit, we discussed exact and approximate probabilistic reasoning under the assumption that neither the state nor the environment changes. Agents in uncertain environments, however, must keep track of the current state of the environment and of the uncertainty about how it changes over time. This unit addresses these issues by defining several basic inference tasks, describing the general structure of inference algorithms for temporal models, and presenting three specific kinds of models: hidden Markov models, Kalman filters, and dynamic Bayesian networks. Because real-world applications routinely involve temporal uncertainty in reasoning and decision-making, temporal probabilistic reasoning is useful in many problem domains, including speech recognition, which is also introduced in this unit. Machine learning (see Units 9-11) plays a central role in the construction of these models.
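As a preview of the kind of inference the unit develops, the sketch below shows one step of filtering (tracking the current state from evidence) in a two-state hidden Markov model. The transition and sensor probabilities are illustrative assumptions, not values taken from the course material.

```python
# Minimal forward (filtering) sketch for a two-state HMM.
# State 0 = Rain, state 1 = NoRain; evidence = umbrella observed (True/False).
# All probabilities below are illustrative assumptions.

T = [[0.7, 0.3],   # P(X_t | X_{t-1} = Rain)
     [0.3, 0.7]]   # P(X_t | X_{t-1} = NoRain)
O = {True:  [0.9, 0.2],   # P(umbrella | Rain), P(umbrella | NoRain)
     False: [0.1, 0.8]}

def forward(belief, evidence):
    """One filtering update: predict with T, weight by O, then normalize."""
    predicted = [sum(T[j][i] * belief[j] for j in range(2)) for i in range(2)]
    weighted = [O[evidence][i] * predicted[i] for i in range(2)]
    z = sum(weighted)
    return [w / z for w in weighted]

belief = [0.5, 0.5]                 # uniform prior over the hidden state
for e in [True, True, False]:       # an illustrative evidence sequence
    belief = forward(belief, e)
```

Each update costs a constant amount of work per time step, which is why filtering can run online as evidence arrives; Section 2 develops this algorithm in full.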
This unit is organized into the following sections:
Section 1: Representation and inference in temporal models
Section 2: Hidden Markov models (HMMs)
Section 3: Kalman filters
Section 4: Dynamic Bayesian networks
Section 5: Speech recognition as an application of HMMs
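The models in Sections 2-4 all share the same predict-update cycle over a belief state. As a minimal sketch of that cycle in the continuous case, here is a one-dimensional Kalman filter update; the transition coefficient and noise variances (`a`, `q`, `r`) are illustrative assumptions, not values from the unit.

```python
def kalman_step(mu, var, z, a=1.0, q=0.25, r=1.0):
    """One predict-update cycle for a scalar linear-Gaussian model.

    State model:  x_t = a * x_{t-1} + noise with variance q
    Sensor model: z_t = x_t + noise with variance r
    (a, q, r are illustrative assumptions.)
    """
    # Predict: push the Gaussian belief through the transition model.
    mu_pred = a * mu
    var_pred = a * a * var + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = var_pred / (var_pred + r)
    mu_new = mu_pred + k * (z - mu_pred)
    var_new = (1.0 - k) * var_pred
    return mu_new, var_new

mu, var = 0.0, 1.0  # initial Gaussian belief N(0, 1)
for z in [4.8, 5.1, 5.0, 4.9, 5.2]:   # illustrative noisy measurements
    mu, var = kalman_step(mu, var, z)
```

Note that the belief stays a single Gaussian (a mean and a variance) no matter how many measurements arrive; Section 3 derives this update and its matrix-valued generalization.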
Unit 7: Temporal Probabilistic Reasoning and Dynamic Bayesian Networks

Readings:
- Bishop, C.M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 0-387-31073-8. (Refer primarily to the sections covering graphical models.)
- Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann.
- Explore other applications of HMMs, Kalman filters, and dynamic Bayesian networks in fields that are familiar to you, and report your discoveries and comments in the course conference.
Updated November 17, 2015, by FST Course Production Staff