Section 4: Dynamic Bayesian networks
Commentary
Section Goals
- To discuss dynamic Bayesian networks (DBNs), which include Hidden Markov Models and Kalman filters as special cases.
Learning Objectives
Learning Objective 1
- Describe the structure of dynamic Bayesian networks.
- Explain the differences between DBNs, HMMs, and Kalman filters.
- Give an example of a DBN application, and draw its structure.
- Explain the ideas behind the construction of DBNs.
- Describe both exact and approximate inference in DBNs.
Objective Readings
Required readings:
Dynamic Bayesian Networks (see Section 15.5 of AIMA3ed)
Smyth, P., Heckerman, D., and Jordan, M. I. (1997). Probabilistic independence networks for hidden Markov probability models. Neural Computation, 9(2), 227–269.
Supplemental Readings
Huang, C.-L., Shih, H.-C., and Chao, C.-Y. (2006). Semantic analysis of soccer video using dynamic Bayesian network. IEEE Transactions on Multimedia, 8(4), 749–760. doi:10.1109/TMM.2006.876289
Objective Questions
- Why and how can exact inference in DBNs be performed using the algorithms for inference in Bayesian networks?
- Which algorithm offers a better solution for approximate inference in DBNs, and why?
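As a concrete aid for the first question: exact inference in a DBN can be performed by "unrolling" the network for the observed time steps and running a standard Bayesian-network algorithm such as variable elimination. The sketch below illustrates this on the umbrella world of AIMA Section 15.5 (hidden state Rain_t, evidence Umbrella_t); the forward recursion it implements is exactly what variable elimination computes on the unrolled network. The specific probability values are the textbook's; the function names are illustrative only.

```python
# Umbrella world (AIMA Sec. 15.5): hidden state Rain_t, evidence Umbrella_t.
P_TRANS = {True: 0.7, False: 0.3}   # P(Rain_t = True | Rain_{t-1})
P_OBS   = {True: 0.9, False: 0.2}   # P(Umbrella_t = True | Rain_t)

def forward(evidence, prior=0.5):
    """Exact filtering: P(Rain_t = True | Umbrella_1..t)."""
    belief = prior                   # P(Rain_0 = True)
    for umbrella in evidence:
        # Transition step: sum out the previous state variable.
        predicted = P_TRANS[True] * belief + P_TRANS[False] * (1 - belief)
        # Observation step: condition on the evidence and normalize.
        like_t = P_OBS[True] if umbrella else 1 - P_OBS[True]
        like_f = P_OBS[False] if umbrella else 1 - P_OBS[False]
        num = like_t * predicted
        belief = num / (num + like_f * (1 - predicted))
    return belief

print(round(forward([True, True]), 3))  # ≈ 0.883, matching AIMA's worked example
```

Because the recursion keeps only the current belief state, its cost per time step is constant, which is why exact filtering in a DBN need not re-run inference over the whole unrolled network at every step.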
Objective Activities
- Explore the source code for the particle filtering algorithm, which is downloadable from the textbook's website.
- Complete Exercise 15.17 of AIMA3ed.
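Before exploring the textbook's source code, it may help to see a minimal sketch of particle filtering on the same umbrella world. The three steps (propagate through the transition model, weight by the observation likelihood, resample) follow the algorithm described in AIMA; the probability values are the textbook's, while the function and variable names here are illustrative, not taken from the textbook's code.

```python
import random

# Umbrella world (AIMA Sec. 15.5): hidden state Rain_t, evidence Umbrella_t.
P_TRANS = {True: 0.7, False: 0.3}   # P(Rain_t = True | Rain_{t-1})
P_OBS   = {True: 0.9, False: 0.2}   # P(Umbrella_t = True | Rain_t)

def particle_filter(evidence, n=2000, seed=0):
    """Approximate P(Rain_t = True | Umbrella_1..t) with n particles."""
    rng = random.Random(seed)
    particles = [rng.random() < 0.5 for _ in range(n)]  # sample the prior
    for umbrella in evidence:
        # 1. Propagate each particle through the transition model.
        particles = [rng.random() < P_TRANS[p] for p in particles]
        # 2. Weight each particle by the likelihood of the evidence.
        weights = [P_OBS[p] if umbrella else 1 - P_OBS[p] for p in particles]
        # 3. Resample n particles in proportion to the weights.
        particles = rng.choices(particles, weights=weights, k=n)
    return sum(particles) / n       # fraction of particles with Rain = True

print(particle_filter([True, True]))  # should be close to the exact value 0.883
```

Note that the population of particles is itself the belief state, so memory and per-step cost stay constant over time; the estimate approaches the exact filtering answer as n grows.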