Section 1: Representation and inference in temporal models
Commentary
Section Goals
- To introduce the concepts and principles of probabilistic reasoning about worlds that change over time.
- To discuss the Markov assumption and the structure of Bayesian networks whose states change over time.
- To discuss the four basic inference tasks that are solved using temporal models.
Learning Objectives
Learning Objective 1
- Outline the state and probability representations used in temporal models, in the context of Markov processes.
- Explain in detail each of the four basic tasks: filtering, prediction, smoothing, and most likely explanation.
- Explain the algorithm for each of the four tasks: filtering, prediction, smoothing, and finding the most likely sequence (a minimal filtering sketch follows the concept list below).
- Explain the following concepts or terms:
- Time slice
- Stationary process
- Markov process or Markov chain
- Transition model
- Sensor model or observation model
- Filtering or monitoring
- Prediction
- Smoothing or hindsight
- Most likely explanation
- Viterbi algorithm
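As a concrete starting point for these objectives, below is a minimal sketch of exact filtering (the forward algorithm) for a two-state Markov process, following the recursive update described in Section 15.2 of AIMA3ed. The transition and sensor numbers are the textbook's umbrella-world values; the dictionary-based representation and function names are assumptions of this sketch, not the textbook's code.

```python
# Minimal filtering (forward algorithm) sketch for a discrete Markov process.
# States: "rain"/"dry"; evidence: "umbrella"/"no_umbrella" (AIMA umbrella world).

def normalize(dist):
    """Scale a {state: weight} map so the weights sum to 1."""
    total = sum(dist.values())
    return {state: w / total for state, w in dist.items()}

def forward(prior, transition, sensor, evidence):
    """One step of filtering: P(X_t | e_1:t) from P(X_t-1 | e_1:t-1)."""
    # Prediction: push the previous belief through the transition model.
    predicted = {s1: sum(prior[s0] * transition[s0][s1] for s0 in prior)
                 for s1 in transition}
    # Update: weight each state by the sensor model, then renormalize.
    return normalize({s: sensor[s][evidence] * predicted[s] for s in predicted})

transition = {"rain": {"rain": 0.7, "dry": 0.3},          # P(X_t | X_t-1)
              "dry":  {"rain": 0.3, "dry": 0.7}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},  # P(e_t | X_t)
          "dry":  {"umbrella": 0.2, "no_umbrella": 0.8}}

belief = {"rain": 0.5, "dry": 0.5}        # uniform prior at t = 0
for _ in range(2):                        # two umbrella observations
    belief = forward(belief, transition, sensor, "umbrella")
print(belief)                             # ~{'rain': 0.883, 'dry': 0.117}
```

After two umbrella observations the filtered probability of rain is about 0.883, matching the hand calculation in Section 15.2 of AIMA3ed.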
Objective Readings
Required readings:
Reading topics:
Inference in Temporal Models (see Sections 15.1-15.2 of AIMA3ed)
Objective Questions
- What makes the structure of a Bayesian network different when it is used to represent problems that change over time?
- What methods can you adopt to improve the Bayesian network if the Markov assumption is only approximately true?
Objective Activities
- Compare the conditional-probability representations of the four tasks, and consider how each representation reflects the purpose of its task (see the summary after this list).
- Explore the Viterbi algorithm using Web resources to see how it is used to solve a range of problems (a sketch follows this list).
- Explore the smoothing (forward-backward) algorithm from the textbook's website.
- Complete Exercise 15.2 of AIMA3ed.
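For the first activity above, the four tasks can be compared by the posterior each one computes (notation as in AIMA3ed Chapter 15: X_t is the state at time t, e_1:t the evidence so far):

```latex
\begin{align*}
\text{Filtering:}  &\quad \mathbf{P}(X_t \mid e_{1:t})\\
\text{Prediction:} &\quad \mathbf{P}(X_{t+k} \mid e_{1:t}) \quad \text{for some } k > 0\\
\text{Smoothing:}  &\quad \mathbf{P}(X_k \mid e_{1:t}) \quad \text{for } 0 \le k < t\\
\text{Most likely explanation:} &\quad \operatorname*{arg\,max}_{x_{1:t}} P(x_{1:t} \mid e_{1:t})
\end{align*}
```

All four condition on the same evidence e_1:t; what changes is the query (current state, future state, past state, or the whole state sequence), which is precisely how the representation reflects each task's purpose.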
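For the Viterbi activity, below is a minimal, self-contained sketch using the same hypothetical two-state umbrella models as the filtering example above; the dictionary representation and names are assumptions of this sketch. Instead of summing over paths as filtering does, it keeps, for each state, the single most likely path ending there.

```python
# Minimal Viterbi sketch: most likely state sequence x_1:t given e_1:t.

transition = {"rain": {"rain": 0.7, "dry": 0.3},
              "dry":  {"rain": 0.3, "dry": 0.7}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
          "dry":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def viterbi(prior, transition, sensor, evidence_seq):
    """Return (most likely state sequence, its unnormalized probability)."""
    # best[s]: probability of the most likely path ending in state s;
    # paths[s]: that path itself.
    best = {s: prior[s] * sensor[s][evidence_seq[0]] for s in prior}
    paths = {s: [s] for s in prior}
    for e in evidence_seq[1:]:
        new_best, new_paths = {}, {}
        for s1 in transition:
            # Choose the predecessor that maximizes the path probability into s1.
            s0 = max(best, key=lambda s: best[s] * transition[s][s1])
            new_best[s1] = best[s0] * transition[s0][s1] * sensor[s1][e]
            new_paths[s1] = paths[s0] + [s1]
        best, paths = new_best, new_paths
    last = max(best, key=best.get)
    return paths[last], best[last]

path, prob = viterbi({"rain": 0.5, "dry": 0.5}, transition, sensor,
                     ["umbrella", "umbrella", "no_umbrella",
                      "umbrella", "umbrella"])
print(path)  # ['rain', 'rain', 'dry', 'rain', 'rain']
```

As Section 15.2 of AIMA3ed stresses, the most likely sequence is not, in general, the sequence of individually most likely states, which is why the algorithm must track whole paths rather than reuse filtered marginals.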
Updated November 17, 2015 by FST Course Production Staff