Section 1: Uncertainty, probability, and Bayesian networks
Commentary
Section Goals
- To introduce the concepts and methodologies of uncertainty and probability.
- To introduce knowledge representation for uncertain domains.
- To discuss graphical models, especially Bayesian networks and their semantics.
Section Notes
- Readers with a strong background in probability and Bayesian networks may skip parts of this section.
Learning Objectives
Learning Objective 1
- Outline how to handle uncertainty in AI, and explain how it affects decisions and actions.
- Outline how to make use of conditional independence in Bayesian networks to simplify representation and inference.
- Explain the semantics of Bayesian networks.
- Find and select reading materials relating to a particular study topic; e.g., graphical models for this section.
- Explain the term graphical model, and describe three common types of graphical models: Bayesian networks, Markov random fields (also called Markov networks), and factor graphs.
- Design and implement programs for probabilistic inference by enumeration over a full joint distribution (a sketch appears after this list of objectives).
- Explain the following concepts or terms:
- Uncertainty
- Causal rule
- Degree of belief
- Evidence
- Marginalization or summing out
- Conditioning rule
- Conditional independence
- Naive Bayes
- Graphical model
- Bayesian network
- Markov network
- Factor graph
- Conditional probability table
- Chain rule
- Markov blanket
- Hybrid Bayesian network
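For the inference-by-enumeration objective above, the following is a minimal sketch in Python. It assumes a toy full joint distribution over the three Boolean variables Toothache, Cavity, and Catch, modelled on the dentistry example of Chapter 13 of AIMA3ed; the function name enumerate_joint_ask and the exact probability values are illustrative, not the textbook's own code.

```python
# Full joint distribution over (Toothache, Cavity, Catch), patterned on the
# dentistry example in Chapter 13 of AIMA3ed (values are illustrative).
JOINT = {
    (True,  True,  True ): 0.108, (True,  True,  False): 0.012,
    (False, True,  True ): 0.072, (False, True,  False): 0.008,
    (True,  False, True ): 0.016, (True,  False, False): 0.064,
    (False, False, True ): 0.144, (False, False, False): 0.576,
}
VARS = ("Toothache", "Cavity", "Catch")

def enumerate_joint_ask(query_var, evidence, joint=JOINT, variables=VARS):
    """Return the normalized distribution P(query_var | evidence) by summing
    out all hidden variables from the full joint distribution."""
    q = variables.index(query_var)
    dist = {}
    for value in (True, False):
        total = 0.0
        # Marginalization (summing out): add every joint entry consistent
        # with the evidence and with query_var = value.
        for assignment, p in joint.items():
            if assignment[q] != value:
                continue
            if all(assignment[variables.index(v)] == val for v, val in evidence.items()):
                total += p
        dist[value] = total
    # Conditioning (normalization): divide by P(evidence).
    z = sum(dist.values())
    return {value: p / z for value, p in dist.items()}

if __name__ == "__main__":
    # P(Cavity | toothache): with these numbers, roughly {True: 0.6, False: 0.4}.
    print(enumerate_joint_ask("Cavity", {"Toothache": True}))
```

Note the two steps the code makes explicit: summing out the hidden variables, then normalizing by the probability of the evidence; these correspond directly to the "marginalization" and "conditioning rule" concepts listed above.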
Objective Readings
Required readings:
Reading topics:
Uncertainty (see Chapter 13 of AIMA3ed)
Representing Knowledge in an Uncertain Domain (see Section 14.1 of AIMA3ed)
The Semantics of Bayesian Networks (see Section 14.2 of AIMA3ed)
Efficient Representation of Conditional Distributions (see Section 14.3 of AIMA3ed)
Papers:
Choose three articles about graphical models from the Internet; Google, Wikipedia, and Google Scholar are recommended for this search.
Supplemental Readings
Dechter, R., and Mateescu, R. (2007). AND/OR search spaces for graphical models. Artificial Intelligence, 171(2-3), 73-106.
Raphael, C. (2002). A hybrid graphical model for rhythmic parsing. Artificial Intelligence, 137(1-2), 217-238.
Frey, B. J. (1998). Graphical models for machine learning and digital communication [electronic resource of AU Digital Library]. Cambridge, MA: MIT Press.
Objective Questions
- Why does first-order logic fail to cope with a real-world domain such as medical diagnosis?
- What are the differences among the common types of graphical models?
Objective Activities
- Explore the Internet to find three papers about graphical models that are at your level of understanding, and discuss them on the online course conference.
- From the textbook's website, explore the probabilistic inference algorithm related to this section, which is based on enumerating the entries in a full joint distribution (a companion sketch follows this list of activities).
- Complete Exercise 14.11 of AIMA3ed.
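As a companion to the enumeration activity above, the sketch below illustrates the chain-rule semantics of a Bayesian network (Section 14.2 of AIMA3ed): any entry of the full joint distribution, which enumeration-based inference sums over, can be computed as a product of conditional probability table (CPT) entries. The structure and CPT values follow the burglary/alarm network of Figure 14.2 of AIMA3ed as best I recall; the variable and function names are my own and the numbers should be treated as illustrative.

```python
# Chain-rule semantics of a Bayesian network: a full-joint entry is the
# product of each variable's CPT entry given its parents.
# Network: Burglary, Earthquake -> Alarm -> JohnCalls, MaryCalls.
P_B = {True: 0.001, False: 0.999}                # P(Burglary)
P_E = {True: 0.002, False: 0.998}                # P(Earthquake)
P_A = {                                          # P(Alarm=true | Burglary, Earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}
P_J = {True: 0.90, False: 0.05}                  # P(JohnCalls=true | Alarm)
P_M = {True: 0.70, False: 0.01}                  # P(MaryCalls=true | Alarm)

def joint_entry(b, e, a, j, m):
    """P(b, e, a, j, m) = P(b) P(e) P(a | b, e) P(j | a) P(m | a)."""
    p_a = P_A[(b, e)] if a else 1.0 - P_A[(b, e)]
    p_j = P_J[a] if j else 1.0 - P_J[a]
    p_m = P_M[a] if m else 1.0 - P_M[a]
    return P_B[b] * P_E[e] * p_a * p_j * p_m

if __name__ == "__main__":
    # The alarm sounds and both neighbours call, but there is neither a
    # burglary nor an earthquake; with these numbers, roughly 0.000628.
    print(joint_entry(b=False, e=False, a=True, j=True, m=True))
```

Because every full-joint entry can be recovered this way, the enumeration procedure sketched earlier in this section can, in principle, answer any query against the network, which is why the compact CPT representation loses no information.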