This unit introduces both computational learning theory and statistical learning methods. The basic question computational learning theory seeks to answer is: how can we know that a hypothesis h is close to the target function f when we do not know what f is? Computational learning theory analyses the sample complexity and computational complexity of inductive learning, and so can guide the learning process toward a balance between the number of training examples required and the accuracy achieved. Statistical learning, the main topic of this unit, covers learning methods ranging from the simple calculation of averages to the construction of complex models, such as Bayesian networks and neural networks. This unit discusses several major methods in current use, including Bayesian learning, maximum a posteriori (MAP) and maximum likelihood (ML) learning, naive Bayes learning, the EM algorithm, instance-based learning, and neural network learning.
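The question posed above has a quantitative answer in the PAC (probably approximately correct) framework: for a finite hypothesis space H, any hypothesis consistent with m ≥ (1/ε)(ln|H| + ln(1/δ)) examples has error at most ε with probability at least 1 − δ. As a minimal sketch (the function name and example numbers are illustrative, not from the unit):

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    """Smallest m satisfying the standard PAC bound for a finite
    hypothesis space: m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    With at least this many examples, any consistent hypothesis has
    error at most epsilon with probability at least 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 2**10 hypotheses, 5% error tolerance, 95% confidence
m = pac_sample_bound(2**10, epsilon=0.05, delta=0.05)
```

Note how the bound grows only logarithmically in |H| but linearly in 1/ε, which is why restricting the hypothesis space matters far less than tightening the error tolerance.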
This unit is organized into the following sections:
Section 1: Computational learning theory
Section 2: Bayesian learning and EM algorithm
Section 3: Instance-based learning
Section 4: Neural networks
Section 5: Kernel machines
Unit 10: Computational Learning Theory and Statistical Learning

References:

Bishop, C. M. (2006). Pattern recognition and machine learning. New York: Springer. (Refer primarily to the sections covering graphical models.)
Flach, P. A. (2001). On the state of the art in machine learning: A personal review. Artificial Intelligence, 131(1–2), 199–222.
Kearns, M. J., & Vazirani, U. V. (1994). An introduction to computational learning theory. Cambridge, MA: MIT Press.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan Kaufmann.
Updated November 17, 2015, by FST Course Production Staff