Syllabus

This list is very much in flux; in particular, it is overly ambitious.
We probably will not get to the last couple of topics, so please check back often.
  1. Linear regression
    1. LMS algorithm
    2. Normal equations
      1. Matrix derivatives
      2. Least squares
    3. Probabilistic motivation
    4. Locally weighted linear regression
      1. Nearest neighbors
      2. Overfitting
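As a quick preview of the normal-equations topic above, here is a minimal sketch (NumPy is an assumption; the course may use different tools). It recovers the least-squares parameters by solving the normal equations X^T X θ = X^T y on a tiny noiseless dataset:

```python
import numpy as np

# Design matrix with an intercept column, and a noiseless target
# generated by y = 2 + 3x, so least squares should recover [2, 3].
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 5.0, 8.0, 11.0])

# Normal equations: solve (X^T X) theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
```

Solving the linear system directly is preferable to forming the explicit inverse of X^T X.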
  2. Classification and logistic regression
    1. Sigmoid function and logistic loss
    2. Perceptron
    3. Iteratively reweighted least squares
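A minimal sketch of logistic regression trained by batch gradient ascent on the log-likelihood, one standard fitting approach (NumPy and the toy data are assumptions, not course material):

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes real values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, lr=0.1):
    # One batch gradient-ascent step on the log-likelihood:
    # theta <- theta + lr * X^T (y - h_theta(X))
    return theta + lr * X.T @ (y - sigmoid(X @ theta))

# Toy data: label is 1 exactly when the feature is positive
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

theta = np.zeros(2)
for _ in range(1000):
    theta = gradient_step(theta, X, y)
```

On separable data like this the likelihood has no finite maximizer, so the loop is capped at a fixed number of steps.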
  3. Generalized linear models
    1. Exponential family
      1. Bernoulli
      2. Gaussian
    2. Recap with GLM models
      1. Linear regression
      2. Logistic regression
      3. Softmax regression
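Since softmax regression closes out the GLM recap, a short sketch of the softmax function itself, using the standard max-subtraction trick for numerical stability (NumPy assumed):

```python
import numpy as np

def softmax(z):
    # Subtracting the max before exponentiating avoids overflow
    # without changing the result.
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
```

The output is a probability vector: nonnegative entries summing to one, ordered the same way as the inputs.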
  4. Generative models
    1. Gaussian/Quadratic discriminant analysis
      1. Multivariate Gaussian
      2. Probabilistic model
      3. Comparison with logistic regression
    2. Naive Bayes
      1. Laplace smoothing
  5. Decision Trees
    1. Information gain (entropy)
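For the entropy-based splitting criterion above, a tiny sketch of Shannon entropy, the quantity information gain is computed from (pure Python; the toy splits are illustrative):

```python
import math

def entropy(probs):
    # Shannon entropy in bits; the 0 * log 0 terms are taken as 0.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A 50/50 class split is maximally uncertain (1 bit);
# a pure node has zero entropy.
h_split = entropy([0.5, 0.5])
h_pure = entropy([1.0])
```

Information gain for a candidate split is the parent's entropy minus the weighted average entropy of the children.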
  6. Subspace methods
    1. Principal Component Analysis (PCA)
      1. Singular Value Decomposition (SVD)
    2. Linear Discriminant Analysis (LDA)
    3. Canonical Correlation Analysis (CCA)
    4. Independent Component Analysis (ICA)
  7. Neural nets
    1. Separating hyperplanes
    2. Hidden layer models
    3. Backpropagation
  8. Support vector machines
    1. Functional and geometric margins
    2. Quadratic Program (QP) primal formulation
    3. Lagrange duality
    4. Support vectors
    5. Kernels
    6. Non-separability
    7. Sequential Minimal Optimization (SMO)
      1. Coordinate ascent
      2. SMO
    8. Kernelized subspace methods
  9. Boosting
    1. Exponential loss function
    2. AdaBoost
    3. Viola-Jones face detection
  10. Learning Theory
    1. Bias/Variance
      1. Consistency
    2. Bounds
      1. Union bound
      2. Chernoff bound
    3. Probably Approximately Correct (PAC) models
    4. Loss functions
    5. Empirical versus structural risk
    6. Sample complexity
    7. VC dimension
  11. Regularization and model selection
    1. Cross validation
    2. Feature selection
    3. Bayesian statistics for regularization
      1. Maximum likelihood (ML) versus Maximum a-Posteriori (MAP)
  12. Structured prediction
    1. Multi-class generalization
    2. Viterbi decoding of Markov models
    3. Margin-based training
    4. Conditional Random Fields
  13. Expectation maximization
    1. K-means clustering
    2. Gaussian mixture models
    3. Expected complete log-likelihood
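The clustering topics above can be previewed with a minimal k-means loop, the simplest instance of the assign/re-estimate alternation that EM generalizes (NumPy and the toy blobs are assumptions):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Alternate between assigning each point to its nearest centroid
    # (the E-like step) and recomputing each centroid as the mean of
    # its assigned points (the M-like step).
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # (n, k) matrix of squared distances to each centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two well-separated blobs, around (0, 0) and (10, 10)
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
centroids, labels = kmeans(X, k=2)
```

Gaussian mixture models replace the hard nearest-centroid assignment with soft responsibilities, which is where the expected complete log-likelihood comes in.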