Stanford CS229 Machine Learning by Andrew Ng
Course materials, my Matlab code for the problem sets, and my notes on the video lectures:
https://github.com/Yao-Yao/CS229-Machine-Learning
Contents:
- supervised learning
Lecture 1
application areas, prerequisite knowledge
supervised learning, learning theory, unsupervised learning, reinforcement learning
Lecture 2
linear regression, batch gradient descent, stochastic gradient descent (SGD), normal equations
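As a quick illustration of the Lecture 2 material, a minimal NumPy sketch of batch gradient descent for linear regression, checked against the normal equations. This is not the repo's Matlab code; the function names, learning rate, and toy data are my own choices.

```python
import numpy as np

def batch_gradient_descent(X, y, lr=0.1, n_iters=5000):
    """Minimize J(theta) = (1/2m) * ||X @ theta - y||^2 with full-batch updates."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / m   # gradient over the whole training set
        theta -= lr * grad
    return theta

def normal_equations(X, y):
    """Closed-form least-squares solution theta = (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])   # intercept + one feature
y = X @ np.array([2.0, -3.0]) + 0.1 * rng.normal(size=100)
print(batch_gradient_descent(X, y))   # should roughly match the line below
print(normal_equations(X, y))
```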
Lecture 3
locally weighted regression (LOESS), probabilistic interpretation, logistic regression, perceptron
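A small sketch of locally weighted regression from Lecture 3, fitting a weighted least-squares line around each query point with Gaussian weights; the bandwidth tau and the sine toy data are arbitrary.

```python
import numpy as np

def lwr_predict(X, y, x_query, tau=0.3):
    """Locally weighted linear regression: fit a weighted least-squares line
    around x_query using Gaussian weights of bandwidth tau."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    XtW = X.T * w                                  # X' diag(w)
    theta = np.linalg.solve(XtW @ X, XtW @ y)      # local parameters for this query
    return x_query @ theta

rng = np.random.default_rng(1)
x = np.linspace(0, 3, 60)
X = np.column_stack([np.ones_like(x), x])          # intercept + feature
y = np.sin(2 * x) + 0.1 * rng.normal(size=60)
print(lwr_predict(X, y, np.array([1.0, 1.5])))     # local prediction near x = 1.5
```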
Lecture 4
Newton's method, exponential family (Bernoulli, Gaussian), generalized linear models (GLMs), softmax regression
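One concrete piece of Lecture 4: Newton's method applied to logistic regression (a Bernoulli GLM). A rough sketch with invented coefficients and a tiny ridge term added purely for numerical safety.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_newton(X, y, n_iters=10):
    """Fit logistic regression with Newton's method: theta <- theta - H^{-1} grad."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (h - y) / m
        H = (X.T * (h * (1 - h))) @ X / m           # Hessian X' diag(h(1-h)) X / m
        theta -= np.linalg.solve(H + 1e-9 * np.eye(n), grad)
    return theta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(300), rng.normal(size=(300, 2))])
y = (rng.random(300) < sigmoid(X @ np.array([0.5, 2.0, -1.0]))).astype(float)
print(logistic_newton(X, y))   # should land near [0.5, 2.0, -1.0]
```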
Lecture 5
discriminative vs generative, Gaussian discriminant analysis, naive Bayes, Laplace smoothing
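For Lecture 5, a toy multivariate-Bernoulli naive Bayes classifier with Laplace (add-one) smoothing; the class prior and feature probabilities in the synthetic data are made up.

```python
import numpy as np

def train_nb(X, y):
    """Multivariate Bernoulli naive Bayes with Laplace (add-one) smoothing.
    X is an (m, n) binary matrix, y a {0, 1} label vector."""
    phi_y = y.mean()
    phi1 = (X[y == 1].sum(axis=0) + 1) / ((y == 1).sum() + 2)   # P(x_j=1 | y=1)
    phi0 = (X[y == 0].sum(axis=0) + 1) / ((y == 0).sum() + 2)   # P(x_j=1 | y=0)
    return phi_y, phi0, phi1

def predict_nb(x, phi_y, phi0, phi1):
    """Return P(y=1 | x) for one binary feature vector, computed in log-space."""
    lp1 = np.log(phi_y) + np.sum(x * np.log(phi1) + (1 - x) * np.log(1 - phi1))
    lp0 = np.log(1 - phi_y) + np.sum(x * np.log(phi0) + (1 - x) * np.log(1 - phi0))
    return 1.0 / (1.0 + np.exp(lp0 - lp1))

rng = np.random.default_rng(3)
y = (rng.random(500) < 0.3).astype(int)
X = (rng.random((500, 6)) < np.where(y[:, None] == 1, 0.8, 0.2)).astype(int)
print(predict_nb(np.ones(6), *train_nb(X, y)))   # close to 1 for an all-ones vector
```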
Lecture 6
multinomial event model, nonlinear classifiers, neural networks, support vector machines (SVM), functional margin/geometric margin
Lecture 7
optimal margin classifier, convex optimization, Lagrange multipliers, primal/dual optimization, KKT complementarity conditions, kernels
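The kernel idea from Lecture 7 in a few lines: a Gaussian (RBF) kernel matrix plus a quick check that the Gram matrix is symmetric positive semidefinite, as Mercer's condition requires. The gamma value and random data are arbitrary.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    """Gaussian (RBF) kernel matrix, K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

# A valid (Mercer) kernel yields a symmetric positive semidefinite Gram matrix.
X = np.random.default_rng(4).normal(size=(6, 3))
K = rbf_kernel(X, X)
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > -1e-10)
```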
Lecture 8
Mercer's theorem, L1-norm soft margin SVM, convergence criteria, coordinate ascent, SMO algorithm
- learning theory
Lecture 9
underfitting/overfitting, bias/variance, training error/generalization error, Hoeffding's inequality, central limit theorem (CLT), uniform convergence, sample complexity bound/error bound
Lecture 10
VC dimension, model selection, cross-validation, structural risk minimization (SRM), feature selection, forward search/backward search/filter methods
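A plain k-fold cross-validation loop as a sketch of the Lecture 10 model-selection machinery; the fit/score callables and the least-squares example below them are stand-ins chosen for illustration.

```python
import numpy as np

def k_fold_cv(X, y, fit, score, k=5, seed=0):
    """Split the data into k folds, train on k-1 of them, score on the held-out fold."""
    idx = np.random.default_rng(seed).permutation(X.shape[0])
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        scores.append(score(model, X[test], y[test]))
    return float(np.mean(scores))

# Example: least squares scored by held-out mean squared error.
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(80), rng.normal(size=80)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=80)
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
score = lambda theta, X, y: float(np.mean((X @ theta - y) ** 2))
print(k_fold_cv(X, y, fit, score))
```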
Lecture 11
Frequentist/Bayesian, online learning, SGD, perceptron algorithm, "advice for applying machine learning"
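Lecture 11 uses the perceptron as its running online-learning example; here is a tiny mistake-driven update loop on linearly separable synthetic data (the separating direction is invented).

```python
import numpy as np

def online_perceptron(examples, n_features):
    """Online perceptron: see one (x, y) pair at a time, y in {-1, +1},
    and update the weights only when the current example is misclassified."""
    theta = np.zeros(n_features)
    mistakes = 0
    for x, y in examples:
        if y * (theta @ x) <= 0:       # mistake (or exactly on the boundary)
            theta += y * x             # perceptron update rule
            mistakes += 1
    return theta, mistakes

rng = np.random.default_rng(6)
w_true = np.array([1.0, -2.0, 0.5])            # an arbitrary separating direction
X = rng.normal(size=(300, 3))
y = np.sign(X @ w_true)
print(online_perceptron(zip(X, y), n_features=3))
```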
- unsupervised learning
Lecture 12
k-means algorithm, density estimation, expectation-maximization (EM) algorithm, Jensen's inequality
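For Lecture 12, a compact version of Lloyd's k-means iteration; the random-point initialization and convergence test are simple choices of my own.

```python
import numpy as np

def k_means(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(X.shape[0], size=k, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-3, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
print(k_means(X, k=2)[0])   # two centroids near (-3, -3) and (3, 3)
```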
Lecture 13
coordinate ascent, mixture of Gaussians (MoG), mixture of naive Bayes, factor analysis
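The EM/mixture-of-Gaussians pairing from Lecture 13, reduced to a 1-D, two-component sketch; the initialization scheme and the synthetic mixture are arbitrary.

```python
import numpy as np

def em_mog_1d(x, n_iters=200):
    """EM for a two-component 1-D mixture of Gaussians.
    E-step: responsibilities; M-step: weighted maximum-likelihood updates."""
    mu = np.array([x.min(), x.max()])      # crude initialization from the data range
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iters):
        # E-step: w[i, j] = P(z_i = j | x_i) under the current parameters
        dens = (1 / np.sqrt(2 * np.pi * var)) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        w = pi * dens
        w /= w.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and variances
        nj = w.sum(axis=0)
        pi = nj / len(x)
        mu = (w * x[:, None]).sum(axis=0) / nj
        var = (w * (x[:, None] - mu) ** 2).sum(axis=0) / nj
    return pi, mu, var

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(em_mog_1d(x))   # should recover roughly (0.3, 0.7), (-2, 3), (0.25, 1.0)
```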
Lecture 14
principal component analysis (PCA), compression, eigenfaces
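A short PCA sketch for Lecture 14: center the data, take the top eigenvectors of the empirical covariance, and project; the toy 2-D data and k = 1 are only for illustration.

```python
import numpy as np

def pca(X, k):
    """PCA via the eigendecomposition of the empirical covariance matrix.
    Returns the top-k principal directions and the compressed (projected) data."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / X.shape[0]
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues come back in ascending order
    top = eigvecs[:, ::-1][:, :k]           # top-k directions
    return top, Xc @ top

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 2)) @ np.array([[3.0, 0.0], [1.0, 0.3]])   # elongated cloud
components, Z = pca(X, k=1)
print(components.ravel(), Z.shape)          # dominant direction and compressed data
```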
Lecture 15
latent semantic indexing (LSI), singular value decomposition (SVD), independent component analysis (ICA), "cocktail party"
- reinforcement learning
Lecture 16
Markov decision process (MDP), Bellman's equations, value iteration, policy iteration
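Value iteration for Lecture 16, written with state rewards R(s) as in the Bellman backup; the two-state, two-action MDP, the rewards, and gamma are all invented.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP with P[a, s, s'] = P(s' | s, a) and
    state rewards R[s]: V(s) <- R(s) + gamma * max_a sum_{s'} P(s'|s,a) V(s')."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R[:, None] + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)      # optimal values and a greedy policy
        V = V_new

P = np.array([[[0.8, 0.2], [0.1, 0.9]],     # transitions under action 0
              [[0.5, 0.5], [0.3, 0.7]]])    # transitions under action 1
R = np.array([0.0, 1.0])
print(value_iteration(P, R))
```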
Lecture 17
continuous-state MDPs, inverted pendulum, discretization/curse of dimensionality, model/simulator of an MDP, fitted value iteration
Lecture 18
state-action rewards, finite-horizon MDPs, linear quadratic regulation (LQR), discrete-time Riccati equation, helicopter project
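For Lecture 18, the backward Riccati recursion for finite-horizon discrete-time LQR, tried on a double-integrator toy system; the dynamics, cost matrices, horizon, and the choice of Q as the terminal cost are illustrative assumptions.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, T):
    """Finite-horizon discrete-time LQR via the backward Riccati recursion.
    Dynamics x_{t+1} = A x_t + B u_t, cost sum_t (x_t' Q x_t + u_t' R u_t),
    with Q also used as the terminal cost. Returns gains K_t with u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain at this step
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati update
        gains.append(K)
    return gains[::-1]                                       # ordered from t = 0

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # double integrator: position, velocity
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])
print(finite_horizon_lqr(A, B, Q, R, T=50)[0])   # gain applied at the first time step
```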
Lecture 19
"advice for applying machine learning"-debug RL algorithm, differential dynamic programming(DDP), Kalman filter, linear quadratic Gaussian(LQG), LQG=KF+LQR
Lecture 20
partially observable MDPs (POMDPs), policy search, REINFORCE algorithm, PEGASUS policy search, conclusion