[ Machine Learning - Andrew Ng ] Linear regression with one variable | 2-6 Gradient descent for linear regression


Gradient descent algorithm

repeat until convergence {
\(\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0,\theta_1)\) (for \(j = 0\) and \(j = 1\))
}

Linear Regression Model

\[h_\theta(x) = \theta_0 + \theta_1x \]

\[J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2 \]
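As a quick illustration, here is a minimal Python sketch of the hypothesis and cost function above (the sample data is an assumption for demonstration, not from the lecture):

```python
import numpy as np

def h(theta0, theta1, x):
    """Hypothesis h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

def cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) = (1/2m) * sum((h - y)^2)."""
    m = len(y)
    return np.sum((h(theta0, theta1, x) - y) ** 2) / (2 * m)

# Example usage on made-up data:
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.5, 3.5])
print(cost(0.0, 1.0, x, y))   # J at theta0 = 0, theta1 = 1
```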

Substituting the linear regression model into the gradient descent algorithm:

\[\frac{\partial}{\partial \theta_j}J(\theta_0,\theta_1)=\frac{\partial}{\partial \theta_j}\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2\\=\frac{\partial}{\partial \theta_j}\frac{1}{2m}\sum_{i=1}^m(\theta_0 + \theta_1x^{(i)}-y^{(i)})^2 \]
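Expanding with the chain rule (the factor of 2 from the square cancels the \(\frac{1}{2}\) in the cost):

\[\frac{\partial}{\partial \theta_j}J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^m 2(h_\theta(x^{(i)})-y^{(i)})\cdot\frac{\partial}{\partial \theta_j}h_\theta(x^{(i)})=\frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})\cdot\frac{\partial}{\partial \theta_j}h_\theta(x^{(i)}) \]

Since \(\frac{\partial}{\partial \theta_0}h_\theta(x^{(i)}) = 1\) and \(\frac{\partial}{\partial \theta_1}h_\theta(x^{(i)}) = x^{(i)}\), the two cases are: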

For \(j=0\): \(\frac{\partial}{\partial \theta_0}J(\theta_0,\theta_1) = \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})\)
For \(j=1\): \(\frac{\partial}{\partial \theta_1}J(\theta_0,\theta_1) = \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x^{(i)}\)

So for univariate linear regression, the gradient descent algorithm simplifies to:
repeat until convergence {
\(\theta_0 := \theta_0 - \alpha \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})\)
\(\theta_1 := \theta_1 - \alpha \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x^{(i)}\)
}
(update \(\theta_0\) and \(\theta_1\) simultaneously)
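A minimal NumPy sketch of this update loop (the data, learning rate `alpha`, initial parameters, and iteration count are all assumptions for illustration); note that both gradients are computed before either parameter changes, which is what makes the update simultaneous:

```python
import numpy as np

def gradient_descent(x, y, alpha=0.05, num_iters=5000):
    """Univariate linear regression via batch gradient descent."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0           # initial guesses (assumption)
    for _ in range(num_iters):
        h = theta0 + theta1 * x         # h_theta(x^(i)) for all examples
        # Compute both gradients first, then update: simultaneous update.
        grad0 = (1 / m) * np.sum(h - y)
        grad1 = (1 / m) * np.sum((h - y) * x)
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Example usage on synthetic data roughly following y = 1 + 2x:
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
print(gradient_descent(x, y))   # should approach (1, 2)
```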

For univariate linear regression, the squared-error cost function is a convex function (bowl-shaped), so it has no local optima: the only optimum is the global one, and gradient descent always converges to it (given a suitable learning rate).

"Batch" Gradient Descent

"Batch": Each step of gradient descent uses all the training examples:

\[\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)}) \]
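To make the "batch" point concrete, here is a sketch of a single step's gradient terms, computed over the entire training set at once (the data and parameter values are illustrative assumptions):

```python
import numpy as np

# Illustrative data and parameters (assumptions, not from the lecture).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
theta0, theta1 = 0.0, 0.0

# One "batch" step: the sum runs over all m training examples.
errors = (theta0 + theta1 * x) - y    # h_theta(x^(i)) - y^(i) for every i
grad0 = errors.mean()                 # (1/m) * sum of the errors
grad1 = (errors * x).mean()           # (1/m) * sum of errors * x^(i)
```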