[ Machine Learning - Andrew Ng ] Linear regression with one variable | 2-2 Cost function


Training set of housing prices (Portland, OR)

| Size in \(feet^2\) (x) | Price ($) in 1000's (y) |
| --- | --- |
| 2104 | 460 |
| 1416 | 232 |
| 1534 | 315 |
| 852 | 178 |

Hypothesis: \(h_\theta(x) = \theta_0 + \theta_1x\)

\(\theta_i\)'s: Parameters

How do we choose the \(\theta_i\)'s?
Idea: Choose \(\theta_0, \theta_1\) so that \(h_\theta(x)\) is close to \(y\) for our training examples \((x, y)\).

Cost Function

\[J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 \]

m: number of training examples
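As a sketch, the cost function above can be computed directly over the four training examples from the table. This is a minimal NumPy illustration; the particular \(\theta\) values passed in are arbitrary, not the optimum:

```python
import numpy as np

# Training set from the table: size in feet^2 (x), price in $1000's (y)
x = np.array([2104, 1416, 1534, 852])
y = np.array([460, 232, 315, 178])

def compute_cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) = (1/2m) * sum((h(x) - y)^2)."""
    m = len(x)
    h = theta0 + theta1 * x  # hypothesis h_theta(x) = theta0 + theta1 * x
    return np.sum((h - y) ** 2) / (2 * m)

# Example: evaluate J at an arbitrary parameter choice
print(compute_cost(0.0, 0.2, x, y))
```

Trying several \((\theta_0, \theta_1)\) pairs and comparing the resulting costs is exactly the intuition behind the "choose \(\theta_0, \theta_1\) to minimize J" goal below.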

Summarize

Hypothesis:

\[h_\theta(x) = \theta_0 + \theta_1x \]

Parameters: \(\theta_0, \theta_1\)
Cost Function: squared error function

\[J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 \]

Goal: \(\underset{\theta_0,\theta_1}{\text{minimize}}\; J(\theta_0,\theta_1)\)

Simplified:

Hypothesis:

\[h_\theta(x) = \theta_1x \]

Parameters: \(\theta_1\)
Cost Function:

\[J(\theta_1) = \frac{1}{2m}\sum_{i=1}^m\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 \]

Goal: \(\underset{\theta_1}{\text{minimize}}\; J(\theta_1)\)
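The point of the simplified case is that \(J(\theta_1)\) is a function of a single parameter, so it can be plotted or swept directly. A small sketch on a hypothetical toy dataset (not from the lecture) where the data lie exactly on the line \(y = x\), so the minimum of \(J(\theta_1)\) should sit at \(\theta_1 = 1\):

```python
import numpy as np

# Hypothetical toy training set lying exactly on y = x,
# so the simplified cost J(theta1) is minimized at theta1 = 1.
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

def J(theta1):
    """Simplified cost with theta0 fixed at 0: (1/2m) * sum((theta1*x - y)^2)."""
    m = len(x)
    return np.sum((theta1 * x - y) ** 2) / (2 * m)

# Sweep theta1 over a grid and pick the value with the smallest cost
thetas = np.linspace(-0.5, 2.5, 301)
costs = np.array([J(t) for t in thetas])
best = thetas[np.argmin(costs)]
print(best)
```

The sweep recovers \(\theta_1 \approx 1\), where the cost is zero because every prediction matches its label; with real data like the housing prices above, the minimum cost is generally nonzero.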