CS 445 Machine Learning
Fall 2020
Linear Regression
Linear regression builds a model to predict an output variable (y) from a set of input variables (x). When only a single input variable is used, this is known as simple linear regression (and this is what we will implement in this lab). Our model will use the following equation:
y = w_0 + w_1 x_1
Our algorithm needs to find the values for w_0 and w_1 that minimize the loss function. In this case, the loss function is:
L = \sum_{i} (w_0 + w_1 x_{i,1} - y_i)^2
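For reference, taking the partial derivatives of L with respect to w_0 and w_1 gives the gradient descent update sketched below (based on the loss above; your implementation may fold the factor of 2 into the learning rate α):

\frac{\partial L}{\partial w_0} = 2 \sum_{i} (w_0 + w_1 x_{i,1} - y_i), \qquad
\frac{\partial L}{\partial w_1} = 2 \sum_{i} (w_0 + w_1 x_{i,1} - y_i)\, x_{i,1}

w_0 \leftarrow w_0 - \alpha \frac{\partial L}{\partial w_0}, \qquad
w_1 \leftarrow w_1 - \alpha \frac{\partial L}{\partial w_1}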
Tasks
- Download the following files:
- Complete the implementation of gradient descent in linear_reg.py.
The modifications you will need to make include:
- Implement the update to w using vectorized operations.
- Your loop should accept a maximum number of epochs, but it should also test for convergence and exit early when convergence is detected.
- Detect when the learning rate, α, is too large and is causing the loss function to increase instead of decrease. Your method should dynamically reduce the learning rate when divergence is detected. (A sketch covering these three items appears after this list.)
- Your code must produce two kinds of plots. The first shows the line (the model) at each iteration of the loop. Template code is provided for producing these files. There is also a file that can "stitch" these images together into a movie of how the model changes per iteration. Here is an example movie.
- Produce a plot showing iterations/epochs on the x-axis and the loss function on the y-axis (see the plotting sketch after this list).
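The sketch below illustrates the three loop-related items above in one place: a vectorized weight update, an early exit on convergence, and halving of the learning rate when the loss increases. The names (gradient_descent, max_epochs, tol) and the halving factor are assumptions for illustration only and are not taken from linear_reg.py; adapt the ideas to the provided template.

```python
# Minimal sketch of the gradient descent loop described above.
# Names such as gradient_descent, max_epochs, and tol are illustrative
# and do not come from linear_reg.py.
import numpy as np

def gradient_descent(X, y, alpha=0.01, max_epochs=1000, tol=1e-8):
    """Fit y = w0 + w1*x1 by gradient descent on the squared-error loss."""
    n = X.shape[0]
    # Prepend a column of ones so w[0] acts as the intercept (w0).
    A = np.hstack([np.ones((n, 1)), X.reshape(n, 1)])
    w = np.zeros(A.shape[1])
    losses = []
    prev_loss = np.inf

    for epoch in range(max_epochs):
        residuals = A @ w - y              # (w0 + w1*x_i1 - y_i) for every sample i
        loss = np.sum(residuals ** 2)
        losses.append(loss)

        # Divergence check: if the loss increased, alpha is too large, so shrink it.
        if loss > prev_loss:
            alpha *= 0.5

        # Convergence check: stop once the loss has essentially stopped changing.
        if abs(prev_loss - loss) < tol:
            break
        prev_loss = loss

        # Vectorized gradient and weight update.
        grad = 2 * A.T @ residuals
        w = w - alpha * grad

    return w, losses
```

The returned losses list records the loss at each epoch, which is what the loss plot described above needs.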
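For the loss plot, a minimal matplotlib sketch is shown below, assuming losses is the per-epoch loss list returned by the loop above. The output filename is illustrative; the lab's template code may already handle saving figures.

```python
import matplotlib.pyplot as plt

# Plot the loss recorded at each epoch by the gradient descent loop above.
plt.plot(range(len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Loss vs. epoch")
plt.savefig("loss_vs_epoch.png")  # filename is illustrative
```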