This notebook performs cubic polynomial regression using gradient descent to approximate a sine function with a cubic polynomial model:

$$\hat{y} = a + bx + cx^2 + dx^3$$

where $a$, $b$, $c$, $d$ are the parameters to be learned. The goal is to minimize the mean squared error between the model's predictions and the true sine function. The model is trained using gradient descent, which iteratively updates the parameters in the direction of the negative gradient of the loss function.
This kind of work is typically done in Python with libraries like TensorFlow or PyTorch, but here we implement it in JavaScript as an educational exercise.
In the code:
const x = Array.from({ length: 1000 }, (_, i) => (i / 999) * 2 - 1);
const y_true = x.map(value => Math.sin(value * Math.PI));
We define the input data $x$ as 1000 evenly spaced points over $[-1, 1]$. The target output is generated by:

$$y = \sin(\pi x)$$

This is the function we wish to approximate.
We define a model function as a cubic polynomial:

$$\hat{y}(x) = a + bx + cx^2 + dx^3$$

where $a$, $b$, $c$, $d$ are the parameters of the model to be learned via gradient descent.
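In code, the same model can be written as a one-line helper (a hypothetical `predict` function for illustration; the notebook inlines this expression inside `map` calls instead):

// Hypothetical helper: evaluate the cubic model at a single input xi.
const predict = (xi, a, b, c, d) => a + b * xi + c * xi ** 2 + d * xi ** 3;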
We define the loss function $L$ as the mean squared error between the model's prediction $\hat{y}_i$ and the true output $y_i$:

$$L = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2$$

Expanding the prediction:

$$L = \frac{1}{n} \sum_{i=1}^{n} \left(a + bx_i + cx_i^2 + dx_i^3 - y_i\right)^2$$

This measures the mean squared deviation between the polynomial model and the sine curve over all inputs.
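Although the notebook never evaluates $L$ directly, it can be computed from the arrays above (a minimal sketch, assuming y_pred has been computed as shown later):

// Sketch: mean squared error between prediction and target.
const loss = y_pred.reduce((acc, yp, i) => acc + (yp - y_true[i]) ** 2, 0) / y_pred.length;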
Let:

$$f(u) = u^2, \qquad u_i = \hat{y}_i - y_i$$

Then the derivative of $f(u_i)$ with respect to any parameter $\theta$ is:

$$\frac{\partial}{\partial \theta} f(u_i) = f'(u_i) \cdot \frac{\partial u_i}{\partial \theta} = 2u_i \cdot \frac{\partial u_i}{\partial \theta}$$

That is, differentiate the outer function $f$ evaluated at $u_i$, then multiply by the derivative of the inner function $u_i$.
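As a concrete instance, applying this to the parameter $b$: the inner function is $u_i = a + bx_i + cx_i^2 + dx_i^3 - y_i$, so $\partial u_i / \partial b = x_i$, and therefore

$$\frac{\partial L}{\partial b} = \frac{1}{n} \sum_{i=1}^{n} 2u_i \cdot x_i = \frac{2}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)\, x_i$$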
To minimize the loss, we compute the partial derivatives of $L$ with respect to each parameter. Let the error term be:

$$e_i = \hat{y}_i - y_i$$

Then, using the chain rule, the gradients are:

$$\frac{\partial L}{\partial a} = \frac{2}{n} \sum_i e_i, \qquad \frac{\partial L}{\partial b} = \frac{2}{n} \sum_i e_i x_i, \qquad \frac{\partial L}{\partial c} = \frac{2}{n} \sum_i e_i x_i^2, \qquad \frac{\partial L}{\partial d} = \frac{2}{n} \sum_i e_i x_i^3$$

(The constant factor of 2 is dropped in the code below; it is simply absorbed into the learning rate.)
These are implemented in the code as:
const a_grad = error.reduce((acc, e) => acc + e, 0) / n;
const b_grad = error.reduce((acc, e, i) => acc + e * x[i], 0) / n;
const c_grad = error.reduce((acc, e, i) => acc + e * x[i] ** 2, 0) / n;
const d_grad = error.reduce((acc, e, i) => acc + e * x[i] ** 3, 0) / n;
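As a design note, the four reduce calls each traverse the data separately; an equivalent single-pass variant (a sketch, not from the original notebook) would accumulate all four sums in one loop:

// Sketch: accumulate all four gradient sums in a single pass.
let ga = 0, gb = 0, gc = 0, gd = 0;
for (let i = 0; i < n; i++) {
  const e = error[i];
  ga += e;
  gb += e * x[i];
  gc += e * x[i] ** 2;
  gd += e * x[i] ** 3;
}
const a_grad = ga / n, b_grad = gb / n, c_grad = gc / n, d_grad = gd / n;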
We then update the parameters using gradient descent:

$$\theta \leftarrow \theta - \eta \frac{\partial L}{\partial \theta}$$

for $\theta \in \{a, b, c, d\}$, where $\eta$ is the learning rate.
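In the code, this corresponds to:

a -= lr * a_grad;
b -= lr * b_grad;
c -= lr * c_grad;
d -= lr * d_grad;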
This update is repeated for 2000 iterations:
for (let t = 0; t < 2000; t++) { ... }
After training, we compute the final model output:
const y_pred = x.map(xi => a + b * xi + c * xi ** 2 + d * xi ** 3);
This gives us the fitted curve $\hat{y}(x)$ evaluated at every input point. Finally, both $y$ (the true sine) and $\hat{y}$ (the cubic fit) are plotted:
Plot.line(plotData, { x: 'x', y: 'y', stroke: 'line' })
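The plotData array is not shown in the excerpt above; one plausible construction (an assumption, since the notebook does not include it) tags each point with the series it belongs to:

// Assumed shape of plotData: one row per point, tagged by series via `line`.
const plotData = [
  ...x.map((xi, i) => ({ x: xi, y: y_true[i], line: 'sin(πx)' })),
  ...x.map((xi, i) => ({ x: xi, y: y_pred[i], line: 'cubic fit' })),
];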
The resulting graph shows the true sine curve and the learned cubic polynomial overlaid. This visually demonstrates how well the cubic model fits the sine curve using gradient-based optimization.
const x = Array.from({ length: 1000 }, (_, i) => (i / 999) * 2 - 1); // 1000 evenly spaced points in [-1, 1]
const lr = 0.1; // learning rate
const y_true = x.map(value => Math.sin(value * Math.PI)); // target: sin(πx), one full period over [-1, 1]

// Initialize the four parameters randomly.
let a = Math.random();
let b = Math.random();
let c = Math.random();
let d = Math.random();

for (let t = 0; t < 2000; t++) {
  // Forward pass: evaluate the cubic model at every input point.
  const y_pred = x.map(xi => a + b * xi + c * xi ** 2 + d * xi ** 3);
  const error = y_pred.map((yp, i) => yp - y_true[i]);
  const n = x.length;

  // Gradients of the mean squared error with respect to each parameter
  // (the constant factor of 2 is absorbed into the learning rate).
  const a_grad = error.reduce((acc, e) => acc + e, 0) / n;
  const b_grad = error.reduce((acc, e, i) => acc + e * x[i], 0) / n;
  const c_grad = error.reduce((acc, e, i) => acc + e * x[i] ** 2, 0) / n;
  const d_grad = error.reduce((acc, e, i) => acc + e * x[i] ** 3, 0) / n;

  // Gradient descent step.
  a -= lr * a_grad;
  b -= lr * b_grad;
  c -= lr * c_grad;
  d -= lr * d_grad;
}

// Final model output after training.
const y_pred = x.map(xi => a + b * xi + c * xi ** 2 + d * xi ** 3);