This project demonstrates a complete implementation of Linear Regression from scratch using only Python and NumPy.
It covers fundamental concepts of Machine Learning, including cost function computation, gradient derivation, and gradient descent optimization.
Linear Regression is one of the simplest yet most powerful algorithms in Machine Learning.
In this project, we:
- Implement the cost function:

  J(w, b) = (1 / 2m) * Σ (f_wb(xⁱ) - yⁱ)²

- Compute the gradients for parameters w and b:

  ∂J/∂w = (1/m) * Σ (f_wb(xⁱ) - yⁱ) xⁱ
  ∂J/∂b = (1/m) * Σ (f_wb(xⁱ) - yⁱ)

- Use gradient descent to update the parameters iteratively (see the NumPy sketch after this list):

  w := w - α ∂J/∂w
  b := b - α ∂J/∂b
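A minimal NumPy sketch of these three pieces is shown below. The function names (`compute_cost`, `compute_gradient`, `gradient_descent`) are illustrative assumptions, not necessarily the ones used in the notebook:

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Squared-error cost J(w, b) = (1 / 2m) * sum((f_wb(x) - y)^2)."""
    m = x.shape[0]
    f_wb = w * x + b                       # model prediction f_wb(x) = w*x + b
    return np.sum((f_wb - y) ** 2) / (2 * m)

def compute_gradient(x, y, w, b):
    """Partial derivatives of J with respect to w and b."""
    m = x.shape[0]
    err = (w * x + b) - y                  # per-example prediction error
    dj_dw = np.dot(err, x) / m             # dJ/dw = (1/m) * sum(err * x)
    dj_db = np.sum(err) / m                # dJ/db = (1/m) * sum(err)
    return dj_dw, dj_db

def gradient_descent(x, y, w, b, alpha, num_iters):
    """Run num_iters steps of batch gradient descent with learning rate alpha."""
    for _ in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        w = w - alpha * dj_dw              # w := w - alpha * dJ/dw
        b = b - alpha * dj_db              # b := b - alpha * dJ/db
    return w, b
```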
Key features:

- Cost and gradients computed manually, without ML libraries
- A from-scratch gradient descent training loop
- Visualization of the training data and the fitted line (sketched after this list)
- Comparison of predicted vs. actual values
- Modular code structure with comments
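One possible way to produce that visualization, using the `gradient_descent` sketch from earlier; the synthetic data (y = 2x) and variable names are assumptions for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative training data (assumption): noise-free points on y = 2x.
x_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_train = 2.0 * x_train

# Fit starting from w = 0, b = 0.
w, b = gradient_descent(x_train, y_train, w=0.0, b=0.0, alpha=0.01, num_iters=10_000)

plt.scatter(x_train, y_train, label="Training data")
plt.plot(x_train, w * x_train + b, color="red", label="Fitted line")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```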
Typical results after training:

| Parameter | Value |
|---|---|
| Initial w | 0 |
| Initial b | 0 |
| Final w | ~2.0 |
| Final b | ~0.0 |
| Final Cost | ≈ 0.0 |
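These values are consistent with noise-free synthetic data such as y = 2x (an assumption about the dataset; the notebook may use different inputs). Reusing the sketches above to check:

```python
w, b = gradient_descent(x_train, y_train, w=0.0, b=0.0, alpha=0.01, num_iters=10_000)
print(f"w = {w:.4f}, b = {b:.4f}")                           # expected: w ≈ 2.0, b ≈ 0.0
print(f"cost = {compute_cost(x_train, y_train, w, b):.6f}")  # expected: ≈ 0.0
```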
Built with:

- Python 3
- NumPy
- Matplotlib
- Jupyter Notebook
Learning goals:

- Understand linear regression fundamentals
- Derive and compute the gradients from scratch
- Implement optimization using gradient descent
- Strengthen Python and NumPy vectorization skills (see the loop vs. vectorized comparison below)
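To illustrate the last point, both functions below compute ∂J/∂w, first with an explicit loop and then as a single vectorized NumPy expression (a sketch under the same assumptions as above):

```python
import numpy as np

def gradient_w_loop(x, y, w, b):
    """dJ/dw accumulated with an explicit Python loop."""
    m = x.shape[0]
    total = 0.0
    for i in range(m):
        total += ((w * x[i] + b) - y[i]) * x[i]
    return total / m

def gradient_w_vectorized(x, y, w, b):
    """The same quantity as one vectorized NumPy expression."""
    return np.dot((w * x + b) - y, x) / x.shape[0]
```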
Project structure:

```
linear-regression-from-scratch/
│
├── Linear_Regression_From_Scratch.ipynb   # Main Jupyter Notebook
└── README.md                              # Project documentation
```
Edwin Kofi Afful
Computer Science & Engineering Student
Passionate about AI, Quantum Computing, and Machine Learning

Inspired by Andrew Ng's Machine Learning Specialization and DeepLearning.AI resources.
Built as part of foundational practice for understanding gradient-based optimization.