# ML Paper Reimplementations: A Journey Through Foundational Concepts
This repository documents my self-study project focused on building an understanding of fundamental Machine Learning (ML), Deep Learning (DL), and Reinforcement Learning (RL) concepts by reading and reimplementing seminal research papers.
## Current Progress & Syllabus
Status Legend: ✅ Done, ⏳ In Progress
This project is a work in progress. Feedback, suggestions, and discussions are welcome!
To save time, some papers will only be read rather than reimplemented; these are noted in the Implementation Goal column.
## Phase 1 - Core Learning Mechanics

| # | Paper (Year) | Implementation Goal | Status |
|---|---|---|---|
| 1 | Back‑propagation – Rumelhart 1986 | Build a 2‑layer NumPy MLP (sigmoid/tanh) trained with plain SGD | ✅ |
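As a rough illustration of the Phase 1 goal, here is a minimal sketch of a 2-layer NumPy MLP trained with plain SGD via hand-derived back-propagation. The XOR task, layer sizes, and hyper-parameters are illustrative choices, not taken from the paper or this repo's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic toy task a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layer sizes (assumed for the sketch): 2 inputs -> 4 hidden -> 1 output.
W1 = rng.normal(0, 0.5, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass for binary cross-entropy loss.
    dlogits = (p - y) / len(X)        # dL/d(output pre-activation)
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Plain SGD update (full batch here for simplicity).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # should recover XOR: [0 1 1 0]
```

Full-batch gradient descent is used here only to keep the sketch short; the per-example (stochastic) variant just loops over single rows of `X`.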
## Phase 2 - LeNet + Training Fixes and Optimizations

| # | Paper (Year) | Implementation Goal | Status |
|---|---|---|---|
| 2 | LeNet‑5 – LeCun 1998 | First CNN on MNIST using sigmoid/tanh; experience the slow training firsthand | ✅ |
| 3 | Weight Decay – Krogh 1991 | Add an L2 regularisation switch and observe the reduction in over‑fitting | ✅ |
| 4 | ReLU – Glorot 2011 | Swap the activation to ReLU and compare learning speed | |
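The two training tweaks in rows 3 and 4 can be sketched in a few lines. This is an illustrative sketch, not the repo's actual code: the function names and hyper-parameters are assumptions.

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, weight_decay=0.0):
    """SGD update with an optional L2 (weight decay) switch.

    With penalty 0.5 * weight_decay * ||w||^2 added to the loss,
    the gradient gains a weight_decay * w term, shrinking weights
    toward zero on every step.
    """
    return w - lr * (grad + weight_decay * w)

def relu(z):
    # max(0, z): cheap to compute and non-saturating for z > 0.
    return np.maximum(0.0, z)

def relu_grad(z):
    # Derivative is 1 where z > 0, else 0 -- unlike tanh, whose
    # gradient (1 - tanh^2) vanishes for large |z|.
    return (z > 0).astype(float)

# Zero gradient isolates the decay effect on the weights.
w = np.array([1.0, -2.0])
g = np.zeros(2)
w_plain = sgd_step(w, g)                    # unchanged: [1.0, -2.0]
w_decay = sgd_step(w, g, weight_decay=0.1)  # shrunk by 1%: [0.99, -1.98]
print(w_plain, w_decay)
```

Keeping regularisation as a flag on the update rule, rather than baked into the loss code, makes the with/without over-fitting comparison in row 3 a one-line change.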