Neural Network Learning Hub!
AI Learning Roadmap
Section 1: Foundations (Math, Regression, Classification, XOR)
These five projects build the mental model of linear algebra → optimization → nonlinearity. Each one closes with a short illustrative code sketch.
Project 1: Single‑Feature Linear Regression
- MSE, gradients, the update step
- r, R², OLS
- Geometry of projection
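To make Project 1 concrete, here is a minimal sketch of single‑feature linear regression trained by gradient descent on MSE; the toy data and learning rate below are made up for illustration.

```python
import numpy as np

# toy data: y ≈ 2x + 1 with a little noise (illustrative only)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

w, b = 0.0, 0.0   # slope and intercept
lr = 0.01         # learning rate (assumed)

for _ in range(2000):
    y_hat = w * x + b                       # forward pass
    grad_w = 2 * np.mean((y_hat - y) * x)   # dMSE/dw
    grad_b = 2 * np.mean(y_hat - y)         # dMSE/db
    w -= lr * grad_w                        # gradient descent update
    b -= lr * grad_b

print(w, b)  # should approach the OLS solution for this data
```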
Project 2: Multi‑Feature Regression
- Vectorized dot products
- Weight vector as a direction in feature space
- Plane instead of line
- Residual vectors
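A rough sketch of the vectorized version Project 2 works toward: the prediction becomes a dot product of a weight vector with each feature row, and the residual vector drives the update (synthetic data, placeholder hyperparameters).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))    # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.3             # synthetic targets on a plane

w = np.zeros(3)
b = 0.0
lr = 0.05

for _ in range(500):
    y_hat = X @ w + b            # one dot product per sample row
    residual = y_hat - y         # residual vector
    w -= lr * (2 / len(X)) * X.T @ residual
    b -= lr * 2 * residual.mean()
```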
Project 3: Binary Classification
- Logistic regression
- Sigmoid
- Cross‑entropy
- Decision boundary geometry
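The core of Project 3 in a few lines: a sigmoid turns a linear score into a probability, and the gradient of binary cross‑entropy takes a very simple form (the data and learning rate are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    p = sigmoid(X @ w + b)                  # predicted probabilities
    # gradient of mean binary cross-entropy w.r.t. w and b
    w -= lr * X.T @ (p - y) / len(X)
    b -= lr * np.mean(p - y)

# the decision boundary is the line where X @ w + b == 0
```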
Project 4: Neural Network for XOR
- Hidden layer
- Activation functions
- Why linear models fail
- Geometry of separating non‑linearly separable data
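One way a tiny two‑layer network can fit XOR from scratch, to show what the hidden layer buys you; the layer width, initialization, and learning rate here are arbitrary choices, not a reference implementation.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # the nonlinearity a single line can't provide
    p = sigmoid(h @ W2 + b2)
    # backprop through cross-entropy loss with a sigmoid output
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)     # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad           # plain gradient descent, lr = 1.0

print(p.round(2))                     # should approach [[0], [1], [1], [0]]
```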
Project 5: The Design Matrix
- Why ML always uses a matrix
- How neural networks generalize it
- How shapes flow through the system
- How this is a neural network layer
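A short sketch of the shape bookkeeping: a design matrix of shape (n_samples, n_features) multiplied by a weight matrix is exactly what one dense layer computes (the sizes below are illustrative).

```python
import numpy as np

n_samples, n_features, n_out = 8, 3, 2
X = np.ones((n_samples, n_features))   # design matrix: one row per sample
W = np.zeros((n_features, n_out))      # weights: one column per output unit
b = np.zeros(n_out)

Z = X @ W + b                          # (8, 3) @ (3, 2) -> (8, 2), plus broadcast bias
H = np.maximum(Z, 0)                   # add a nonlinearity and this is a ReLU layer
print(Z.shape, H.shape)                # shapes flow: (8, 2) (8, 2)
```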
Neural Networks Simple Math: three images explaining what these five projects teach.
Section 2: Building Real Neural Networks (Projects 6–10)
This is where you move from single neurons to full, modular neural network architectures—first from scratch, then with PyTorch. As in Section 1, each project closes with a short code sketch.
Project 6: Build a Neural Network Class From Scratch
- A Layer class
- A NeuralNetwork class
- Forward pass
- Backprop
- Training loop
- Loss functions
- Activation functions
- Modular architecture
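A compressed sketch of how such a modular design might look; the class and method names echo the bullets above but are only one possible layout, not a specific repository's API.

```python
import numpy as np

class Layer:
    """One dense layer with a ReLU activation."""
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * 0.1
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                          # cache input for backprop
        self.z = x @ self.W + self.b
        return np.maximum(self.z, 0)

    def backward(self, grad_out, lr):
        grad_z = grad_out * (self.z > 0)    # ReLU derivative
        grad_x = grad_z @ self.W.T          # gradient to pass upstream
        self.W -= lr * self.x.T @ grad_z    # update this layer's parameters
        self.b -= lr * grad_z.sum(axis=0)
        return grad_x

class NeuralNetwork:
    def __init__(self, sizes):
        self.layers = [Layer(a, b) for a, b in zip(sizes, sizes[1:])]

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def backward(self, grad, lr):
        for layer in reversed(self.layers):
            grad = layer.backward(grad, lr)
```

A training loop would then call `forward`, compute the gradient of the loss with respect to the output, and hand it to `backward`.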
Project 7: Introduction to PyTorch (Tensors, Autograd, Modules)
- How PyTorch replaces your manual gradients
- How nn.Module mirrors your custom class
- How autograd works
- How optimizers work
- How to rewrite your from‑scratch network (Project 6) in PyTorch
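The same ideas expressed in PyTorch, roughly: `nn.Module` plays the role of the custom class, autograd replaces the hand‑written gradients, and the optimizer applies the updates (the tiny architecture below is an arbitrary example).

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)
        self.out = nn.Linear(4, 1)

    def forward(self, x):
        return self.out(torch.tanh(self.hidden(x)))

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()          # autograd computes all gradients
    optimizer.step()         # the optimizer applies the updates
```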
Project 8: Recurrent Neural Networks (RNNs)
- Sequence modeling
- Hidden state
- Unrolling
- Vanishing gradients
- PyTorch’s nn.RNN
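A minimal look at `nn.RNN` and its hidden state; the sizes are placeholders, and the shapes shown assume `batch_first=True`.

```python
import torch
from torch import nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(4, 10, 8)   # batch of 4 sequences, 10 steps, 8 features each
out, h_n = rnn(x)           # the loop over time steps (unrolling) happens inside

print(out.shape)            # (4, 10, 16): hidden state at every time step
print(h_n.shape)            # (1, 4, 16): final hidden state per sequence
```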
Project 9: LSTM Networks
- The natural evolution of the RNNs in Project 8
- Gates
- Cell state
- Long‑term memory
- Why LSTMs solve vanishing gradients
- PyTorch’s nn.LSTM
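The `nn.LSTM` interface is nearly identical, except it also returns the cell state that carries long‑term memory (sizes again are placeholders).

```python
import torch
from torch import nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(4, 10, 8)
out, (h_n, c_n) = lstm(x)                 # c_n is the cell state the plain RNN lacks

print(out.shape, h_n.shape, c_n.shape)    # (4, 10, 16) (1, 4, 16) (1, 4, 16)
```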
Project 10: GRU or CNN (Your Choice)
- Explore GRU as a simplified gated RNN, or
- Explore CNNs for spatial feature extraction
- Compare trade‑offs and ideal use cases
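Either branch is a small step in PyTorch; a sketch of both, with arbitrary layer sizes, so the trade‑off is concrete.

```python
import torch
from torch import nn

# Option A: GRU, a gated RNN with fewer parameters than an LSTM (no separate cell state)
gru = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
seq = torch.randn(4, 10, 8)
seq_out, h_n = gru(seq)                  # (4, 10, 16), (1, 4, 16)

# Option B: a small CNN that extracts local spatial features from images
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)
img = torch.randn(4, 1, 28, 28)          # e.g. MNIST-sized grayscale images
img_out = cnn(img)                       # (4, 8, 14, 14)
```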