LLMs are built on deep learning, and you can't explain transformer attention without understanding backpropagation. Interviewers test these foundations before asking about GPT architecture.
I'll cover neural network basics, optimization, regularization, and the architectures that preceded transformers. Understanding CNNs and RNNs gives context for why transformers succeeded: RNNs process tokens sequentially and struggle with long-range dependencies, exactly the limitations attention was designed to remove. You'll draw on these concepts in any serious discussion of modern AI.