Classical ML knowledge remains the foundation. Interviewers use questions on the fundamentals to filter you quickly: if you can't explain overfitting or the bias-variance tradeoff, you won't reach the LLM questions.
I'll cover what every ML interview assumes you know: supervised and unsupervised learning, model evaluation, regularization, and feature engineering. These concepts appear in phone screens, technical deep dives, and even system design discussions.
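To make the overfitting point concrete, here is a minimal sketch of the kind of demonstration an interviewer might expect you to reason through. The synthetic data, the seed, and the polynomial degrees are all illustrative choices, not anything prescribed: a high-capacity model drives training error down while test error tells the real story.

```python
import numpy as np

# Minimal overfitting demo: fit polynomials of two capacities to noisy
# data whose true relationship is linear, then compare train vs. test MSE.
rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(0, 3, n)
    y = 2 * x + 1 + rng.normal(0, 0.5, n)  # linear signal plus noise
    return x, y

x_train, y_train = make_data(15)
x_test, y_test = make_data(200)

def fit_and_score(degree):
    # Least-squares polynomial fit via an explicit Vandermonde design matrix.
    coef, *_ = np.linalg.lstsq(
        np.vander(x_train, degree + 1), y_train, rcond=None
    )
    train_mse = np.mean((np.vander(x_train, degree + 1) @ coef - y_train) ** 2)
    test_mse = np.mean((np.vander(x_test, degree + 1) @ coef - y_test) ** 2)
    return train_mse, test_mse

train_lo, test_lo = fit_and_score(1)  # capacity matches the true model
train_hi, test_hi = fit_and_score(6)  # extra capacity to fit the noise

# The degree-6 fit always achieves lower *training* error, because its
# basis contains the linear one -- generalization is what suffers.
print(f"degree 1: train={train_lo:.3f}  test={test_lo:.3f}")
print(f"degree 6: train={train_hi:.3f}  test={test_hi:.3f}")
```

Being able to explain why the higher-degree model's training error is guaranteed to be no worse, while its test error usually is worse, is exactly the bias-variance discussion interviewers are probing for.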