Deep learning isn't always the answer. Classical ML wins when:
Limited data: Neural networks typically need thousands of labeled examples. Gradient-boosted trees such as XGBoost can perform well with hundreds.
Interpretability required: Regulators need explanations. Linear models and decision trees are transparent.
Tabular data: Gradient boosting often beats neural networks on structured data.
Low latency: Simple models are faster. A linear model predicts in microseconds.
Interview question: "You have a small labeled dataset. What do you try first?"
Logistic regression or gradient boosting. Start simple, add complexity if needed.
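The "start simple" answer can be sketched as a workflow: fit a logistic regression baseline, then try gradient boosting, and keep the simpler model unless the gain is clear. The dataset and the 2-point improvement threshold are assumptions for illustration.

```python
# Sketch of the start-simple workflow (threshold is an illustrative choice).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=1)

# Step 1: simple, interpretable baseline.
baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
base_acc = cross_val_score(baseline, X, y, cv=5).mean()

# Step 2: add complexity only to compare against the baseline.
boosted = GradientBoostingClassifier(random_state=1)
boost_acc = cross_val_score(boosted, X, y, cv=5).mean()

# Step 3: keep the simpler model unless boosting clearly wins.
chosen = "boosting" if boost_acc - base_acc > 0.02 else "logistic regression"
print(f"baseline={base_acc:.2f} boosting={boost_acc:.2f} -> keep {chosen}")
```

The threshold encodes the trade-off in the section: interpretability and latency favor the baseline, so complexity has to buy a measurable improvement.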