You've covered traditional NLP techniques that complement modern transformers:
Preprocessing: Tokenization, stemming, lemmatization, stopword removal
Representations: Bag of words, TF-IDF, Word2Vec, GloVe
Tasks: NER, text classification, sentiment analysis
Models: Naive Bayes, RNN/LSTM, BiLSTM-CRF
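Several of these pieces compose into one classic pipeline: tokenize, build a bag of words, then weight it with TF-IDF. A minimal from-scratch sketch (the tiny corpus and whitespace tokenizer are illustrative assumptions, not from the source):

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase whitespace tokenizer with basic punctuation stripping."""
    return [tok.strip(".,!?") for tok in text.lower().split()]

def tf_idf(docs):
    """Return one {term: weight} dict per document.

    tf  = raw count of the term in the document
    idf = log(N / df), where df counts documents containing the term
    """
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # count each doc once per term
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = ["the cat sat", "the dog barked", "the cat barked"]
vecs = tf_idf(docs)
# "the" appears in every doc, so idf = log(3/3) = 0 and its weight vanishes
assert vecs[0]["the"] == 0.0
```

This also shows why TF-IDF is interpretable: each weight ties directly to a visible term, which is one of the answers to the interview question below.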
Interview patterns:
- "When would you use TF-IDF over embeddings?" (sparse data, interpretability)
- "Explain Word2Vec" (skip-gram, CBOW, vector arithmetic)
- "How do you handle limited labeled data?" (simpler models, transfer learning)
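For the Word2Vec question, the core idea is easy to demonstrate without training anything: skip-gram turns a sentence into (center, context) pairs where each word predicts its neighbors within a window, while CBOW inverts this and predicts the center word from the averaged context. A small illustrative sketch of the pair generation step (function name and example sentence are my own):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) training pairs for a skip-gram model."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the center word itself
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(["the", "cat", "sat", "on", "mat"], window=1)
# "cat" predicts both of its immediate neighbors
assert ("cat", "the") in pairs and ("cat", "sat") in pairs
```

The vector-arithmetic talking point (king - man + woman ≈ queen) then follows from the embeddings this objective learns, since words in similar contexts end up with similar vectors.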