KNN classifies a query point by majority vote among its k nearest training points. There is no training phase (lazy learning); all computation is deferred to inference.
Distance metrics: Euclidean (default), Manhattan, Minkowski. Minkowski generalizes both: p=2 gives Euclidean, p=1 gives Manhattan.
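The two notes above can be sketched in a few lines. This is a minimal pure-Python illustration, not a production implementation; the function names (`euclidean`, `manhattan`, `knn_predict`) are my own.

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train, query, k=3, dist=euclidean):
    # train: list of (point, label) pairs.
    # "No training phase": all work happens here, at prediction time.
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

Note that sorting the whole training set on every query is exactly why KNN is expensive at inference.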
Choosing k: Small k gives noisy, high-variance boundaries; large k gives smoother boundaries but risks underfitting. Pick k by cross-validation; an odd k avoids ties in binary classification.
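One way to pick k by cross-validation is leave-one-out: predict each point from all the others and score each candidate k. A self-contained sketch (the helper names and the candidate list are illustrative choices, not a standard API):

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    # Brute-force KNN with Euclidean distance, for illustration only.
    d = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(train, key=lambda pair: d(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_accuracy(data, k):
    # Leave-one-out: hold out each point and predict it from the rest.
    hits = sum(
        knn_predict(data[:i] + data[i + 1:], x, k) == y
        for i, (x, y) in enumerate(data)
    )
    return hits / len(data)

def best_k(data, candidates=(1, 3, 5, 7)):
    # Return the candidate k with the highest leave-one-out accuracy.
    return max(candidates, key=lambda k: loo_accuracy(data, k))
```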
Curse of dimensionality: In high dimensions, pairwise distances concentrate, so the nearest and farthest neighbors look almost equally far away and "nearest" loses meaning. KNN degrades badly; mitigate with dimensionality reduction or feature selection.
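The concentration effect is easy to demonstrate empirically: for random points in the unit cube, the relative spread between the closest and farthest point shrinks as dimension grows. A small sketch (the function name and the (max - min)/min spread measure are my own choices for illustration):

```python
import random

def concentration_ratio(dim, n=500, seed=0):
    # Relative spread of distances from the origin to n random points
    # in [0, 1]^dim: (max - min) / min. Shrinks as dim grows.
    rng = random.Random(seed)
    points = [[rng.random() for _ in range(dim)] for _ in range(n)]
    dists = [sum(x * x for x in p) ** 0.5 for p in points]
    return (max(dists) - min(dists)) / min(dists)
```

In 2 dimensions the ratio is large (some points are far closer than others); in hundreds of dimensions it collapses toward zero.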
Time complexity: O(nd) per query for brute force, where n is dataset size and d is dimensions. KD-trees or ball trees give sublinear average query time in low dimensions, but degrade toward brute force as d grows.
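A KD-tree avoids scanning all n points by recursively splitting space along one axis at a time and pruning subtrees that cannot contain a closer point. A minimal pure-Python sketch of 1-nearest-neighbor search (the dict-based node layout and function names are my own; real libraries use more optimized structures):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_kdtree(points, depth=0):
    # Split on axes in round-robin order; median point becomes the node.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    # Descend toward the target's side first; only search the far side
    # if the splitting plane is closer than the best distance so far.
    if node is None:
        return best
    axis = depth % len(target)
    point = node["point"]
    if best is None or dist(target, point) < dist(target, best):
        best = point
    diff = target[axis] - point[axis]
    near_side, far_side = (
        (node["left"], node["right"]) if diff < 0
        else (node["right"], node["left"])
    )
    best = nearest(near_side, target, depth + 1, best)
    if abs(diff) < dist(target, best):  # pruning test
        best = nearest(far_side, target, depth + 1, best)
    return best
```

The pruning test is what makes queries sublinear on average in low dimensions; in high dimensions nearly every subtree survives the test, which is why tree-based speedups fade as d grows.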
Interview tip: KNN is simple but doesn't scale. Know its limitations.