Full-text search goes beyond exact matches:
Tokenization: Split text into terms. "The quick fox" → ["the", "quick", "fox"]
Normalization: Lowercase, remove punctuation, handle Unicode (e.g. accent folding).
Stemming: Reduce words to a root form by stripping suffixes. "running", "runs" → "run". (Irregular forms like "ran" → "run" require lemmatization, which uses a dictionary rather than suffix rules.)
Stop words: Remove common words ("the", "is", "at") that add noise.
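A toy version of these steps can be sketched in Python. The stop-word list and the suffix-stripping stemmer below are illustrative simplifications, not a real stemmer such as Porter/Snowball:

```python
import re

STOP_WORDS = {"the", "is", "at", "a", "an", "of", "and"}  # illustrative subset

def tokenize(text):
    # Split on runs of non-word characters, dropping empty strings.
    return [t for t in re.split(r"\W+", text) if t]

def normalize(token):
    # Lowercase; real systems also fold Unicode (NFKC, accent stripping).
    return token.lower()

def stem(token):
    # Toy suffix stripper; real engines use Porter/Snowball stemmers.
    for suffix in ("ing", "s"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            token = token[: -len(suffix)]
            if suffix == "ing" and token[-1] == token[-2]:
                token = token[:-1]  # undo doubled consonant: running -> runn -> run
            break
    return token

def analyze(text):
    # Tokenize -> normalize -> drop stop words -> stem.
    return [stem(normalize(t)) for t in tokenize(text)
            if normalize(t) not in STOP_WORDS]

print(analyze("The quick fox is running"))  # → ['quick', 'fox', 'run']
```

Each stage is a pure function over tokens, which is why the same `analyze` can later be reused on queries.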
The pipeline: Raw text → Tokenize → Normalize → Stem → Index. Queries pass through the same pipeline, and the resulting terms are matched against the index.
This is why searching for "running" finds documents containing "run".
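The full loop can be sketched with a tiny inverted index. The `analyze` function here is a self-contained stand-in for a real analysis pipeline (lowercasing plus a toy "ing"/"s" suffix strip instead of a proper stemmer), and the documents are made up for illustration:

```python
import re
from collections import defaultdict

def analyze(text):
    # Simplified pipeline: tokenize, lowercase, toy stemming.
    terms = []
    for tok in re.findall(r"\w+", text.lower()):
        for suf in ("ing", "s"):
            if tok.endswith(suf) and len(tok) - len(suf) >= 3:
                tok = tok[: -len(suf)]
                if suf == "ing" and tok[-1] == tok[-2]:
                    tok = tok[:-1]  # running -> runn -> run
                break
        terms.append(tok)
    return terms

docs = {
    1: "He runs every day",
    2: "A long run in the park",
    3: "Cats are sleeping",
}

# Index time: map each stemmed term to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in analyze(text):
        index[term].add(doc_id)

def search(query):
    # Query time: same pipeline, then look terms up in the index.
    ids = set()
    for term in analyze(query):
        ids |= index.get(term, set())
    return sorted(ids)

print(search("running"))  # matches docs 1 and 2: both reduce to the term "run"
```

Because "runs", "run", and "running" all normalize to the same term at both index time and query time, the match falls out of set lookup with no fuzzy comparison at all.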