Early stopping halts training when validation loss stops improving. You set a patience value: how many evaluations to wait without improvement.
Example: a patience of 3 means stop if validation loss doesn't improve for 3 consecutive evaluations.
This prevents wasted compute and overfitting. Keep checkpoints so you can restore the best model, not just the final one.
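The logic above can be sketched as a small tracker class — a minimal illustration, not tied to any particular framework; the `EarlyStopper` name, `min_delta` threshold, and the dict standing in for model state are assumptions for the example:

```python
import copy

class EarlyStopper:
    """Stop training when validation loss hasn't improved for `patience` evals."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # evaluations to wait without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.bad_evals = 0
        self.best_state = None        # checkpoint of the best model so far

    def step(self, val_loss, model_state):
        """Record one evaluation; return True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.bad_evals = 0
            # Keep a copy of the best model, not just the final one.
            self.best_state = copy.deepcopy(model_state)
            return False
        self.bad_evals += 1
        return self.bad_evals >= self.patience


# Usage with a made-up validation-loss curve:
losses = [1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.74]
stopper = EarlyStopper(patience=3)
for epoch, loss in enumerate(losses):
    if stopper.step(loss, {"epoch": epoch}):
        break
# Training stops after three evaluations without improvement;
# stopper.best_state holds the checkpoint from the best epoch (loss 0.7).
```

Restoring `stopper.best_state` at the end is what makes early stopping safe: the final weights may already be overfit, while the saved checkpoint is the one that actually minimized validation loss.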