Learning rate controls how big each update step is. Too high and training becomes unstable. Too low and training takes forever or gets stuck.
For LLM fine-tuning, learning rates are tiny compared to training from scratch: typical values are on the order of 1e-5 to 1e-4. The model is already trained. You're making small adjustments, not major changes.
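To make the failure modes concrete, here is a minimal toy sketch (not an LLM, just gradient descent on f(x) = x², a hypothetical stand-in for a loss surface) showing how step size alone decides whether training converges, diverges, or barely moves:

```python
def gradient_descent(lr, steps=50, x=1.0):
    """Minimize f(x) = x^2, whose gradient is f'(x) = 2x."""
    for _ in range(steps):
        x = x - lr * 2 * x  # standard update: parameter minus lr * gradient
    return x

# A moderate rate converges toward the minimum at 0.
print(gradient_descent(lr=0.1))    # ends up very close to 0

# Too high: each step overshoots and the iterate grows without bound.
print(gradient_descent(lr=1.1))    # magnitude explodes

# Too low: after 50 steps we have barely moved from the start at 1.0.
print(gradient_descent(lr=1e-4))   # still near 1.0
```

The same dynamics apply to real optimizers: the "unstable" and "stuck" regimes above are exactly why fine-tuning recipes pin the learning rate to a narrow band.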