LoRA typically matches or comes close to full fine-tuning performance on most tasks, and the gap narrows as the rank increases.
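As a concrete illustration, here is a minimal sketch using the Hugging Face `peft` library, where the rank `r` is the main knob trading trainable parameter count against quality. The model ID, rank values, and target module names below are illustrative assumptions, not prescriptions; the right choices depend on your model architecture.

```python
# Minimal LoRA setup sketch with Hugging Face peft.
# Model ID and hyperparameters are placeholders, not recommendations.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model-id")  # placeholder: any causal LM

# Rank (r) is the quality knob: higher r means more trainable
# parameters and usually a smaller gap to full fine-tuning.
config = LoraConfig(
    r=16,                                  # try raising (e.g. 8 -> 16 -> 64) if quality disappoints
    lora_alpha=32,                         # scaling factor, commonly set to ~2*r
    target_modules=["q_proj", "v_proj"],   # attention projections; names vary by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # confirms only a small fraction of weights will train
```

A common heuristic is to double the rank between runs until quality plateaus; since the adapter is small, these sweeps are cheap compared with a single full fine-tuning run.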
Where LoRA might underperform:
- Tasks that require major behavioral changes from the base model
- Large training datasets, where full parameter capacity pays off
- Target domains far from the pre-training distribution
For most practical applications, LoRA quality is sufficient. Start with LoRA and only try full fine-tuning if results disappoint.