Full fine-tuning updates billions of parameters. LoRA (Low-Rank Adaptation) achieves comparable results by training only millions. This section teaches you how LoRA works and when to use it.
I'll show you the math behind low-rank adaptation and how QLoRA pushes memory efficiency even further.
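Before diving into the details, here's the core idea as a minimal PyTorch sketch. The class name `LoRALinear` and the `r`/`alpha` hyperparameters are illustrative choices for this sketch, not any particular library's API: the pretrained weight matrix `W` stays frozen, and we train a low-rank update `BA`, so the effective weight becomes `W + (alpha / r) * B @ A` with rank `r` far smaller than the matrix dimensions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update.

    The forward pass computes base(x) + (alpha / r) * x @ A.T @ B.T,
    but only A and B receive gradients.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        d_out, d_in = base.weight.shape
        # A starts small and random, B starts at zero, so B @ A == 0
        # and training begins from the pretrained model's behavior.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Example: wrapping a 4096x4096 projection with rank r=8 trains
# 2 * 4096 * 8 = 65,536 parameters instead of ~16.8M.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```

That parameter count is where the "billions to millions" savings comes from: the trainable parameters scale with `r * (d_in + d_out)` per layer rather than `d_in * d_out`.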