Why does low-rank work? The weight updates learned during fine-tuning have low intrinsic dimensionality: you don't need to change every parameter, because most of the useful update lives in a small subspace. So instead of learning a full d×k update ΔW, LoRA factors it as ΔW = BA, where B is d×r, A is r×k, and the rank r is far smaller than d or k.
Think of it like compressing an image. You lose some detail, but keep the important information. LoRA compresses the update, not the original weights.
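To make the "compress the update" idea concrete, here is a minimal sketch of a LoRA-style wrapper around a frozen linear layer in PyTorch. The names (`LoRALinear`, `rank`, `alpha`) and the 4096×4096 sizes are illustrative assumptions, not taken from any particular library; the point is that training touches only the small factors A and B while the original weights stay frozen.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the original weights
        d_out, d_in = base.weight.shape
        # Low-rank factors: delta_W = B @ A, with rank << min(d_in, d_out).
        # A gets a small random init, B starts at zero so delta_W starts at 0.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original output plus the compressed (low-rank) update.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

base = nn.Linear(4096, 4096, bias=False)
lora = LoRALinear(base, rank=8)

full = base.weight.numel()              # 4096 * 4096 = 16,777,216 values in a full update
low = lora.A.numel() + lora.B.numel()   # 8 * (4096 + 4096) = 65,536 trainable params
```

The arithmetic is the whole story: a full update to a 4096×4096 matrix has about 16.8M values, while the rank-8 factors have r(d+k) = 65,536, roughly 0.4% of that. That's the compression, and it's applied to the update, not to the weights themselves.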