LoRA+ assigns different learning rates to the A and B matrices of a LoRA adapter, since their optimal learning rates differ. In particular, the B matrix (initialized to zero) benefits from a higher learning rate than A. This can improve both convergence speed and final quality.
Most frameworks expose this as a simple configuration option, usually as a ratio between the B and A learning rates. The LoRA+ paper suggests trying a ratio around 16x as a starting point.
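If your framework does not expose this directly, the same effect can be achieved with optimizer parameter groups. The sketch below is a minimal illustration, not any library's official API: the `LoRALinear` module and the `lora_A`/`lora_B` names are assumptions for the example, and the 16x ratio is the starting point suggested above.

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """Toy LoRA layer: frozen base weight plus trainable low-rank A and B."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)  # base weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))  # B starts at zero

    def forward(self, x):
        # Base output plus the low-rank update B @ A applied to x.
        return self.base(x) + x @ self.lora_A.T @ self.lora_B.T

def loraplus_param_groups(model, lr=1e-4, ratio=16.0):
    """Split trainable params into A and B groups; B gets lr * ratio."""
    a_params, b_params = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        (b_params if "lora_B" in name else a_params).append(p)
    return [
        {"params": a_params, "lr": lr},
        {"params": b_params, "lr": lr * ratio},
    ]

model = LoRALinear(32, 32)
optimizer = torch.optim.AdamW(loraplus_param_groups(model, lr=1e-4, ratio=16.0))
```

Because the groups are ordinary optimizer parameter groups, the ratio can also be tuned per run, or scheduled, without touching the model code.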