LoRA supports dropout on adapter outputs. During training, some adapter outputs are randomly zeroed.
Nonzero dropout on the adapter path helps prevent overfitting, especially on small datasets. If you have abundant data, you can often disable it.
This is separate from the base model's own dropout: you are regularizing only the adapters, while the frozen base weights are untouched.
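A minimal NumPy sketch of the idea, dropping adapter outputs as described above (note that some implementations instead apply dropout to the adapter's *input*; the function name, argument layout, and inverted-dropout scaling here are illustrative assumptions, not any particular library's API):

```python
import numpy as np

def lora_forward(x, W, A, B, p=0.1, training=True, rng=None):
    """Forward pass of a linear layer with a LoRA adapter.

    Output = x @ W.T (frozen base) + x @ A.T @ B.T (low-rank adapter),
    with dropout applied only to the adapter branch during training.
    """
    base = x @ W.T                       # frozen base weights, no dropout
    adapter = x @ A.T @ B.T              # low-rank update (rank = A.shape[0])
    if training and p > 0:
        rng = rng or np.random.default_rng()
        mask = rng.random(adapter.shape) >= p   # randomly zero adapter outputs
        adapter = adapter * mask / (1.0 - p)    # inverted-dropout rescaling
    return base + adapter
```

At evaluation time (`training=False`) the adapter contributes deterministically, so the layer reduces to the usual base-plus-low-rank-update form.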