You can stack PEFT methods:
- LoRA on attention + Adapters on FFN
- Prefix tuning + LoRA
- DoRA + layer freezing
Combinations can capture different aspects of adaptation: for example, LoRA applies a low-rank linear update to existing weights, while adapters add a small nonlinear module on the residual path. Experiment carefully, though: each added method brings its own hyperparameters (rank, scaling factor, bottleneck size) to tune.
Start simple. Add complexity only when single methods prove insufficient.
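The first combination above can be sketched in plain PyTorch. This is a minimal, illustrative example, not any library's API: a frozen "pretrained" attention projection gets a LoRA branch, and a bottleneck adapter follows the FFN. All module names and sizes here are hypothetical.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)           # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# Toy "pretrained" sublayers, fully frozen.
dim = 64
attn_proj = nn.Linear(dim, dim)
ffn = nn.Linear(dim, dim)
for p in list(attn_proj.parameters()) + list(ffn.parameters()):
    p.requires_grad = False

# Stack the two PEFT methods: LoRA wraps the attention projection,
# the adapter sits after the FFN output.
attn_proj = LoRALinear(attn_proj)
adapter = Adapter(dim)

x = torch.randn(2, 10, dim)
out = adapter(ffn(attn_proj(x)))

trainable = [n for m in (attn_proj, adapter)
             for n, p in m.named_parameters() if p.requires_grad]
print(trainable)  # only LoRA A/B and adapter weights are trainable
```

Because both the LoRA `B` matrix and the adapter's up-projection are zero-initialized, the stacked model starts out computing exactly what the frozen base computes, so training begins from the pretrained behavior.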