PEFT implements parameter-efficient fine-tuning methods:
```python
from peft import LoraConfig, get_peft_model

# Wrap an already-loaded base model: inject LoRA adapters into the
# attention query/value projection layers, rank 16, scaling alpha 32.
config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)
```
LoRA, QLoRA, DoRA, adapters, prefix tuning, and more all share this same interface: switching methods is a config change, not a code rewrite.
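The update LoRA applies is simple to state: a frozen weight `W` is augmented with a low-rank delta scaled by `alpha / r`, so the layer computes `x @ (W + (alpha/r) * B @ A).T`. A minimal NumPy sketch of that math (illustrative shapes and initialization, not PEFT's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # hypothetical layer sizes and LoRA hyperparameters

W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight
A = rng.normal(size=(r, d_in))      # trainable low-rank factor (random init)
B = np.zeros((d_out, r))            # trainable low-rank factor (zero init)

def lora_forward(x):
    # Base path plus the scaled low-rank path; only A and B would be trained.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# Because B starts at zero, the adapted layer initially matches the base layer.
assert np.allclose(lora_forward(x), x @ W.T)
```

Zero-initializing `B` is what makes the wrapped model start out identical to the base model; training then moves only the small `A` and `B` matrices.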