Adapters:
- Add small bottleneck layers to the network, which adds compute and latency to every forward pass (see the sketch after this list)
- Keep the new parameters cleanly separated from the base model's weights
- Can be inserted at various points in the architecture (after attention, after the feed-forward block, etc.)
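A minimal sketch of a bottleneck adapter in PyTorch, assuming a transformer-style hidden state; the names `hidden_dim` and `bottleneck_dim` and the near-zero initialization are illustrative choices, not a specific library's API:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small residual bottleneck inserted after a sublayer (illustrative sketch)."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as a near-identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The extra down-project / up-project runs on every forward pass,
        # which is where the added inference compute comes from.
        return x + self.up(self.act(self.down(x)))
```

Because the adapter is its own module with its own parameters, it can be attached or detached without touching the base model's weights.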
LoRA:
- Adds no inference overhead once the low-rank update is merged into the base weights (see the sketch after this list)
- Modifies existing weight matrices rather than adding new layers
- Generally more parameter efficient
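A minimal LoRA sketch under the same assumptions: `LoRALinear`, the rank `r`, and the scaling `alpha` are illustrative names, and the low-rank update follows the standard form `W + (alpha / r) * B @ A` applied to a frozen linear layer:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.scale = alpha / r
        # A starts small and random, B starts at zero, so the update begins at zero.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The low-rank path modifies the existing projection; no new sublayer is added.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        # Fold the update into the base weight: W' = W + scale * B @ A.
        merged = nn.Linear(self.base.in_features, self.base.out_features,
                           bias=self.base.bias is not None)
        merged.weight.copy_(self.base.weight + self.scale * (self.B @ self.A))
        if self.base.bias is not None:
            merged.bias.copy_(self.base.bias)
        return merged
```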
LoRA is more popular today, but adapters remain useful for specific architectures and use cases.
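To make the "no inference overhead" point concrete, here is a short usage example with the hypothetical `LoRALinear` sketch above: after merging, inference runs through a plain linear layer with no extra matrix multiplications, and the outputs match the unmerged path.

```python
import torch
import torch.nn as nn

# Wrap one projection, train only the low-rank factors, then merge for deployment.
base = nn.Linear(768, 768)
lora = LoRALinear(base, r=8, alpha=16.0)   # LoRALinear from the sketch above

x = torch.randn(2, 768)
merged = lora.merge()                      # plain nn.Linear, no extra compute at inference
assert torch.allclose(lora(x), merged(x), atol=1e-5)
```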