Adapter modules insert small bottleneck networks between transformer sublayers. Unlike LoRA, which adds low-rank updates to existing weight matrices, adapters introduce entirely new components with their own parameters.
Structure: Linear down-projection → Activation → Linear up-projection, with a residual connection around the whole block so the adapter starts close to an identity function.
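A minimal sketch of that structure in PyTorch (the module name, bottleneck size, and zero-initialization of the up-projection are illustrative choices, not a specific library's API):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Zero-init the up-projection so the adapter is an exact identity
        # at the start of training and the base model's behavior is preserved.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: output = input + adapter(input)
        return x + self.up(self.act(self.down(x)))
```

With `hidden_dim=768` and `bottleneck_dim=64`, each adapter adds roughly 2 × 768 × 64 ≈ 100K parameters per insertion point, a small fraction of the layer it augments.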
Adapters predate LoRA but remain useful. They are particularly attractive when you want a clean separation between the frozen base model and the adaptation: adapter weights live in their own modules and can be added, removed, or swapped without touching the base parameters.