You can merge multiple LoRA adapters into one model.
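One common route is Hugging Face PEFT's `add_weighted_adapter`. Here is a minimal sketch, assuming both adapters were trained with PEFT; the base model name and adapter paths are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholders: substitute your own base model and adapter paths.
base = AutoModelForCausalLM.from_pretrained("your-base-model")

# Load the first adapter, then attach a second one under its own name.
model = PeftModel.from_pretrained(base, "path/to/coding-adapter", adapter_name="coding")
model.load_adapter("path/to/reasoning-adapter", adapter_name="reasoning")

# Combine the two adapters into a new one. "linear" averages the LoRA
# deltas with the given weights (the adapters must share the same rank).
model.add_weighted_adapter(
    adapters=["coding", "reasoning"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="linear",
)
model.set_adapter("merged")

# Optionally bake the active adapter into the base weights for deployment.
merged_model = model.merge_and_unload()
```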
Merging lets you combine capabilities from separate fine-tuning runs, such as a coding adapter with a reasoning adapter.
Merge quality depends on compatibility: adapters trained on similar tasks merge better, while adapters from very different domains can interfere with each other. Start by merging two adapters and evaluate the result before adding more.
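When adapters do interfere, a TIES-style combination can help: it trims low-magnitude updates before combining, which PEFT exposes through the same `add_weighted_adapter` call. A sketch continuing the example above; the `density` value here is an illustrative choice, not a recommendation:

```python
# TIES-style merge: keep only the top fraction of values by magnitude
# in each adapter (density), resolve sign conflicts, then combine.
model.add_weighted_adapter(
    adapters=["coding", "reasoning"],
    weights=[1.0, 1.0],
    adapter_name="merged_ties",
    combination_type="ties",
    density=0.2,
)
model.set_adapter("merged_ties")
```

Whichever combination type you use, re-run your evaluation suite after each merge so you can tell which combination actually preserved both capabilities.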