MoLE (Mixture of LoRA Experts) applies mixture-of-experts concepts to LoRA. Multiple LoRA adapters act as experts, and a learned router decides how much each expert contributes to a given input.
Unlike hard switching, which picks a single expert per input, MoLE blends experts with soft weights: an input might draw most of its weight from a coding expert and the remainder from a reasoning expert.
This is more flexible than single-adapter approaches but requires training the router.
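The blending described above can be sketched as follows. This is a minimal NumPy illustration, not the actual MoLE implementation: the dimensions, expert count, and variable names (`W`, `A`, `B`, `Wg`) are all toy assumptions. Each expert is a rank-`r` LoRA pair, and a softmax router turns the input into per-expert mixing weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 8, 2, 3  # hidden dim, LoRA rank, number of experts (toy sizes)

W = rng.normal(size=(d, d))                    # frozen base weight
A = rng.normal(size=(n_experts, r, d)) * 0.1   # per-expert LoRA down-projections
B = rng.normal(size=(n_experts, d, r)) * 0.1   # per-expert LoRA up-projections
Wg = rng.normal(size=(d, n_experts)) * 0.1     # router weights (trained in practice)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mole_forward(x):
    """Soft mixture of LoRA experts: base output plus a gated sum of LoRA deltas."""
    g = softmax(x @ Wg)                        # (batch, n_experts) routing weights
    base = x @ W.T                             # frozen base model path
    # each expert's LoRA delta: x @ A_i^T @ B_i^T
    deltas = np.stack([x @ A[i].T @ B[i].T for i in range(n_experts)], axis=1)
    mixed = (g[:, :, None] * deltas).sum(axis=1)  # soft blend, not hard switching
    return base + mixed, g

x = rng.normal(size=(4, d))
y, gates = mole_forward(x)
```

Because the gates come from a softmax, they form a convex combination: every expert contributes a nonnegative fraction and the fractions sum to one, which is what makes the soft blend differ from picking a single expert.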