Hugging Face provides the foundation for most fine-tuning work:
- Transformers: Model loading, tokenization, and inference
- PEFT: LoRA, QLoRA, and other parameter-efficient fine-tuning methods
- TRL: Training loops for SFT and alignment (DPO, PPO, reward modeling)
- Datasets: Data loading and preprocessing
- Accelerate: Distributed and mixed-precision training
Learn this ecosystem. Most other tools build on top of it.
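Wired together, a minimal LoRA fine-tuning run looks roughly like this. This is a sketch, not a tested recipe: the model name, dataset file, and hyperparameters are placeholders, and the exact `SFTTrainer`/`SFTConfig` arguments vary across TRL versions.

```python
# Sketch: LoRA SFT using the Hugging Face stack.
# Model name, dataset path, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Datasets: load a local JSONL file with a "text" column (placeholder path)
train_data = load_dataset("json", data_files="train.jsonl", split="train")

# PEFT: LoRA adapter config -- train a small fraction of weights
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    task_type="CAUSAL_LM",
)

# TRL: SFTTrainer handles tokenization and the training loop; it loads the
# base model via Transformers and runs on Accelerate under the hood
trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # placeholder base model
    train_dataset=train_data,
    peft_config=peft_config,
    args=SFTConfig(output_dir="out", per_device_train_batch_size=4),
)
trainer.train()
trainer.save_model("out")  # with a PEFT config, this saves the adapter weights
```

Note how each library occupies one layer: Datasets feeds the data, PEFT defines the adapter, TRL drives the training loop, and Transformers and Accelerate do the heavy lifting underneath.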