SFT alone produces models that can complete tasks but may do so poorly: they might be verbose or rude, or give harmful advice.
Alignment shapes behavior beyond task completion by teaching the model which responses users prefer. A well-aligned model is concise when appropriate, admits uncertainty, and refuses harmful requests. This is what separates a merely capable model from a genuinely useful one.
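To make "which responses users prefer" concrete: preference-based alignment methods such as RLHF and DPO train on pairs of responses to the same prompt, with one labeled as preferred. Here is a minimal sketch of what a single preference record might look like; the field names and contents are illustrative, not any particular dataset's schema:

```python
# A hypothetical preference-pair record, as used by preference-based
# alignment methods like RLHF or DPO. Field names are illustrative.
preference_example = {
    "prompt": "My script crashes with a KeyError. What should I do?",
    # Preferred: concise, admits uncertainty, asks for what it needs.
    "chosen": (
        "A KeyError means you looked up a dict key that doesn't exist. "
        "Guard the access with `key in my_dict` or use `my_dict.get(key)`. "
        "If you share the traceback, I can be more specific."
    ),
    # Dispreferred: unhelpful and overconfident -- the kind of response
    # alignment trains the model away from.
    "rejected": (
        "You obviously made a mistake. Rewrite the whole script from "
        "scratch and it will probably work."
    ),
}
```

During alignment, the model is updated so that it assigns higher probability (or higher reward) to the chosen response than to the rejected one, nudging its default behavior toward what users actually prefer.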