During supervised fine-tuning, you usually want to compute the loss only on the response tokens, not on the instruction. This technique is called loss masking.
Why? The model should learn to generate responses, not to reproduce instructions. Computing loss on instruction tokens also wastes training signal and can teach the model to echo prompts back instead of answering them.
Most fine-tuning frameworks support loss masking. Enable it unless you have a specific reason not to.
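As a minimal sketch of what these frameworks do under the hood: a common convention (used by PyTorch's `CrossEntropyLoss` via its `ignore_index` argument, and by Hugging Face trainers) is to set the label to -100 for every token whose loss should be skipped. The token IDs and `build_labels` helper below are hypothetical, for illustration only.

```python
IGNORE_INDEX = -100  # tokens with this label are skipped by CrossEntropyLoss(ignore_index=-100)

def build_labels(prompt_ids, response_ids, ignore_index=IGNORE_INDEX):
    """Concatenate prompt and response; mask prompt positions out of the loss."""
    input_ids = prompt_ids + response_ids
    # Loss is computed only where labels != ignore_index, i.e. on the response.
    labels = [ignore_index] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

prompt = [101, 2054, 2003]    # hypothetical tokenized instruction
response = [1996, 3437, 102]  # hypothetical tokenized response
input_ids, labels = build_labels(prompt, response)
print(labels)  # [-100, -100, -100, 1996, 3437, 102]
```

During the forward pass, the model still sees the full `input_ids` sequence as context; only the gradient signal is restricted to the response positions.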