RoPE encodes position by rotating query and key vectors: each vector is rotated by an angle proportional to its position in the sequence. Because both queries and keys are rotated, the dot product between a query at position m and a key at position n depends only on the offset m − n, so attention scores naturally encode relative position.
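To make the mechanism concrete, here is a minimal NumPy sketch of the rotation, assuming the standard formulation (base 10000, adjacent dimension pairs rotated together); `rope_rotate` is a hypothetical helper for illustration, not any particular library's API:

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Apply RoPE to vectors x of shape (seq_len, dim), dim even.
    Each dimension pair (2i, 2i+1) is rotated by position * base**(-2i/dim)."""
    seq_len, dim = x.shape
    # Per-pair rotation frequencies: later pairs rotate more slowly.
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)      # (dim/2,)
    angles = positions[:, None] * inv_freq[None, :]       # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    # Standard 2-D rotation applied to each (even, odd) pair.
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# Relative-position property: a rotated query at position m and a rotated
# key at position n give a dot product that depends only on m - n.
rng = np.random.default_rng(0)
q, k = rng.normal(size=(1, 64)), rng.normal(size=(1, 64))
d1 = rope_rotate(q, np.array([5])) @ rope_rotate(k, np.array([2])).T
d2 = rope_rotate(q, np.array([105])) @ rope_rotate(k, np.array([102])).T
assert np.allclose(d1, d2)  # same offset (3) -> same attention score
```

The final assertion checks the property described above: shifting both positions by the same amount leaves the score unchanged.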
Unlike learned position embeddings, RoPE is not tied to a fixed-size position table, so it can be evaluated at any position index and tends to degrade more gracefully on sequences longer than those seen during training. This is one reason Llama and most modern LLMs use RoPE.
When you fine-tune a RoPE model, position handling comes for free: the rotations are computed from fixed frequencies rather than learned parameters, so there is nothing position-specific to retrain. If you need a longer context than the base model supports, techniques like YaRN rescale the rotations so that extended positions fall into a range the model already handles well.
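YaRN itself combines frequency-dependent interpolation with an attention temperature adjustment; as a simpler illustration of the underlying idea only, the sketch below shows plain linear position interpolation, which compresses position indices back into the trained range (the `interpolated_positions` helper is hypothetical):

```python
import numpy as np

def interpolated_positions(seq_len, trained_len):
    """Linear position interpolation: compress position indices so a longer
    sequence maps back into the position range seen during training.
    (YaRN refines this by treating different frequency bands differently
    and rescaling attention scores; this shows only the basic idea.)"""
    scale = min(1.0, trained_len / seq_len)
    return np.arange(seq_len) * scale

# Extending a model trained on 4096 positions to an 8192-token context:
pos = interpolated_positions(8192, 4096)   # 0.0, 0.5, 1.0, ..., 4095.5
# These fractional positions replace 0..8191 when computing the RoPE angles.
```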