How do you represent negative numbers in binary? One idea: use the leftmost bit as a sign (0 for positive, 1 for negative) and the rest for magnitude.
In 8-bit sign-magnitude, a value like +5 is 00000101 and -5 is 10000101. Simple, but this scheme has two problems: there are two representations of zero (00000000 and 10000000), and addition circuits get complicated because the sign bit must be handled separately.
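The two-zeros problem is easy to demonstrate in code. Below is a minimal sketch (the helper names `sign_magnitude` and `decode_sm` are made up for illustration) that encodes and decodes 8-bit sign-magnitude values, showing that two distinct bit patterns both decode to zero:

```python
def sign_magnitude(n, bits=8):
    """Encode n in sign-magnitude: top bit is the sign, rest is |n|."""
    if abs(n) >= 1 << (bits - 1):
        raise OverflowError("magnitude does not fit")
    sign = 1 if n < 0 else 0
    return (sign << (bits - 1)) | abs(n)

def decode_sm(x, bits=8):
    """Decode an unsigned bit pattern as sign-magnitude."""
    mag = x & ((1 << (bits - 1)) - 1)  # strip the sign bit
    return -mag if x >> (bits - 1) else mag

print(format(sign_magnitude(-5), "08b"))  # 10000101
# Both 00000000 and 10000000 decode to zero: two zeros.
print(decode_sm(0b00000000), decode_sm(0b10000000))  # 0 0
```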
Computers do not use sign-magnitude. They use two's complement, which solves both problems: one zero, and addition works without modification. This is the standard representation in all modern CPUs.
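The claim that addition works without modification can be checked directly. Here is a sketch (helper names `to_twos` and `from_twos` are illustrative) that models 8-bit two's complement with masking: to add signed values, the hardware simply adds the bit patterns and discards the carry out of the top bit.

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF

def to_twos(n):
    """8-bit two's complement bit pattern of n (as an unsigned int)."""
    return n & MASK

def from_twos(x):
    """Interpret an 8-bit pattern as a signed value."""
    return x - (1 << BITS) if x & (1 << (BITS - 1)) else x

print(format(to_twos(-5), "08b"))  # 11111011 -- one unique zero: only 00000000
# Addition needs no special cases: add the patterns, mask to 8 bits.
result = (to_twos(-5) + to_twos(7)) & MASK
print(from_twos(result))  # 2
```

Note how `-5 + 7` falls out of plain unsigned addition modulo 256; this is exactly why the same adder circuit serves both signed and unsigned arithmetic.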