Two's Complement for Signed Integers

Notations and conventions

We have now introduced a new notation for indicating how a bit pattern gets interpreted in a certain context. With

\[s(b_{n-1} b_{n-2} \cdots b_0) := u(b_{n-1} \cdots b_0) - 2^{n} b_{n-1}\in \left\{ -2^{n-1}, \dots, 2^{n-1}-1 \right\}\]

we express that an \(n\)-bit pattern is interpreted as a signed integer. If \(b_{n-1} = 1\) then this signed value is negative, otherwise it is non-negative, i.e.

\[s(b_{n-1} \cdots b_0) < 0 \quad\Leftrightarrow\quad b_{n-1} = 1.\]

Note that non-negative means that a value can be zero or positive. In other words: “the opposite of negative is not positive, it is zero or positive”!
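To make the definition concrete, here is a minimal C sketch (the function name signed8 is just an illustrative choice, not something from the text) that interprets an 8-bit pattern as a signed integer by subtracting \(2^{8} = 256\) whenever the MSB is set:

#include <stdint.h>
#include <stdio.h>

/* Interpret an 8-bit pattern as a signed integer:
 * s(b) = u(b) - 2^8 * b_7, i.e. subtract 256 when the MSB is set. */
static int signed8(uint8_t bits) {
    int uval = bits;             /* u(b), in 0..255               */
    int msb  = (bits >> 7) & 1;  /* b_7, the most significant bit */
    return uval - 256 * msb;     /* s(b), in -128..127            */
}

int main(void) {
    printf("%d\n", signed8(0xFF)); /* 1111 1111 -> 255 - 256 = -1   */
    printf("%d\n", signed8(0x80)); /* 1000 0000 -> 128 - 256 = -128 */
    printf("%d\n", signed8(0x7F)); /* 0111 1111 ->             127  */
    return 0;
}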

In an \(n\)-bit pattern \(b_{n-1} \cdots b_0\) the bit \(b_{n-1}\) is called the most significant bit (MSB) and \(b_0\) the least significant bit (LSB). In general, the significance of a bit \(b_k\) grows with its index \(k\): \(b_1\) is more significant than \(b_0\), \(b_2\) is more significant than \(b_1\), and so on. You can motivate this convention by considering the interpretation as an unsigned integer, i.e.

\[u(b_{n-1} b_{n-2} \cdots b_0) := \sum\limits_{k=0}^{n-1} b_k 2^k\]

If you change the value of a single bit \(b_k\), the corresponding unsigned value changes by \(2^k\). The larger you choose \(k\), the more significant the resulting change.
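The following small C sketch (the helper name u_interp is illustrative) computes \(u(b_{n-1} \cdots b_0)\) bit by bit and shows that toggling bit \(b_k\) changes the unsigned value by exactly \(2^k\):

#include <stdio.h>

/* Compute u(b_{n-1} ... b_0) for the low n bits of 'bits'. */
static unsigned u_interp(unsigned bits, unsigned n) {
    unsigned value = 0;
    for (unsigned k = 0; k < n; ++k)
        value += ((bits >> k) & 1u) << k;   /* b_k * 2^k */
    return value;
}

int main(void) {
    unsigned b = 0x0Bu;                     /* 0000 1011 -> u = 11 */
    printf("u        = %u\n", u_interp(b, 8));
    printf("flip b_0 = %u\n", u_interp(b ^ (1u << 0), 8)); /* changes by 2^0 = 1   */
    printf("flip b_7 = %u\n", u_interp(b ^ (1u << 7), 8)); /* changes by 2^7 = 128 */
    return 0;
}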

Representable integer ranges

With \(n\) bits you can represent unsigned integers in the range from \( 0 \) to \(2^{n}-1\) and signed integers in the range from \(-2^{n-1}\) to \(2^{n-1}-1\). Most of the time we will work with 8-bit, 16-bit, 32-bit or 64-bit patterns. To get a gut feeling for the integer ranges that can be represented, here is a list:

\[\begin{array}{lll}\text{bit pattern size} &\text{unsigned integer range} &\text{signed integer range}\\8 &0,\dots,255 &-128,\dots,127\\16 &0,\dots,65535 &-32768, \dots, 32767 \\32 &0, \dots, 4294967295 &-2147483648, \dots, 2147483647 \\64 &0,\dots, 18446744073709551615 &-9223372036854775808, \dots, 9223372036854775807 \\\end{array}\]
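If you want to double-check these bounds yourself, the fixed-width integer types from <stdint.h> come with matching *_MIN and *_MAX macros; a small C sketch:

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    printf(" 8 bit: unsigned 0..%d, signed %d..%d\n",
           UINT8_MAX, INT8_MIN, INT8_MAX);
    printf("16 bit: unsigned 0..%d, signed %d..%d\n",
           UINT16_MAX, INT16_MIN, INT16_MAX);
    printf("32 bit: unsigned 0..%" PRIu32 ", signed %" PRId32 "..%" PRId32 "\n",
           (uint32_t)UINT32_MAX, (int32_t)INT32_MIN, (int32_t)INT32_MAX);
    printf("64 bit: unsigned 0..%" PRIu64 ", signed %" PRId64 "..%" PRId64 "\n",
           (uint64_t)UINT64_MAX, (int64_t)INT64_MIN, (int64_t)INT64_MAX);
    return 0;
}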

Integer overflow

Overflow means that the result of an operation cannot be represented with the given bit pattern size. Here we only consider the operations addition and subtraction. The carry flag (CF) indicates an overflow when interpreting bit patterns as unsigned integers, the overflow flag (OF) indicates an overflow when interpreting bit patterns as signed integers. So, in simple words (a small code sketch follows the list below):

  • If you are looking at the bit patterns and think of them as unsigned integer representations, then you only care about CF. The value of OF carries no meaning for you.

  • If you are looking at the bit patterns and think of them as signed integer representations, then you only care about OF. The value of CF carries no meaning for you.
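As a small illustration (a sketch, not a definitive implementation), the following C code reproduces the two flags for an 8-bit addition: CF is set when the unsigned sum does not fit into 8 bits, OF is set when both operands have the same sign bit but the stored result has the opposite one.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

static void add8(uint8_t a, uint8_t b) {
    uint8_t sum = (uint8_t)(a + b);                  /* the stored 8-bit result pattern */
    bool cf = ((unsigned)a + (unsigned)b) > 0xFFu;   /* unsigned overflow (carry)       */
    bool of = ((~(a ^ b)) & (a ^ sum) & 0x80u) != 0; /* signed overflow                 */
    printf("result 0x%02X  CF=%d  OF=%d\n", (unsigned)sum, cf, of);
}

int main(void) {
    add8(0xFF, 0x01); /* unsigned: 255 + 1 overflows (CF=1); signed: -1 + 1 = 0 is fine (OF=0) */
    add8(0x7F, 0x01); /* unsigned: 127 + 1 fits (CF=0); signed: 127 + 1 overflows (OF=1)       */
    return 0;
}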