Binary - Negative Numbers
Written by Mike James   
Sunday, 25 June 2023

Binary complements

Now convert all of this to binary and implement it in electronics and be amazed at how natural it all seems.

If you know how to program, the first thing this explains is those “funny” numerical ranges quoted for variables of different sizes.

For example, a 16-bit variable or 16-bit hardware storage location is usually quoted as being capable of holding a number in the range

+32767 to -32768.

This corresponds to the positive values using the bit patterns 1 to 32767 and the negative values using 65535 (i.e. -1) down to 32768 (i.e. -32768).

You can now see why it can’t hold quite as large a positive number as a negative number.
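You can see this mapping in a short sketch. This is illustrative Python, not anything from the article - the masking with 0xFFFF simulates a 16-bit storage location, since Python's own integers are arbitrary precision:

```python
def to_signed16(value):
    """Interpret an unsigned 16-bit pattern as a two's complement signed value."""
    value &= 0xFFFF                  # keep only the low 16 bits
    # Patterns from 0x8000 upwards represent negative numbers
    return value - 0x10000 if value >= 0x8000 else value

print(to_signed16(65535))   # -1
print(to_signed16(32768))   # -32768
print(to_signed16(32767))   # 32767
```

The asymmetry of the range shows up directly: 32767 is the largest pattern that stays positive, while 32768 wraps to the most negative value.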

The next pleasing surprise is that you can immediately tell when a number is negative by looking at the highest order bit.

For example, for 16-bit values the positive range written in binary is:

         0000 0000 0000 0001

to

         0111 1111 1111 1111

and the negative range is

         1111 1111 1111 1111

down to

         1000 0000 0000 0000

and so all positive numbers have a zero high order bit and all negative numbers have a one.

This means that you can interpret the high order bit as a sign bit - although we didn't design things this way. The sign bit is just a consequence of the way we have divided up the range into negative and positive numbers.
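So testing for a negative number comes down to testing one bit. A minimal sketch, again assuming 16-bit values held in masked Python integers:

```python
def is_negative16(value):
    """A 16-bit two's complement value is negative exactly when bit 15 is set."""
    return (value & 0x8000) != 0

print(is_negative16(0x7FFF))   # False - largest positive value
print(is_negative16(0x8000))   # True  - most negative value
print(is_negative16(0xFFFF))   # True  - this pattern is -1
```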

The final piece of good news is that converting a value into two's complement is very easy. First convert it into one's complement, i.e. subtract every bit of the value from 1. Subtracting a bit from 1 is the same as “NOTing” it in the sense of Boolean logic.

In other words, to form the one's complement all you have to do is take the logical NOT of the value.

This is easy, but it gets better because all you then have to do is add one and the result is the two's complement. In fact, you don’t even have to do the “add one” as a separate operation because you can just start any addition off with an initial carry set to 1 instead of 0. Exactly how it is done varies according to the hardware design, but you can see that it isn’t difficult. In fact, you should be able to see that you don't need any special hardware to do subtraction at all - simply a full adder and two's complement arithmetic.
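The NOT-then-add-one recipe, and subtraction built from nothing but addition, can both be sketched in a few lines. This is an illustration of the idea, with 16-bit patterns simulated by masking:

```python
def twos_complement16(value):
    """Negate a 16-bit value: NOT every bit (one's complement), then add 1."""
    ones = ~value & 0xFFFF          # logical NOT, masked to 16 bits
    return (ones + 1) & 0xFFFF

def sub16(a, b):
    """Subtract using only an adder: a + NOT(b) with the initial carry set to 1."""
    return (a + ((~b) & 0xFFFF) + 1) & 0xFFFF

print(format(twos_complement16(1), '016b'))   # 1111111111111111, the pattern for -1
print(sub16(5, 3))                            # 2
print(sub16(3, 5))                            # 65534, the pattern for -2
```

Notice that `sub16` never uses a subtraction operator - the “add one” is folded in as the carry-in, just as the hardware can do it.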

Why bother?

Do you really need to know about complement-based arithmetic?

The answer is, at the moment, you do if you want to be a programmer or a hardware designer.

The reason is that we are still close enough to the computer hardware we program for these things to show through. This is especially the case when you get involved in bit manipulation or change the precision of a value. Occasionally you can create serious bugs simply because you don’t realise that the positive value that your machine is displaying with lots of digits should be a negative value with far fewer digits.

Another example is the simple shift operation, which is supported in many computer languages. A shift of all the bits in a number to the left multiplies the value by two. A shift to the right divides it by two. However, if you are working with negative numbers as well as positive, things can be more difficult. For example, if you shift a value to the left then any change in the topmost bit is equivalent to a sign change. So if a left shift changes the topmost bit, the multiplication by two has overflowed and the sign of the result is wrong.
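Detecting this kind of overflow comes down to comparing the topmost bit before and after the shift. A minimal sketch, again using masked Python integers to stand in for a 16-bit register:

```python
def shl16(value):
    """Shift a 16-bit value left by one; flag overflow if the sign bit changed."""
    result = (value << 1) & 0xFFFF
    overflow = ((value ^ result) & 0x8000) != 0   # did bit 15 change?
    return result, overflow

print(shl16(0x2000))   # (0x4000, False) - doubled safely
print(shl16(0x4000))   # (0x8000, True)  - sign flipped: overflow
```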

Shifting to the right brings with it a similar problem. If you shift a zero into the topmost bit you make the number positive, but if you shift in a one you make it negative. What you really need to do for a signed number is to shift in a value that is the same as the topmost bit. In most machines there are two forms of shift - a logical shift introduces a zero into the topmost bit and an arithmetic shift simply keeps the topmost bit the same.
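The two forms of right shift can be sketched side by side. This is illustrative code, not a particular machine's instruction set:

```python
def lsr16(value):
    """Logical shift right: a zero enters the topmost bit."""
    return (value & 0xFFFF) >> 1

def asr16(value):
    """Arithmetic shift right: the topmost (sign) bit is kept the same."""
    value &= 0xFFFF
    return (value >> 1) | (value & 0x8000)

neg2 = 0xFFFE                           # the 16-bit pattern for -2
print(format(lsr16(neg2), '016b'))      # 0111111111111111 - suddenly positive!
print(format(asr16(neg2), '016b'))      # 1111111111111111 - still negative, i.e. -1
```

Only the arithmetic shift gives the correct divide-by-two for a signed value: -2 shifted arithmetically gives -1, whereas the logical shift turns it into a large positive number.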



Related Articles

Binary Arithmetic

Floating point numbers

Boolean Logic

Charles Babbage

Claude Shannon

Eckert & Mauchly and ENIAC


