The Number Systems
About the number systems
The fundamental role of a computer is the manipulation of data. Numbers are used both in
quantifying items of data and in the form of codes which define the
computational operations to be executed. All numbers which are used
for these two purposes must be stored within the computer memory and
also transported along the communication buses. A detailed
consideration of the conventions used for representing numbers within
the computer is therefore required.
Number Systems
The decimal system is the best-known
number system, but it is not very suitable for use by digital
computers. It uses a base of ten, such that each digit in a number can
have any one of ten values within the range 0-9. Items of electronic equipment such as digital counters, which are often used as computer peripherals, have liquid crystal display elements which can
each display any of the ten decimal digits, and therefore a
four-element display can directly represent decimal numbers in the
range 0-9999. The decimal system is therefore perfectly suitable for
use with such output devices.
The fundamental unit of data storage within a digital computer is a
memory element known as a bit. This holds information by switching
between one of two possible states. Each storage unit can therefore
only represent two possible values and all data to be entered into
memory must be organized into a format which recognizes this
restriction. This means that numbers must be entered in binary format,
where each digit in the number can have only one of two values, 0 or
1. The binary representation is particularly convenient for computers
because bits can be represented very simply electronically as either
zero or non-zero voltages. However, converting between binary and decimal is tedious for humans. Starting from the right-hand side of a binary number, where the first digit represents 2⁰ (i.e. 1), each successive binary digit
represents progressively higher powers of two. For example, in the
binary number 1111, the first digit (starting from the right-hand
side) represents 1, the next 2, the next 4 and the final, leftmost
digit represents 8; thus the decimal equivalent is 1 + 2 + 4 + 8 = 15.
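This positional rule can be expressed as a minimal Python sketch (the function name binary_to_decimal is chosen purely for illustration):

    # A minimal sketch of the positional rule: each binary digit, read from
    # the right-hand side, contributes its value multiplied by the
    # corresponding power of two.
    def binary_to_decimal(bits: str) -> int:
        total = 0
        for position, digit in enumerate(reversed(bits)):
            total += int(digit) * 2 ** position
        return total

    print(binary_to_decimal("1111"))  # 1 + 2 + 4 + 8 = 15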
For data storage purposes, memory
elements are combined into larger units known as bytes, which are
usually considered to consist of 8 bits each. Each bit holds one
binary digit, and therefore a memory unit consisting of 8 bits can
store eight-digit binary numbers in the range of 00000000 to 11111111
(equivalent to decimal numbers in the range of 0 to 255). A binary
number in this system of 10010011 for instance would correspond to the
decimal number 147.
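As a quick check of this example, Python's built-in int function with a base of 2 performs the same binary-to-decimal conversion:

    # int() with base 2 converts a string of binary digits to its decimal value.
    print(int("00000000", 2), int("11111111", 2))  # 0 255 - the limits of one byte
    print(int("10010011", 2))                      # 147, as stated above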
This range is clearly inadequate for most purposes, including
measurement systems, because even if all data could be conveniently
scaled, the maximum resolution obtainable is only 1 part in 256.
Numbers are therefore normally stored in units of either 2 or 4 bytes,
which allow the storage of integer (whole) numbers in the range of
0-65 535 or 0-4 294 967 295 respectively.
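These limits follow directly from the number of bits available: an n bit unit holds unsigned values from 0 to 2ⁿ - 1, as the short Python sketch below confirms.

    # Unsigned ranges for 1, 2 and 4 byte storage units: n bits give values
    # from 0 to 2**n - 1.
    for n_bytes in (1, 2, 4):
        n_bits = 8 * n_bytes
        print(f"{n_bytes} byte(s): 0 to {2 ** n_bits - 1:,}")
    # 1 byte(s): 0 to 255
    # 2 byte(s): 0 to 65,535
    # 4 byte(s): 0 to 4,294,967,295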
No means have been suggested so far for expressing the sign of
numbers, which is clearly necessary in the real world where negative
as well as positive numbers occur. A simple way to do this is to
reserve the most significant (left-hand) bit in a storage unit to
define the sign of a number, with '0' representing a positive number
and '1' a negative number. This alters the range of numbers
representable in a 1 byte storage unit to -127 to +127, as only 7 bits
are left to express the magnitude of the number, and also means that
there are two representations of the value 0. In this system the
binary number 10010011 translates to the decimal number -19 and
00010011 translates to +19. For reasons dictated by the mode of
operation of the CPU, however, most computers use an alternative
representation known as the two's complement form.
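The sign-magnitude scheme just described can be sketched in Python as follows (the helper name sign_magnitude_8bit is purely illustrative):

    # Sign-magnitude encoding for a 1 byte unit: the leftmost bit holds the
    # sign and the remaining 7 bits hold the magnitude of the number.
    def sign_magnitude_8bit(value: int) -> str:
        sign_bit = "1" if value < 0 else "0"
        return sign_bit + format(abs(value), "07b")

    print(sign_magnitude_8bit(19))   # 00010011, i.e. +19
    print(sign_magnitude_8bit(-19))  # 10010011, i.e. -19
    # Both 00000000 and 10000000 stand for zero in this scheme, which is why
    # only the values -127 to +127 can be represented.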
The two's complement of a number is most easily formed by going via an intermediate stage of the one's complement. Positive numbers are stored in their ordinary binary form. For a negative number, the one's complement is formed by writing the binary representation of its magnitude and then reversing all digits, changing ones to zeros and zeros to ones (which also sets the left-hand bit to a 1). The two's complement is then formed by adding 1 at the least significant (right-hand) end of the one's complement. As
before for a 1 byte storage unit, only 7 bits are available for
representing the magnitude of a number, but, because there is now only
one representation of zero, the decimal range representable is -128 to
+127. We have therefore established the binary code in which the
computer stores positive and negative integers (whole numbers).
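As a worked illustration of this procedure for a 1 byte unit, the Python sketch below forms the two's complement of -19 (the helper name twos_complement_8bit is illustrative only):

    # Two's complement for a 1 byte unit: write the magnitude in 8 bits,
    # reverse every digit (one's complement), then add 1 at the right-hand end.
    def twos_complement_8bit(value: int) -> str:
        if value >= 0:
            return format(value, "08b")
        ones = format(abs(value), "08b").translate(str.maketrans("01", "10"))
        return format((int(ones, 2) + 1) % 256, "08b")

    print(twos_complement_8bit(19))    # 00010011
    print(twos_complement_8bit(-19))   # 11101101 (invert 00010011 -> 11101100, add 1)
    print(twos_complement_8bit(-128))  # 10000000, the extra negative value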
However, it is frequently necessary also to handle real numbers (those
with fractional parts). These are most commonly stored using the
floating point representation.
The floating point representation divides each memory storage unit
(notionally, not physically) into three fields, known as the sign
field, the exponent field and the mantissa field. The sign field is
always 1 bit wide but there is no formal definition for the relative
sizes of the other fields. However, a common subdivision of a 32 bit
(4 byte) storage unit is to have a 7 bit exponent field and a 24 bit
mantissa field.
The value contained in the storage unit is evaluated by multiplying
the number in the mantissa field by two raised to the power of the
number in the exponent field. Negative as well as positive exponents
are obtained by biasing the exponent field by 64 (for a 7 bit field),
such that a value of 64 is interpreted as an exponent of 0, a value of
65 as an exponent of 1, a value of 63 as an exponent of -1, etc.
Suppose therefore that the sign bit field has a 0, the exponent field
has a value of 0111110 (decimal 62) and the mantissa field has a value
of 000000000000000001110111 (decimal 119), i.e. the contents of the
storage unit are 00111110000000000000000001110111.
The number stored is +119 x 2⁻². Changing the first (sign) bit to a 1 would change the number stored to -119 x 2⁻².
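The decoding of this example can be sketched directly in Python, assuming the 1/7/24 bit layout described above (this is the illustrative format used here, not the IEEE 754 standard; the function name decode_float is hypothetical):

    # Decode a 32 bit pattern laid out as: 1 sign bit, a 7 bit exponent
    # biased by 64, and a 24 bit mantissa. The value is
    # mantissa * 2**(exponent - 64), negated if the sign bit is 1.
    def decode_float(bits: str) -> float:
        sign = -1 if bits[0] == "1" else 1
        exponent = int(bits[1:8], 2) - 64   # remove the bias of 64
        mantissa = int(bits[8:], 2)         # 24 bit unsigned integer
        return sign * mantissa * 2.0 ** exponent

    print(decode_float("00111110000000000000000001110111"))  # 119 * 2**-2 = 29.75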
If a human being were asked to enter numbers in these binary forms,
however, the procedure would be both highly tedious and very prone to error; in consequence, simpler ways of entering binary
numbers have been developed. Two such ways are to use octal and
hexadecimal numbers, which are translated to binary numbers at the
input-output interface to the computer.
Octal numbers use a base of eight and consist of decimal digits in the
range 0-7 which each represent three binary digits. Thus a 24 bit
binary number is represented by eight octal digits.
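A brief Python sketch illustrates the grouping, using an arbitrary 24 bit value chosen only for illustration:

    # Each octal digit stands for a group of three binary digits, so a 24 bit
    # number is written with eight octal digits.
    value = 0b101011110000111100001111   # arbitrary 24 bit example
    print(format(value, "024b"))         # 101011110000111100001111
    print(format(value, "o"))            # 53607417 - eight octal digits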
Hexadecimal numbers have a base of 16 and are used much more commonly
than octal numbers. They use decimal digits in the range 0-9 and
letters in the range A-F which each represent four binary digits. The
decimal digits 0-9 translate directly to the decimal values 0-9 and
the letters A-F translate respectively to the decimal values 10-15. A
24 bit binary number requires six hexadecimal digits to represent it.
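The same arbitrary 24 bit value used in the octal sketch above illustrates the hexadecimal grouping:

    # Each hexadecimal digit stands for a group of four binary digits, so the
    # same 24 bit number needs only six hexadecimal digits.
    value = 0b101011110000111100001111   # same arbitrary 24 bit example
    print(format(value, "X"))            # AF0F0F - six hexadecimal digits
    print(int("AF0F0F", 16) == value)    # True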
