Numeric Types
Category: System
Overview
Data is the fundamental element of programming.
Computer operations work on data.
For a computer system to operate on data, a representation of the data is required.
Numeric Bases
Decimal
The decimal representation is known as base ten.
Numbers are represented with digits from 0 to 9.
A decimal number is the sum of the powers of 10 that its digits represent.
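For example, 472 = 4 × 10² + 7 × 10¹ + 2 × 10⁰.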
Binary
The binary representation is known as base two.
It is the fundamental representation of computer data, as electronic components distinguish only two states: on (1) and off (0).
Each digit is referred to as a bit.
A binary number is the sum of the powers of 2 that its digits represent.
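For example, 1011 in binary is 1 × 2³ + 0 × 2² + 1 × 2¹ + 1 × 2⁰ = 11 in decimal.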
Hexadecimal
The hexadecimal representation is known as base 16.
Numbers are represented with the digits 0 to 9 and the letters A to F, which represent the values 10 to 15.
The hexadecimal representation is also fundamental in computing, as data is generally packed into bytes, each a group of 8 bits.
The hexadecimal number FF is the largest value a byte can store, equal to 255 in decimal.
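As a small illustrative C sketch (the variable name is hypothetical), the same byte value can be written in hexadecimal and printed in both bases:

```c
#include <stdio.h>

int main(void) {
    /* 0xFF (hexadecimal) and 255 (decimal) denote the same byte value. */
    unsigned char byte = 0xFF;                     /* largest value a byte can hold */
    printf("decimal:     %u\n", (unsigned)byte);   /* 255 */
    printf("hexadecimal: %X\n", (unsigned)byte);   /* FF  */
    return 0;
}
```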
Data Types
Integers
Integers correspond to the whole numbers (the set ℤ) in mathematics.
Integer numbers are generally represented with the following sizes:
- Byte (8 bits)
- Short (16 bits)
- Int (32 bits)
- Long (64 bits)
Integers can be positive or negative.
Signed and unsigned integers are represented by distinct types, because their encodings differ.
Negative integers use the two's complement notation.
If the most significant bit is set, the number is negative.
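A short C sketch, using the fixed-width types from <stdint.h>, illustrates the two's complement encoding:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* In two's complement, -1 is encoded with all 8 bits set (0xFF). */
    int8_t  negative_one = -1;
    uint8_t bits = (uint8_t)negative_one;    /* reinterpret the same bits */
    printf("-1 encodes as 0x%X\n", (unsigned)bits);                     /* 0xFF */

    /* The most significant bit acts as the sign bit: 0x80 is INT8_MIN. */
    printf("INT8_MIN encodes as 0x%X\n", (unsigned)(uint8_t)INT8_MIN);  /* 0x80 */
    return 0;
}
```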
Fixed Points
To represent fractions, a notation is needed to express the position of the decimal point.
In the fixed-point notation, a fixed number of bits represents the whole part of the number, and the remaining bits represent the fractional part.
The fixed-point notation constrains both the range of whole numbers that can be represented and the precision of the fractional part.
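As a minimal sketch of one possible split, a hypothetical 16.16 format (16 whole bits, 16 fractional bits) can be implemented in C as follows; the type and helper names are illustrative:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical 16.16 fixed-point format: the upper 16 bits hold the
   whole part, the lower 16 bits hold the fractional part. */
typedef int32_t fixed16_16;
#define FRAC_BITS 16

static fixed16_16 to_fixed(double x)       { return (fixed16_16)(x * (1 << FRAC_BITS)); }
static double     to_double(fixed16_16 x)  { return (double)x / (1 << FRAC_BITS); }

int main(void) {
    fixed16_16 a = to_fixed(3.25);
    fixed16_16 b = to_fixed(1.5);
    fixed16_16 sum = a + b;                         /* plain integer addition */
    printf("3.25 + 1.5 = %f\n", to_double(sum));    /* 4.750000 */
    return 0;
}
```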
Floating Points
In the floating point notation, the position of the decimal point is determined by an exponent.
A floating point number has three parts:
- A mantissa that contains the digits on both sides of the decimal point.
- An exponent that indicates the position of the decimal point.
- A sign bit that indicates whether the number is positive or negative.
Floating point numbers are generally represented as:
- Single-precision (32 bits)
- Double-precision (64 bits)
The most common standard to represent floating point numbers is IEEE-754.
For a single-precision number:
- The sign is in the most significant bit.
- The exponent is stored in 8 bits.
- The mantissa is stored in 23 bits; an implicit leading 1 gives 24 bits of precision.
The gap between consecutive representable floating-point values is proportional to their magnitude.
This means the absolute precision of a floating-point number increases as the magnitude decreases, and decreases as the magnitude increases.
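As a sketch, assuming the platform's float type follows IEEE-754 single precision (true on virtually all current hardware), the three parts can be extracted from the raw bits in C:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float value = -6.25f;                 /* -1.5625 x 2^2 */
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);   /* reinterpret the 32 bits safely */

    unsigned sign     = bits >> 31;             /* 1 bit                   */
    unsigned exponent = (bits >> 23) & 0xFF;    /* 8 bits, biased by 127   */
    unsigned mantissa = bits & 0x7FFFFF;        /* 23 stored mantissa bits */

    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    /* Prints: sign=1 exponent=129 (unbiased 2) mantissa=0x480000 */
    return 0;
}
```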
Epsilon
The epsilon (or machine epsilon) is the bound on the relative rounding error, independent of the exponent.
It indicates the maximum relative error in the mantissa after a rounded operation.
It is the distance between 1.0 and the next larger representable floating-point number.
It satisfies the following equation: ε = 2^-(p - 1), where p is the precision of the mantissa in bits; for single precision, ε = 2^-23 ≈ 1.19 × 10^-7.
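A small C sketch using FLT_EPSILON from <float.h> makes this behavior observable:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* FLT_EPSILON is the gap between 1.0f and the next representable float. */
    printf("FLT_EPSILON = %g\n", (double)FLT_EPSILON);    /* about 1.19e-07 */

    /* A contribution smaller than half of epsilon is lost to rounding. */
    float one_plus = 1.0f + FLT_EPSILON / 4.0f;
    printf("1 + eps/4 == 1 ? %s\n", one_plus == 1.0f ? "yes" : "no");   /* yes */
    return 0;
}
```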
Word
A word corresponds to the natural data size of a processor.
- A 32-bit processor has a word size of 4 bytes.
- A 64-bit processor has a word size of 8 bytes.
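A quick C sketch reports these sizes at run time; note the assumption that pointer size matches the word size, which holds on conventional ABIs but not all (e.g. x32):

```c
#include <stdio.h>

int main(void) {
    /* On conventional ABIs, the pointer size matches the machine word size. */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    printf("long size:    %zu bytes\n", sizeof(long));   /* 8 on LP64, 4 on LLP64 */
    return 0;
}
```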