A round-off error, also called a rounding error, is the difference between the calculated approximation of a number and its exact mathematical value. Numerical analysis is concerned in part with estimating this error in approximation algorithms and equations, especially when finitely many digits are used to represent real numbers. Round-off error is a form of quantization error. When a sequence of calculations subject to rounding errors is performed, the errors may accumulate, sometimes dominating the result. Problems in which small errors grow into significant ones are known as 'ill-conditioned.'
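A minimal illustration of this kind of error in practice: binary floating-point hardware cannot represent the decimal value 0.1 exactly, so even a simple sum carries a small round-off error. (The specific values here are a standard double-precision example, not taken from the text above.)

```python
# 0.1 and 0.2 have no exact binary floating-point representation, so the
# values actually stored differ slightly from their mathematical values.
total = 0.1 + 0.2
print(total)          # 0.30000000000000004
print(total == 0.3)   # False

# The round-off error is the gap between the computed and exact values.
error = abs(total - 0.3)
print(error < 1e-15)  # True: tiny, but nonzero
```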
The values involved in a calculation are rarely whole numbers; they are usually decimals, sometimes with infinitely many digits. The more digits retained, the more accurate the final result will be. Carrying many digits through multiple calculations, however, is often impractical and invites human error in keeping track of them all. To simplify the work, results are often 'rounded off' to the nearest few decimal places.
For example, the equation for finding the area of a circle is A = πr². The number π has infinitely many digits, beginning 3.14159265359. For convenience, it is typically rounded to two decimal places, or just 3.14. Though this technically decreases the accuracy of the calculation, the value derived is typically 'close enough' for most purposes.
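The effect of that rounding on the circle-area example can be measured directly. This sketch compares the area computed with a two-decimal approximation of π against the most accurate double-precision value; the radius of 10 is an arbitrary choice for illustration.

```python
import math

r = 10.0                          # arbitrary example radius
exact = math.pi * r**2            # area using full double-precision pi
approx = 3.14 * r**2              # area using pi rounded to two decimals

# The round-off error introduced by truncating pi to 3.14.
error = abs(exact - approx)
print(exact)    # ~314.159
print(approx)   # 314.0
print(error)    # ~0.159, small relative to the area itself
```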
However, in a series of calculations, numbers are rounded off at each step. The errors accumulate and, if large enough, can misrepresent the calculated values and lead to miscalculations and mistakes (Figure 1).
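The accumulation effect can be sketched as follows: rounding a running total to two decimal places at every step, as one might when working by hand, lets per-step errors of about 0.003 grow into a discrepancy of several units after a thousand additions. The specific values (adding 1/3 a thousand times) are an assumed example, not from the text.

```python
exact_total = 0.0
rounded_total = 0.0
for _ in range(1000):
    exact_total += 1 / 3
    # Round the addend AND the running total to two decimal places,
    # so each step's small rounding error is locked into the total.
    rounded_total = round(rounded_total + round(1 / 3, 2), 2)

print(exact_total)                      # ~333.33
print(rounded_total)                    # 330.0
print(abs(exact_total - rounded_total)) # the accumulated error, >3
```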