I am setting up a column with 6,000 rows containing values from 150 to 750 in increments of 0.1. I put 150 in row 2 (cell A2), 150.1 in row 3, and 150.2 in row 4, then selected the series and filled down to row 6002 (autofill). The value in row 6002 looks like 750, but if I click on it, the formula bar actually displays 749.999999999966, and that value is what gets used if I multiply it, etc.

So I tried a different way. I put 150 in the second row, entered the formula =A2+0.1 below it, and filled that down to A6002. The value in row 6002 is 750.000000000106.

If I format the numbers to 20 decimal places, the first error appears at row 90 (88 sequential additions) with both methods, and it is low by 0.0000000000001 in both, even though the second method goes on to accumulate an error in the opposite direction (too high). The same happens if I format to 30 decimals (the maximum allowed): it is accurate for 87 additions and then drifts, so the error is not simply proportional to the number of steps.
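(For what it's worth, this behavior is reproducible outside Excel with any IEEE 754 double-precision arithmetic. A minimal Python sketch of the repeated-addition method, purely as an illustration, not a claim about how Excel's fill feature is implemented:)

```python
# Repeatedly add 0.1 to 150.0, 6000 times, using IEEE 754 doubles --
# the same binary representation Excel uses internally.
x = 150.0
for _ in range(6000):
    x += 0.1

print(x == 750.0)   # False: the binary rounding errors accumulate
print(x)            # close to 750, but off by roughly 1e-10
```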

I know this won't make a difference when I round things, but wouldn't it be easier for the program to add 0.1 accurately?

2 Replies

It's precision error due to floating point math and binary fractions.

https://docs.microsoft.com/en-us/office/troubleshoot/excel/floating-point-arithmetic-inaccurate-resu...

@brewsdltn   Put very simply, most decimal fractions cannot be represented exactly in binary floating-point, which is how values are represented internally.  And the binary approximation of a particular decimal fraction might differ depending on the magnitude of the number.

Consequently, for example, IF(10.01 - 10 = 0.01, TRUE) returns FALSE(!) because the approximation of 1/100 in 10.01 differs from the approximation of 1/100 by itself.
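(This is not specific to Excel; the same comparison fails in any language that uses IEEE 754 doubles. A quick Python illustration of the same effect:)

```python
# Excel's IF(10.01 - 10 = 0.01, TRUE) returning FALSE, reproduced with
# the same IEEE 754 double arithmetic in Python:
print(10.01 - 10 == 0.01)   # False
print(10.01 - 10)           # 0.009999999999999787, not 0.01
```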

For that reason, whenever we expect a calculation that involves or might result in decimal fractions to be accurate to some number of decimal places, we should explicitly round to that number of decimal places -- and not to an arbitrary number like 10, as some people suggest.

So, the formulas in column C should be of the form =ROUND(C2+0.1, 1) in C3.
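(The effect of rounding at each step can be sketched in Python; the loop below is an analogy for filling =ROUND(C2+0.1, 1) down the column, not Excel itself. Snapping back to one decimal place at every step prevents the error from ever accumulating:)

```python
# Analogous to filling =ROUND(C2+0.1, 1) down 6000 rows:
x = 150.0
for _ in range(6000):
    x = round(x + 0.1, 1)   # round each step to 1 decimal place

print(x == 750.0)           # True: the drift never accumulates
```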

The decision of when to round within an expression is subjective.  Only you can make that decision.

-----

In the case of 10.01 - 10, we can see the infinitesimal difference; Excel displays 0.00999999999999979 when formatted appropriately.

(But note: even that is not the exact decimal value.)

But often we cannot see the difference, because Excel formats only the first 15 significant digits.

(And for that reason, it is sufficient to format your values with 12 decimal places, not 30, since the magnitude (integer part) of your values is 3 digits.)

We can see the infinitesimal difference between the displayed value and the actual value by entering formulas of the form =SUM(C3,-(C3&"")) into D3, formatted as Scientific, and copy down the column.

Thus, we can see infinitesimal differences as early as D5.

Why (C3&"")?  Because that returns a string that is formatted to 15 significant digits.
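(The 15-significant-digit display effect can be mimicked in Python with the `%.15g` format. This is an illustration of the idea, not Excel's exact number-to-text conversion routine:)

```python
x = 0.1 + 0.2                # actually 0.30000000000000004...
shown = float(f"{x:.15g}")   # what a 15-significant-digit display round-trips to

print(x == 0.3)              # False: the stored value is not exactly 0.3
print(x - shown)             # the infinitesimal difference, about 5.6e-17
```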

Why SUM instead of =C3-(C3&"")?  Because Excel sometimes plays tricks in order to try to hide the infinitesimal differences.  In very limited circumstances, Excel replaces the actual arithmetic result with exactly zero (0.00E+00).  Consequently, with the subtraction form we do not see infinitesimal differences until D40.

But Excel applies that trick inconsistently.  Consequently, for example, =C5-(C5&"") returns exactly zero, but IF(C5-(C5&"") = 0, TRUE) returns FALSE(!).

And even the SUM function applies the trick inconsistently, depending on the form of its parameters.  I just know that SUM(C5,-(C5&"")) always returns the exact arithmetic result.

------

Re: ``wouldn't it be easier for the program to add 0.1 accurately?``

No.

First, that is the way that Intel-compatible CPUs behave, as do most other modern CPUs.  And for that reason, most applications have the same problem.  So, Excel would have to do additional work in order to "correct" the binary arithmetic anomalies.  (Or choose a different internal representation, which is what Windows Calculator does.)

Second, it is unclear when Excel should automatically round arithmetic results.  For example, if Excel rounded each arithmetic operation, (1/3)*3 would result in 0.999999999999999, not exactly 1.
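(A Python sketch of that point. The unrounded result happens to come out exactly right, because the rounding errors in 1/3 and in the multiplication cancel; forcing an intermediate round, here to 15 significant digits as a stand-in for a hypothetical "auto-correcting" Excel, destroys that cancellation:)

```python
# Unrounded IEEE 754 arithmetic: the two rounding errors happen to cancel.
print((1/3) * 3 == 1.0)        # True

# Rounding the intermediate result to 15 significant digits breaks it:
third = float(f"{1/3:.15g}")   # 0.333333333333333
print(third * 3 == 1.0)        # False
```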

Finally, it is unclear what precision Excel should round to.  For example, if Excel always rounded to 15 significant digits (as most documentation incorrectly claims that Excel does), 10.01 - 10 would still result in 0.00999999999999979.
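(Again sketched in Python, with `%.15g` standing in for a hypothetical round-to-15-significant-digits rule: rounding the result does not recover 0.01, because the damage was done in the subtraction itself:)

```python
diff = 10.01 - 10                 # 0.009999999999999787
rounded = float(f"{diff:.15g}")   # 0.00999999999999979 -- still not 0.01

print(rounded == 0.01)            # False
```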

In fact, Excel does provide an option ("Precision as displayed") that attempts to address these issues.  It causes Excel to round the result of a formula to the number of decimal places in the cell format.  But since PAD applies only to formulas, it would not correct the problem with IF(10.01 - 10 = 0.01, TRUE).

Caveat:  I recommend against enabling PAD.  But if you want to experiment with it, be sure to make a copy of the Excel file first.  Merely setting PAD can irreversibly change constants anywhere in the workbook, if their cell format has less precision than the original constant.  That is not uncommon with interest rates, for example.