Note that sin(double(pi)) should not return zero. sin(pi) is, mathematically, zero, but since float/double/long-double can’t represent pi it would actually be incorrect for them to return zero when passed an approximation to pi.
Huh? sin(pi ± epsilon) better damn well return 0 if epsilon is closer to 0 than the minimum representable floating-point value. Otherwise the processor is rounding incorrectly.
(I realize epsilon in this case is likely to be much larger than the minimum representable float, but it's not guaranteed and thus the article's reasoning is flawed.)
The article is correct. double(pi) is exactly 884279719003555/281474976710656, which is a perfectly valid number to apply the sin function to. sin(884279719003555/281474976710656) is a transcendental number close to 1.2246467991473532e-16. In fact, that is the closest 64-bit float value to the true value of sin(884279719003555/281474976710656) and thus the best answer that the sin function can give. There is no way to ask the sin(double x) function what the sine of the transcendental number π is since the argument must be a double and there's no double value for π. Since the sin function is not in the business of guessing which of the uncountably many real numbers you really wanted to take the sine of, it can only give you the best answer for the value you actually passed it.
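If anyone wants to check this themselves, here's a minimal C sketch (the literal below is just the shortest decimal that rounds to the double nearest pi; the exact last digit of the sin output depends on how accurately your libm rounds, but it will be ~1.22e-16, not 0):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double pi_d = 3.141592653589793;    /* double(pi) = 884279719003555/2^48 */
        printf("double(pi)      = %.17g\n", pi_d);       /* 3.1415926535897931 */
        printf("sin(double(pi)) = %.17g\n", sin(pi_d));  /* ~1.2246467991473532e-16 */
        return 0;
    }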
Actually, epsilon is guaranteed to be much larger than the minimum representable float: double(pi) is 1.57... * 2^1, so double(pi) - pi is going to be on the order of 2^-m, where m is the size of the mantissa. The only way that double(pi) - pi could be much smaller is if pi magically had a lot of 0s in its binary representation (it doesn't).
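A quick way to see the size of that error (a sketch that assumes long double is wider than double, which holds with x86 glibc but not on every platform):

    #include <stdio.h>

    int main(void) {
        /* pi to more digits than a double can hold */
        long double pi_ld = 3.141592653589793238462643383279502884L;
        double      pi_d  = (double)pi_ld;              /* double(pi) */

        printf("double(pi) - pi = %.3Le\n", (long double)pi_d - pi_ld);
        /* prints about -1.225e-16: on the order of 2^-52, i.e. half an ulp
           at pi's exponent, and nowhere near the smallest subnormal (~4.9e-324) */
        return 0;
    }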
This caught me off-guard at first as well, but it's really quite logical.
The only way that double(pi) - pi could be much smaller is if pi magically had a lot of 0s in its binary representation (it doesn't).
Yes, this is what I was getting at. Obviously the statement is true for pi and the majority of reals; I just don't understand why it should be a rule as the article implies.
No, it is not pi that must get rounded, but sin(pi). So in sin(double(pi)), first double(pi) is computed, which is not exactly pi, then an approximation to sin(double(pi)) is computed, which is not exactly zero. It would be absolutely wrong to round a non-zero number to zero.
Yes, we're both talking about sin(pi) (hence why I claimed "sin(pi ± epsilon) better damn well return 0").
I take issue with this assertion:
It would be absolutely wrong to round a non-zero number to zero.
Why? I know IEEE floats have subnormals and subnormals can be VERY small, but if the error is less than half the minimum representable float, why should it be rounded away from zero?
Yes, zero is "special" because you can't divide by it, but for stability, my understanding is that you shouldn't be dividing by anything that can be so close to zero as to potentially round to it anyway.
sin(pi+eps) is a quantity on the order of eps. Not only is it non-zero, it is also so far from zero that it can be represented quite accurately with a floating-point number (which is a separate concern from the accuracy with which it is computed).
The relative error introduced in rounding 1e-16 down to 0 is 100%, or approximately 1e16 ulp, which is why that would be unacceptable. Floating point arithmetic cares about relative errors much much more than absolute.
Also, while dividing by zero is a problem, dividing by 1e-16 should not be a problem, at all. 1e-16 is a perfectly ordinary representable number.
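To put numbers on that, a small sketch (the value of x is just an illustration, roughly sin(double(pi))):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = 1.2246467991473532e-16;  /* roughly sin(double(pi)) */

        /* Rounding x down to 0 is a tiny absolute error but a 100% relative error. */
        printf("absolute error of returning 0: %g\n", fabs(x));
        printf("relative error of returning 0: %g\n", fabs(x) / x);

        /* Dividing by x is unremarkable; the result is an ordinary double. */
        printf("1/x = %g\n", 1.0 / x);      /* about 8.2e+15 */
        return 0;
    }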
[eps] is also so far from zero that it can be represented quite accurately with a floating-point number
What is missing from your claim is a statement like, "Specifically, the delta between pi and its floating-point approximation is greater than half the minimum representable float." This is almost certainly true of pi, but it is NOT true of all real numbers! 1+(1e-1000), for example, fails this test. You cannot – logically! – claim that the approximation error of every real number is too large to round to zero; clearly in many cases it is small enough!
Yes, dividing by 1e-16 is not a problem! My question is whether this is a desirable thing to do in an algorithm that should be stable.
Floating point arithmetic cares about relative errors much much more than absolute.
This hints more at a reason, but why is this true? Yes, I know floating-point numbers are spaced logarithmically, and yes, relative error is obviously the correct measure when dealing with multiplication and division. But if you are not dividing due to stability concerns, and your value eventually finds its way into an addition, isn't absolute error more important?
Nobody here besides you is talking about numbers that are less than half the minimum representable float.
Edit: Oh my god now I get what you are saying. There are plenty of implicit reasons why the value would end up being non-zero -- in particular, any unit-scale continuous function would have to be artificially toxic for epsilon to be smaller than the smallest float. There is no reason for the article to point out that nit.
Well, thanks at least for taking the time to understand my point. Though I don't consider it a "nit" when the article implies that "never round to zero" is some sort of rule (such as "round toward even LSBs" is), as opposed to a consequence of the properties of floats and the constants and functions involved.
The sign-exponent-mantissa nature of IEEE format means (basically by design) that you have a much finer resolution near 0 (smaller absolute value) than you do away from 0 (larger absolute value).
We're talking about sin(x) with x ~ pi. In this situation the output's absolute value is much much smaller than the input's absolute value. So the output will have much finer resolution than the input.
The output being appreciably non-zero comes from the output having much higher resolution than the input. The output wouldn't be anywhere near small enough to be subnormal.
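You can see the resolution difference directly with nextafter (a sketch; the outputs noted are for IEEE doubles):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double pi_d = 3.141592653589793;    /* double(pi) */

        /* Spacing between adjacent doubles near pi vs. near 0. */
        printf("gap near pi: %g\n", nextafter(pi_d, 4.0) - pi_d);  /* ~4.4e-16  */
        printf("gap near 0:  %g\n", nextafter(0.0, 1.0));          /* ~4.9e-324 */

        /* The result sits comfortably in the fine-grained region near 0. */
        printf("sin(double(pi)) = %g\n", sin(pi_d));               /* ~1.22e-16 */
        return 0;
    }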