
It's not possible to implement a more precise fsin() without breaking apps?

One scenario in which I can imagine an app breaking is if someone executed fsin(x), pasted the result into their code, and then tested some other float against it using the == operator.
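
A minimal sketch of that pattern, just to make it concrete (the pasted constant below is illustrative, not a real historical fsin() output):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pretend this was pasted in years ago from whatever fsin(0.5)
           returned at the time -- the constant here is illustrative only. */
        const double pasted = 0.47942553860420295;

        volatile double x = 0.5;   /* volatile keeps the compiler from folding sin(x) */
        if (sin(x) == pasted)
            puts("still matches");
        else
            puts("the implementation moved by an ulp somewhere");
        return 0;
    }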

I suppose a more common breakage scenario would be if someone did something like floor(x + fsin(x)), since that'd very slightly change where the floor "triggers" with respect to the input. But how could that cause an app to break? Every scenario I can think of comes back to someone using == to test floats, which everyone knows is a no-no for dynamic results, such as the output of a function like fsin().

I guess a programmer wouldn't really expect the result of "x + fsin(x)" to ever change for a given value of x, so maybe they executed it with that value, stored the result, and did an equality comparison later, which would always work unless the implementation of fsin() changed. The thought process would go something like, "This should be reliable because realistically the implementation of fsin() won't change."

Can anyone think of some obvious way that making fsin() more accurate could cause serious breakage in an app that isn't testing floats with the equality operator?



> It's not possible to implement a more precise fsin() without breaking apps?

It is, but it's A) slow as hell (it requires something like 384 bits of Pi to reduce properly) and B) nobody really cares.

Anybody who cares pulls out Cody and Waite and writes their own so that they know exactly what the errors and performance are. This is especially true for modern processors, which have vector units and whose mainline (software) performance is probably just as good as the microcode's.
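
For the curious, a rough sketch of what Cody-Waite style reduction looks like, assuming |x| stays modest (the split-pi constants are in the spirit of fdlibm's pio2_1/pio2_1t):

    #include <math.h>

    /* Sketch of Cody-Waite style argument reduction.  pi/2 is split into a
       "high" part with trailing zero bits plus a small correction, so for
       modest k the product k*PIO2_HI and its subtraction from x lose no bits. */
    static double reduce_pio2(double x, int *quadrant)
    {
        static const double TWO_OVER_PI = 6.36619772367581382433e-01;
        static const double PIO2_HI     = 1.57079632673412561417e+00; /* high bits of pi/2 */
        static const double PIO2_LO     = 6.07710050650619224932e-11; /* pi/2 - PIO2_HI    */

        double k = nearbyint(x * TWO_OVER_PI);   /* how many quarter-turns to peel off      */
        *quadrant = (int)k & 3;                  /* tells the caller which kernel to invoke */

        /* Two-step subtraction: k*PIO2_HI cancels against x, and k*PIO2_LO
           mops up the bits of pi/2 that PIO2_HI leaves out. */
        return (x - k * PIO2_HI) - k * PIO2_LO;
    }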

Anybody who doesn't care is really looking for an approximation anyway, so they're not going to use the transcendentals because they're too slow.


It requires more than 1100 bits to do correct argument reduction for double, actually.

You're absolutely correct that a software implementation may be several times faster than the legacy x87 fsin instruction, while delivering well-rounded results. There shouldn't be a need to write your own implementation, however. High-quality library implementations are pretty widely available these days.


Agreed. I'm stunned that there is a compiler currently in existence that actually uses the built-in Intel transcendentals rather than their own library.


> One scenario I can imagine an app breaking is if someone executed fsin(x), pasted the result into their code, and then tested some other float against it using the == operator.

While this sounds grandiose, thanks to aggressive inlining and constant folding this could happen even if what you wrote in your logic was:

    sin(x) == sin(constant)
Or even the following depending on your choice of delta.

    abs(sin(x) - sin(constant)) < delta
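
A small sketch of how that can bite, assuming the compiler folds sin(1.0) to a compile-time constant but leaves sin(x) as a runtime call:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double x = 1.0;        /* volatile keeps sin(x) as a runtime call */
        double runtime_val = sin(x);
        double folded_val  = sin(1.0);  /* a constant-folding candidate            */

        /* A delta tighter than one ulp of sin(1.0) (~1.1e-16) is too strict:
           a one-ulp difference between the folded constant and the runtime
           library/instruction makes the comparison fail. */
        double delta = 1e-18;
        if (fabs(runtime_val - folded_val) < delta)
            puts("match");
        else
            puts("mismatch between folded constant and runtime sin()");
        return 0;
    }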


Is it even valid for a compiler to transform sin(constant) into a hardcoded constant embedded into the executable?

I guess since sin(x) is computed by an FPU in hardware, it's inherently "dynamically linked." But in that case, why is it valid for a compiler to transform sin(constant) into a hardcoded result? Is there some standard somewhere which says that the implementation of sin/cos/etc can't ever change, and therefore it's valid for the compiler to do that? Or do compilers just make an assumption that the implementation of sin(x) won't ever change?


Your mistake is in assuming that the C language and its standard library are somehow defined in terms of particular hardware, and that there is some fixed mapping between C functions and operators and CPU instructions - there isn't. The standard specifies for which input values an operation is defined and which conditions the result has to meet; nowhere does it say how an implementation is supposed to compute that result. That's for the compiler to decide - as far as the standard is concerned, the compiler may compile all arithmetic down to nothing but ANDs and NOTs, though practical compilers tend to choose the instructions that perform best for a given computation, or just compute the result at compile time where possible.


From the C standard:

    A floating expression may be contracted, that is, evaluated as though it were an atomic
    operation, thereby omitting rounding errors implied by the source code and the
    expression evaluation method. The FP_CONTRACT pragma in <math.h> provides a
    way to disallow contracted expressions. Otherwise, whether and how expressions are
    contracted is implementation-defined.
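
For what it's worth, a minimal sketch of how that pragma is used (the dot product is just an arbitrary expression a compiler might otherwise contract into fused multiply-adds):

    #include <math.h>

    #pragma STDC FP_CONTRACT OFF   /* forbid contracting a*b + c into a single fma */

    double dot3(const double a[3], const double b[3])
    {
        /* With contraction disallowed, every multiply and add is rounded
           individually, exactly as the source expression implies. */
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }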


"but if it's dynamic, is it even valid for a compiler to transform sin(constant) into a hardcoded constant embedded into the executable?"

Yes.

I honestly don't remember all the rules here, but it does in fact do so, legally.


Yes - the standard that defines what the sin() function does is the C standard itself.

Just as a compiler is free to evaluate strlen("foo") at compile-time, it's free to evaluate sin(0.1) at compile-time.


It breaks binary compatibility. The compiler bakes in some value (e.g. the broken value), so if the sin(x) evaluation later changes (to the correct one), the constant baked into the binary no longer matches what the same expression computes at run time.


Why would the compiler substitute the broken value unless the compiler was broken?

Compilers generally don't use specialized local-architecture instructions to compute their results, because they can't (it would break cross-compiling or targeting different platforms in a lot of cases). In fact, it's more likely the other way around:

The compiler substitutes the proper value; actual calls to sin(x) on the local platform give the wrong answer.


One case that comes to mind is multiplayer games that rely on each participant running identical inputs on identical simulation code producing identical outputs. Doing so on floating point inputs is arguably madness, especially using vendor-provided functions or instructions as part of the process, but that doesn't mean it doesn't get done. Small differences can accumulate and desync the game rapidly.
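
To make the desync point concrete, here is a sketch of the kind of per-tick state checksum such games exchange so a one-ulp divergence gets caught; the struct fields and the FNV-1a hash are illustrative choices, not anyone's actual protocol:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        double x, y, vx, vy;   /* whatever the lockstep simulation tracks */
    } entity_t;

    /* Hash the raw state bytes each tick; peers exchange the result and
       flag a desync as soon as the values disagree. */
    uint64_t state_hash(const entity_t *ents, size_t n)
    {
        uint64_t h = 14695981039346656037ull;           /* FNV-1a offset basis */
        const unsigned char *p = (const unsigned char *)ents;
        for (size_t i = 0; i < n * sizeof *ents; ++i) {
            h ^= p[i];
            h *= 1099511628211ull;                      /* FNV-1a prime */
        }
        return h;
    }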


IEEE floats are (nominally) deterministic in part, although that part [0] doesn't happen to include the transcendental functions.

You are right, though, that there's a lot of software out there that assumes determinism without being anywhere near sufficiently circumspect.

[0] http://randomascii.wordpress.com/2013/07/16/floating-point-d...


I expect that Microsoft could get some angry customers if Excel's computations varied with the CPU it runs on. Imagine discussing a spreadsheet with a colleague and seeing wildly different data.

Some people would say "That's what you deserve if you use an ill-conditioned algorithm or have an ill-conditioned problem", but most would not even understand that statement.


But doesn’t that already happen? Didn’t we see that when the Pentium flaw was discovered? In fact, I thought there was an Excel spreadsheet you could download and run to test whether you had the flaw or not.


For this argument, the Pentium flaw is not the best example. As long as all CPUs were equally flawed, everything was fine, if you define 'fine' as 'we see the same results on all systems'.

I guess Excel will have the problem in some cases for rare CPUs. For 'common' ones, I expect that Excel contains workarounds for errata that might affect its computations.


Well there's a difference between trusting the CPU to get arithmetic right and trusting the CPU to get sin(x) right (or to just know it gets it wrong but not caring).


Is the change really big enough that you'd get "wildly different data" in a normal situation?



