
How does this compare to boost::multiprecision?

https://www.boost.org/doc/libs/1_72_0/libs/multiprecision/do...



Ironically, the section labeled "What FloatX is NOT" tells you more about what it is than what it isn't.

It's a system for emulating narrower-precision floating-point arithmetic using native machine-width floating point. The authors claim that it's much faster than using the integer unit for this purpose.

Boost::multiprecision and MPFR are libraries for executing higher-precision arithmetic, commonly using integer hardware to do so.


Emulating narrower types using larger floating-point arithmetic can be dodgy, since you open yourself up to double rounding scenarios. For the IEEE 754 types (half, single, double, and quad), the primitive operations (+, -, *, /, sqrt) are all correctly rounded if you emulate them by converting to the next size up, doing the math, and converting back down. For non-IEEE 754 types (such as bfloat16, or the x87 80-bit type), this is not the case, so double rounding is a possible concern.


That section also says:

> it is not likely that FloatX will be useful in production codes (sic).

I would be interested in why the author thinks that. Quality of implementation issue? Or is it in reference to the statement before that it is WIP?


Wow, thanks! Yeah, that wasn't clear to me at all. I hadn't considered the need for a floating-point library that is less precise (and less performant!) than the 32/64-bit native types.



