
Kind of like how this well-trained CNN is no longer relying entirely on the raw pixel values, but is statistically inferring a brighter image from the baseline.
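To make that concrete, here is a minimal, hypothetical sketch (a toy PyTorch network, not the actual model being discussed): the output depends on both the input pixels and on weights fitted to other images, so any "extra" brightness or detail in the result can come from the training distribution rather than from the frame itself.

  import torch
  import torch.nn as nn

  # Hypothetical toy brightening network -- purely illustrative.
  # Its output is shaped by weights learned from a training set,
  # i.e. by a statistical prior, not only by the input frame.
  class TinyBrightener(nn.Module):
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
              nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
              nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
          )

      def forward(self, x):
          return self.net(x)

  model = TinyBrightener()
  dark = torch.rand(1, 3, 64, 64) * 0.1   # simulated low-light frame
  bright = model(dark)                     # prediction shaped by learned priors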


There's a difference between applying known priors, and making things up based on statistics. Conflating the two isn't helping anyone.


Not to harp on this, but the point is that, as I understand it, both “systems” are using exogenous information to extrapolate more data than is actually present in the source image.

That’s not to say that the same “thing” is happening at the granular level at all.

But this is distinctly different from standard filtering functions, which can only work with the entropy already present in the source image. So there's a real distinction there (sketched below).

The output from the CNN is essentially an "artist's interpretation" of the source image. As such, there could be "clarifying details" in the output that were in fact totally invented and not actually present in the source.
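For contrast with the filtering point above, here is an illustrative NumPy sketch of a conventional operation (gamma correction, as one example of a standard filter): it is a fixed, pointwise function of the input pixels, so it can redistribute or amplify what is already there, but it has no external prior from which to draw new detail.

  import numpy as np

  def gamma_correct(img, gamma=0.5):
      """Classic brightening: a fixed function of the input pixels alone.
      No training data is involved, so nothing beyond the information
      already in the frame can appear in the output."""
      return np.clip(img, 0.0, 1.0) ** gamma

  dark = np.random.rand(64, 64, 3) * 0.1
  brightened = gamma_correct(dark)   # every output value derives only from the input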



