Kind of like how this well-trained CNN is no longer relying entirely on the raw pixel values, but is statistically inferring a brighter image from that baseline.
Not to harp on this, but the point is that, as I understand it, both “systems” are using exogenous information to extrapolate more data than is actually present in the source image.
That’s not to say the same “thing” is happening at the granular level, of course.
But this is distinctly different from standard filtering functions, which can only work with the information (entropy) already present in the source image. So there’s a neat distinction.
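To make that distinction concrete, here’s a toy Python/NumPy sketch (nothing from any real model, and the names like `learned_brighten` and the “trained” gain/bias are purely hypothetical): a fixed-kernel filter can only recombine pixel values it was handed, while the “learned” step drags in parameters fit on other images entirely.

```python
import numpy as np

# --- Classical filtering: output is a pure function of the input pixels. ---
# A 3x3 sharpening kernel; every output value is a weighted sum of values
# that already exist somewhere in the source image.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)

def sharpen(img: np.ndarray) -> np.ndarray:
    """Naive 2D convolution with a fixed kernel (edge padding)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * SHARPEN)
    return np.clip(out, 0, 255)

# --- "Learned" enhancement: output also depends on weights fit elsewhere. ---
# `weights` stands in for parameters trained on an external dataset of
# dark/bright image pairs -- the exogenous information mentioned above.
def learned_brighten(img: np.ndarray, weights: dict) -> np.ndarray:
    """Toy per-pixel model: gain and bias learned from *other* images."""
    return np.clip(weights["gain"] * img + weights["bias"], 0, 255)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark = rng.integers(0, 60, size=(8, 8)).astype(np.float32)  # a dim image

    # The filter can only redistribute contrast that is already there.
    print("filter max:", sharpen(dark).max())

    # The learned model injects statistics from its training data, so the
    # output can be brighter (or more "detailed") than anything in the source.
    trained = {"gain": 3.2, "bias": 15.0}  # hypothetical learned parameters
    print("learned max:", learned_brighten(dark, trained).max())
```

Obviously a real CNN is vastly more complicated than a per-pixel gain and bias, but the structural point is the same: the second function’s output depends on data that was never in the image you gave it.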
The output from the CNN is essentially an “artist’s interpretation” of the source image. As such, there could be “clarifying details” in the output that were in fact totally invented and not actually present in the source.