You are conflating a lot of different things here and some of your math is off.
First, 1920x1080 pixels times 3 bytes per pixel for RGB is about 6.2 MB. I'm not sure where you got an alpha channel from. Also, an entire screen of pixels isn't copied from memory just to refresh a text buffer.
Second, the Z80 couldn't transfer a byte every cycle, so that's 500 KB/s.
Third, DDR4's bandwidth isn't 1.6 GB/s; that would be absurdly low. We can call it about 20 GB/s: https://www.transcend-info.com/Support/FAQ-292
The comparison for pixel data is 6.2 million bytes vs. 107,000 bytes, a ratio of about 58. The comparison for memory bandwidth is 500 KB/s vs. 20 GB/s, a ratio of 40,000.
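For anyone who wants to sanity-check the arithmetic, here's a quick sketch (the 107,000-byte figure is carried over from the upthread comparison):

```python
# Back-of-envelope check of the numbers above.

# Full-screen RGB framebuffer, 3 bytes per pixel, no alpha channel:
framebuffer = 1920 * 1080 * 3   # 6,220,800 bytes, ~6.2 MB
print(framebuffer)

# Pixel-data ratio: modern framebuffer vs. the ~107 KB upthread figure
pixel_ratio = framebuffer / 107_000
print(round(pixel_ratio))       # ~58

# Bandwidth ratio: ~20 GB/s DDR4 vs. ~500 KB/s for the Z80
bandwidth_ratio = 20e9 / 500e3
print(int(bandwidth_ratio))     # 40,000
```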
> Add a few extra features like anti-aliasing, more sophisticated layout management, syntax highlighting, etc. and it all seems to track.

These don't have anything to do with memory bandwidth and are back to CPUs.
> Finally, there are other sources of perceptible differences in UI/UX due to the growing depth of the software stack:

This has nothing to do with anything else in this thread. The original commenter said text rendering was expensive; that has nothing to do with input latency.