I'm wondering whether any recent games abuse the GPU for non-graphics tasks.


If it has embarrassingly parallel tasks that it can dispatch to a massively parallel subsystem dedicated to solving embarrassingly parallel tasks, is that abuse or smart use of resources?

That being said, most simulation games are memory-latency and memory-bandwidth bound, not compute bound.


It looks like a perfect match. At first. Then you realize that you are not alone.

Very much like how the Java parallel stream API does wonders in dev but not in production: in a webserver, many other threads serving other requests are also starving for the same CPU cores.
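A minimal sketch of that failure mode, assuming a stock JVM: every parallel stream runs on the single shared common ForkJoinPool, so concurrent "requests" (simulated here with plain threads; handleRequest is a hypothetical stand-in for real per-request work) all queue up behind the same few workers, and per-request latency degrades under load even though each request looked fast in isolation.

    import java.util.concurrent.ForkJoinPool;
    import java.util.stream.LongStream;

    public class CommonPoolContention {

        // Hypothetical stand-in for a per-request computation.
        static long handleRequest() {
            // parallel() runs on the shared common ForkJoinPool by default,
            // no matter which request thread calls it.
            return LongStream.rangeClosed(1, 100_000_000L).parallel().sum();
        }

        public static void main(String[] args) throws InterruptedException {
            // Typically availableProcessors() - 1 workers for the whole JVM.
            System.out.println("common pool parallelism: "
                    + ForkJoinPool.commonPool().getParallelism());

            // Simulate a webserver: each request gets its own thread, but all
            // of them funnel their parallel-stream work into the one pool.
            Thread[] requests = new Thread[16];
            for (int i = 0; i < requests.length; i++) {
                final int id = i;
                requests[i] = new Thread(() -> {
                    long t0 = System.nanoTime();
                    handleRequest();
                    System.out.printf("request %d took %d ms%n",
                            id, (System.nanoTime() - t0) / 1_000_000);
                });
                requests[i].start();
            }
            for (Thread t : requests) {
                t.join();
            }
        }
    }

The usual workaround is to submit the stream from inside a dedicated ForkJoinPool so each subsystem gets its own workers, which is roughly the situation a game is in when its simulation code has to share the GPU with the renderer.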



