
I wonder how good these are for LLMs (compared to the M3 Pro/Max)... They talk about the Neural Engine a lot in the press release.


I'm not sure we can leverage the neural cores yet, but these machines are already rather good for LLMs, depending on which metrics you value most.

A specced-out Mac Studio (M2 Ultra being the latest model as of today) isn't cheap, but it can run 180B models, run them fast for the price, and draw under 300W doing it. It idles below 10W as well.
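For context, a rough back-of-envelope sketch of why a large quantized model can fit in a maxed-out Mac Studio's unified memory (192 GB on the M2 Ultra configuration). This is weights-only arithmetic, not a benchmark; KV cache and runtime overhead add more on top:

```python
# Approximate memory needed to hold model weights at common
# quantization levels. Weights only; actual usage is higher once
# the KV cache and runtime buffers are included.

def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB for params_b billion parameters."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"180B at {bits}-bit: ~{weight_gb(180, bits):.0f} GB")
# 180B at 16-bit: ~360 GB
# 180B at 8-bit:  ~180 GB
# 180B at 4-bit:  ~90 GB
```

So a 180B model only fits at 4-bit (or similarly aggressive) quantization, which is how tools like llama.cpp typically run models of that size on this hardware.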



