Hacker News

With this model:

https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-GGUF

Man, I need to test the Q8 version with llamafile's optimizations. It would be so nice to host it locally on the new Ryzens; it could maybe fit in my 96GB of RAM.
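A back-of-the-envelope check on whether it fits (figures here are assumptions, not from the thread: ~72.7B parameters per the Qwen2.5-72B model card, and GGUF Q8_0 storing blocks of 32 int8 weights plus a 2-byte fp16 scale, roughly 8.5 bits per weight):

```python
# Rough memory estimate for a Q8_0 GGUF of Qwen2.5-72B.
# Assumptions: ~72.7B parameters; Q8_0 packs 32 int8 weights
# plus one fp16 scale per block, i.e. 34 bytes per 32 weights.

PARAMS = 72.7e9
BYTES_PER_WEIGHT = 34 / 32  # ~1.0625 bytes (~8.5 bits) per weight

weights_gb = PARAMS * BYTES_PER_WEIGHT / 1e9
print(f"Q8_0 weights alone: ~{weights_gb:.0f} GB")

# KV cache and runtime buffers add several more GB depending on
# context length, so 96 GB is a tight but plausible fit.
```

So the weights alone land in the high 70s of GB, leaving limited headroom on a 96GB machine once the KV cache and OS are accounted for.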


