M1 with 32 GB RAM. I can just about fit the 4-bit quantized 33B Code Llama model (and its fine-tunes, e.g. WizardCoder) into memory. It's somewhat slow, but good enough for my purposes.
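For anyone wondering how a model that size squeezes into 32 GB: a quick back-of-the-envelope sketch (bits/weight is illustrative — popular 4-bit quants effectively use ~4.5 bits once scales are included, and real GGUF files add some overhead on top):

```python
def weight_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 33B parameters at ~4.5 effective bits/weight vs. unquantized fp16:
print(f"4-bit-ish: {weight_gb(33, 4.5):.1f} GB")   # ~18.6 GB -- tight but fits in 32 GB
print(f"fp16:      {weight_gb(33, 16):.1f} GB")    # ~66 GB -- no chance without quantization
```

So quantization is what makes the difference between "just about fits" and "not even close" on this hardware.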
Edit: when I bought my MacBook in 2021, I was like "Ok, I'll just take the base model and add another 16 GB of RAM. That should future-proof it for at least another half-decade." Famous last words.
This is why my rule for laptops with non-upgradable memory has been to max out the RAM at purchase -- and that has been my rule since 2012/2013 or whenever that trend really started.
Thank you. I had to reduce the context length to get this to work without crashing (from 16k to 8k)—and I'm seeing the ~100% speed-up you mentioned.
However, when I run the LLM, macOS becomes sluggish. I assume this is because the GPU is saturated to the point where hardware-accelerated rendering is starved of resources.
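If anyone's curious why dropping the context from 16k to 8k was enough to stop the crashes: the KV cache grows linearly with context length, so halving the context halves that allocation. A rough sketch with illustrative Llama-family numbers (48 layers, a 1024-wide KV projection as with grouped-query attention, fp16 cache entries — the exact figures depend on the specific model):

```python
def kv_cache_gb(n_layers: int, kv_dim: int, context_len: int,
                bytes_per_elem: int = 2) -> float:
    """Approximate KV cache size in GB; 2x for storing both keys and values."""
    return 2 * n_layers * kv_dim * context_len * bytes_per_elem / 1e9

print(f"16k context: {kv_cache_gb(48, 1024, 16384):.1f} GB")  # ~3.2 GB
print(f" 8k context: {kv_cache_gb(48, 1024, 8192):.1f} GB")   # ~1.6 GB
```

A GB or two is exactly the kind of margin that matters when the weights already eat most of a 32 GB machine.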
llama.cpp will run LLMs that have been converted to the GGUF format. If you have enough RAM, you can even run the big 70-billion-parameter models. If you have a CUDA GPU, you can offload part of the model onto the GPU and have the CPU do the rest, so you get a partial performance benefit.
The issue is that the big models run too slowly on a CPU to feel interactive. Without a GPU, you'll get much more reasonable performance running a smaller 7 billion parameter model instead. The responses won't be as good as the larger models, but they may still be good enough to be worthwhile.
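Rough intuition for why the big models feel so slow on CPU: generation is mostly memory-bandwidth bound, since every token has to stream the full set of weights through memory. So a crude upper bound is tokens/sec ≈ memory bandwidth / model size (numbers below are illustrative, ignoring compute and cache effects):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Crude bandwidth-bound ceiling on generation speed."""
    return bandwidth_gb_s / model_gb

# A typical dual-channel DDR4 desktop manages ~50 GB/s:
print(f"70B @ ~40 GB quantized: {max_tokens_per_sec(50, 40):.1f} tok/s")  # ~1 tok/s
print(f" 7B @ ~4 GB quantized:  {max_tokens_per_sec(50, 4):.1f} tok/s")  # ~12 tok/s
```

Around 1 token/sec is painful for interactive use, which is why the 7B models feel an order of magnitude more usable on the same hardware.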
Also, development in this space is still moving extremely fast, especially for specialized models like ones tuned for coding.
They do run, just slowly. Still better than nothing if you want to run something larger than would fit in your VRAM though. The llama.cpp project is the most popular runtime, but I think all the major ones have a flag like "--cpu".