Benchmarking Local LLMs

There’s probably a database or registry of local LLM benchmarks out there already, but I wanted numbers for my own hardware. I recently returned an M4 Mac Mini (base CPU, 24GB RAM) and ordered a Mac Studio M4 Max (16-core CPU / 40-core GPU / 16-core Neural Engine, 128GB RAM).

I also tested on a MacBook Pro (M1 Pro, 32GB RAM) I have access to, and gave each machine the following prompt:

write a simple Flask API with OpenAPI endpoints

| LLM model | M1 Pro, 32GB RAM (16 GPU cores) | M4 Max, 128GB RAM (40 GPU cores) | Intel NUC12i5, 64GB RAM (IPEX-LLM Ollama) | M5, 32GB RAM (10 GPU cores) |
| --- | --- | --- | --- | --- |
| qwq:32B | 4.97 | 18.6, 16.8, 17.3 | | 6.0, 6.0, 5.7 |
| qwen2.5:7b | 26.1 | 71.2, 73.2, 72.1 | 5.09, 4.95, 5.27 | 26.1, 25.9, 25.0 |
| a-m-team/AM-Thinking-v1 | 5.73 | 15.6, 15.4 | | |
| qwen2.5:72b | | 8.75, 8.84, 8.71 | | |

All figures are eval rates in tokens/s; comma-separated values are separate runs.
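
If you want to reproduce these numbers, the eval rate can be read straight off Ollama's API: with streaming disabled, the `/api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (nanoseconds spent generating), the same data behind the eval rate line that `ollama run --verbose` prints. Here's a minimal sketch in Python, assuming a local Ollama on its default port 11434 and the `requests` library; the model list is just an example:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
PROMPT = "write a simple Flask API with OpenAPI endpoints"

def eval_rate(model: str) -> float:
    """Return one run's eval rate in tokens/s.

    With stream=False, Ollama's final response carries eval_count
    (generated tokens) and eval_duration (nanoseconds spent generating).
    """
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=600,  # large models can take a while per run
    )
    resp.raise_for_status()
    data = resp.json()
    return data["eval_count"] / data["eval_duration"] * 1e9  # ns -> s

if __name__ == "__main__":
    for model in ("qwen2.5:7b", "qwq:32b"):  # example models from the table
        rates = [eval_rate(model) for _ in range(3)]  # three runs, as in the table
        print(model, ", ".join(f"{r:.1f}" for r in rates))
```

Note that the first run of each model also pays a one-time load cost (reported separately as `load_duration` in the same response), so warm runs are the comparable ones.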

I vlogged about these benchmarks, and the setup of my new machine, here.