Yes: DeepSeek R1 32B runs at 18 tok/s on an M4 Max with 48 GB of RAM using Q4_K_M quantization via LM Studio, with a first-token latency of 1.8s. DeepSeek's 32B reasoning model delivers frontier-grade results locally.
LLMCheck measured DeepSeek R1 32B on the M4 Max using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and 3 runs averaged on a freshly booted system.
| Metric | Value |
|---|---|
| Tokens per second | 18 tok/s |
| Time to first token | 1.8s |
| Quantization | Q4_K_M |
| Minimum RAM | 48 GB |
| Recommended engine | LM Studio |
| Parameters | 32B |
| Benchmark date | 2026-01 |
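To sanity-check these numbers on your own machine, the sketch below streams a completion from LM Studio's local OpenAI-compatible server (its default address is http://localhost:1234/v1) and times it the same way: time to first token, then decode speed, averaged over 3 runs. The model identifier and prompt are placeholders, and counting streamed chunks only approximates counting tokens, so expect small deviations from LLMCheck's figures.

```python
# Minimal sketch of the measurement loop; assumes LM Studio is serving the model
# on its default local endpoint and that the identifier below matches what the
# LM Studio UI shows for your download (it is a placeholder here).
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

MODEL = "deepseek-r1-distill-qwen-32b"  # placeholder identifier; check LM Studio's model list
PROMPT = "Summarize the history of the transistor."  # stand-in for the 256-token input

def run_once():
    """Return (time_to_first_token_s, decode_tokens_per_s) for one streamed completion."""
    start = time.perf_counter()
    first = None
    chunks = 0
    stream = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=512,        # matches the 512-token output cap in the methodology
        temperature=0.0,
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first is None:
                first = time.perf_counter()
            chunks += 1        # roughly one token per streamed chunk
    end = time.perf_counter()
    ttft = (first - start) if first else float("nan")
    tps = chunks / (end - first) if first and chunks else 0.0
    return ttft, tps

if __name__ == "__main__":
    runs = [run_once() for _ in range(3)]  # 3 runs averaged, as in the methodology
    avg_ttft = sum(r[0] for r in runs) / len(runs)
    avg_tps = sum(r[1] for r in runs) / len(runs)
    print(f"time to first token: {avg_ttft:.2f}s, decode speed: {avg_tps:.1f} tok/s")
```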
LM Studio is the recommended engine for DeepSeek R1 32B on M4 Max; if you prefer the command line, Ollama is the quickest way to get started. Install Ollama, then pull the model.
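The snippet below assumes the Ollama desktop app for macOS is installed (it bundles the background server) and uses the `deepseek-r1:32b` tag from the Ollama model library; confirm the tag at ollama.com/library/deepseek-r1 before pulling.

```bash
# Pull and run the 32B DeepSeek R1 model; the weights are downloaded on the
# first run, after which an interactive chat session opens in the terminal.
ollama run deepseek-r1:32b
```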
Ollama handles quantization automatically: it downloads the Q4_K_M variant (about 20 GB) and starts an interactive chat session.
For context, here is how DeepSeek R1 32B performs on neighboring Apple silicon chips:

| Chip | Speed | First Token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 27 tok/s | 1.2s | 64 GB | Ollama |
| **M4 Max** | 18 tok/s | 1.8s | 48 GB | LM Studio |
| M3 Max | 14 tok/s | 2.0s | 36 GB | Ollama |
To run DeepSeek R1 32B on M4 Max you need:

- A Mac with an M4 Max chip and at least 48 GB of unified memory
- LM Studio (recommended) or Ollama installed
- About 20 GB of free disk space for the Q4_K_M weights
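To confirm the hardware side before downloading anything, the quick check below uses two standard macOS commands; it only reports the chip name and installed memory, nothing model-specific.

```bash
# Print the chip model and installed unified memory.
system_profiler SPHardwareDataType | grep -E "Chip|Memory"

# Raw memory size in bytes (48 GB corresponds to 51539607552).
sysctl -n hw.memsize
```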
See how DeepSeek R1 32B stacks up against other models on your specific Mac hardware.