Yes: DeepSeek R1 70B runs at 11 tok/s on an M5 Max with 128 GB of RAM using Q4_K_M quantization via Ollama, with a time to first token of 3.0s. As DeepSeek's full 70B reasoning distillation, it requires a Mac with at least 64 GB of unified memory.
LLMCheck measured DeepSeek R1 70B on the M5 Max using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and three runs averaged on a freshly booted system.
| Metric | Value |
|---|---|
| Tokens per second | 11 tok/s |
| Time to first token | 3.0s |
| Quantization | Q4_K_M |
| Minimum RAM | 64 GB |
| Recommended engine | Ollama |
| Parameters | 70B |
| Benchmark date | 2026-03 |
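These figures can be sanity-checked against a local Ollama server. The sketch below is a minimal reproduction harness, not LLMCheck's actual tooling: it assumes Ollama's default endpoint (`http://localhost:11434`) and reads the `eval_count`/`eval_duration` counters that the streaming `/api/generate` endpoint reports in its final chunk. The prompt is illustrative rather than the exact 256-token input.

```python
import json
import time
import urllib.request

URL = "http://localhost:11434/api/generate"
PROMPT = "Explain, step by step, how the fast inverse square root trick works."

def benchmark(prompt: str, max_tokens: int = 512) -> dict:
    """Time one generation; return time-to-first-token and decode tok/s."""
    payload = json.dumps({
        "model": "deepseek-r1:70b",
        "prompt": prompt,
        "stream": True,
        "options": {"num_predict": max_tokens},
    }).encode()
    req = urllib.request.Request(URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    start = time.perf_counter()
    ttft, final = None, {}
    with urllib.request.urlopen(req) as resp:
        for line in resp:                          # Ollama streams one JSON object per line
            chunk = json.loads(line)
            if ttft is None and chunk.get("response"):
                ttft = time.perf_counter() - start  # first generated token arrived
            if chunk.get("done"):
                final = chunk                      # final chunk carries nanosecond counters
    return {
        "ttft_s": ttft,
        "tok_per_s": final["eval_count"] / final["eval_duration"] * 1e9,
    }

if __name__ == "__main__":
    runs = [benchmark(PROMPT) for _ in range(3)]   # LLMCheck averages three runs
    print("avg tok/s:", sum(r["tok_per_s"] for r in runs) / len(runs))
    print("avg TTFT (s):", sum(r["ttft_s"] for r in runs) / len(runs))
```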
The recommended engine for DeepSeek R1 70B on M5 Max is Ollama. Install Ollama, then pull the model:
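```bash
ollama pull deepseek-r1:70b   # download the Q4_K_M weights
ollama run deepseek-r1:70b    # start an interactive chat session
```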
Ollama handles quantization automatically: `ollama run` downloads the Q4_K_M variant (a ~43 GB download) and starts an interactive chat session.
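Once the model is pulled, it can also be driven from code rather than the interactive prompt. Below is a minimal sketch against Ollama's local `/api/chat` endpoint; the prompt is only an example.

```python
import json
import urllib.request

# One-shot, non-streaming chat call to a local Ollama server (default port 11434).
payload = json.dumps({
    "model": "deepseek-r1:70b",
    "messages": [{"role": "user", "content": "Summarize the CAP theorem in two sentences."}],
    "stream": False,
}).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
# R1 models include their chain of thought in <think> tags within the content.
print(reply["message"]["content"])
```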
For comparison, here is the same model on other Apple silicon chips:

| Chip | Speed | First Token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 11 tok/s | 3.0s | 64 GB | Ollama |
| M4 Ultra | 16 tok/s | 2.2s | 192 GB | MLX |
To run DeepSeek R1 70B on M5 Max you need:

- A Mac with at least 64 GB of unified memory (the benchmark above used a 128 GB configuration)
- Roughly 43 GB of free disk space for the Q4_K_M weights
- Ollama installed (a preflight sketch follows this list)
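Before kicking off the ~43 GB download, a quick preflight check can confirm the machine meets these requirements. This is an illustrative, macOS-only sketch using `sysctl hw.memsize` to read unified memory:

```python
import shutil
import subprocess

GIB = 1024 ** 3

# macOS-only preflight: unified memory, free disk space, and the Ollama CLI.
ram = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).decode().strip())
free = shutil.disk_usage("/").free
assert ram >= 64 * GIB, "DeepSeek R1 70B needs a Mac with 64 GB+ unified memory"
assert free >= 43 * GIB, "Need roughly 43 GB free for the Q4_K_M download"
assert shutil.which("ollama"), "Ollama CLI not found; install it from ollama.com"
print(f"OK: {ram / GIB:.0f} GB RAM, {free / GIB:.0f} GB free disk")
```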
See how DeepSeek R1 70B stacks up against other models on your specific Mac hardware.