Run DeepSeek R1 32B on M3 Max

Yes: DeepSeek R1 32B runs at 14 tok/s on an M3 Max with 36 GB of RAM using Q4_K_M quantization via Ollama, with a first-token latency of 2.0 s. DeepSeek's 32B reasoning model delivers frontier-grade results locally.

Speed: 14 tok/s
First Token: 2.0 seconds
RAM Needed: 36 GB minimum
Engine: Ollama (recommended)

Benchmark Details

LLMCheck measured DeepSeek R1 32B on M3 Max using the standard methodology: Q4_K_M quantization, 256-token input, 512-token output, three runs averaged on a freshly booted system.
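
You can approximate this measurement yourself: Ollama's `--verbose` flag prints timing statistics after each response, including generation speed as "eval rate". A minimal sketch, assuming Ollama is already installed; the prompt is illustrative and not LLMCheck's exact harness:

```sh
# One-time download of the model weights
ollama pull deepseek-r1:32b

# --verbose prints load time, prompt eval rate, and eval rate (tok/s)
# after the response; run three times and average the eval rate.
ollama run deepseek-r1:32b --verbose "Explain the Monty Hall problem."
```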

| Metric | Value |
| --- | --- |
| Tokens per second | 14 tok/s |
| Time to first token | 2.0 s |
| Quantization | Q4_K_M |
| Minimum RAM | 36 GB |
| Recommended engine | Ollama |
| Parameters | 32B |
| Benchmark date | 2025-12 |
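
To put these numbers in context: at 14 tok/s, generating the full 512-token benchmark output takes about 512 / 14 ≈ 37 seconds, on top of the 2.0 s wait before the first token appears.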

Setup Guide: Run DeepSeek R1 32B on M3 Max

The recommended engine for DeepSeek R1 32B on M3 Max is Ollama. Install Ollama, then pull the model:

```sh
ollama run deepseek-r1:32b
```

Ollama handles quantization automatically: it downloads the Q4_K_M variant (roughly 20 GB on disk) and starts an interactive chat session. Note that the 36 GB figure above is the RAM needed at runtime, not the download size.
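
Beyond the interactive session, Ollama also exposes a local REST API (port 11434 by default), which is handy for scripting or integrating the model into other tools. A minimal sketch; the prompt is illustrative:

```sh
# Query the running Ollama server; "stream": false returns a single
# JSON object instead of streaming one JSON line per token.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:32b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```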

Performance on Other Apple Silicon Chips

| Chip | Speed | First Token | Min RAM | Engine |
| --- | --- | --- | --- | --- |
| M5 Max | 27 tok/s | 1.2 s | 64 GB | Ollama |
| M4 Max | 18 tok/s | 1.8 s | 48 GB | LM Studio |

System Requirements

To run DeepSeek R1 32B on M3 Max you need:

- At least 36 GB of unified memory (the measured minimum for the Q4_K_M quant)
- Roughly 20 GB of free disk space for the model download
- Ollama installed (the recommended engine for this configuration)

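Before pulling the model, you can verify the installed memory from the terminal; a quick check using standard macOS tools:

```sh
# hw.memsize reports installed RAM in bytes; convert to GB.
# Anything under 36 GB will not hold the Q4_K_M weights plus context.
sysctl -n hw.memsize | awk '{ printf "%.0f GB RAM installed\n", $1 / 1073741824 }'
```
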
Compare More Models

See how DeepSeek R1 32B stacks up against other models on your specific Mac hardware.
