Run DeepSeek R1 8B on M2

Yes. DeepSeek R1 8B runs at 58 tok/s on an M2 with 16 GB of RAM using Q4_K_M quantization via Ollama, with a time to first token of 0.8 s. It is DeepSeek's MIT-licensed 8B-parameter reasoning model with chain-of-thought output.

Speed: 58 tok/s
First token: 0.8 seconds
RAM needed: 16 GB minimum
Engine: Ollama (recommended)

Benchmark Details

LLMCheck measured DeepSeek R1 8B on an M2 using its standard methodology: Q4_K_M quantization, 256-token input, 512-token output, and 3 runs averaged on a freshly booted system.

Metric                Value
Tokens per second     58 tok/s
Time to first token   0.8s
Quantization          Q4_K_M
Minimum RAM           16 GB
Recommended engine    Ollama
Parameters            8B
Benchmark date        2026-01
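
To reproduce a comparable number on your own machine, you can read the timing statistics Ollama attaches to each response. The sketch below is a minimal example rather than LLMCheck's harness: it assumes Ollama is running locally on its default port with deepseek-r1:8b already pulled, computes tokens per second from the eval_count and eval_duration fields of a non-streaming /api/generate call, and approximates time to first token from the load and prompt-evaluation durations instead of a true streamed first token.

# Estimate generation speed for deepseek-r1:8b from the timing fields
# Ollama returns on its local HTTP API (durations are in nanoseconds).
import json
import urllib.request

payload = {
    "model": "deepseek-r1:8b",
    "prompt": "Explain the birthday paradox in two sentences.",
    "stream": False,                  # one JSON object with timing stats
    "options": {"num_predict": 512},  # match the 512-token output budget
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    stats = json.load(resp)

tok_per_s = stats["eval_count"] / (stats["eval_duration"] / 1e9)
ttft_s = (stats["load_duration"] + stats["prompt_eval_duration"]) / 1e9
print(f"{tok_per_s:.1f} tok/s, ~{ttft_s:.2f}s to first token")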


Setup Guide: Run DeepSeek R1 8B on M2

The recommended engine for DeepSeek R1 8B on M2 is Ollama. Install Ollama, then pull the model:

ollama run deepseek-r1:8b

Ollama handles quantization automatically: it downloads the Q4_K_M variant (a roughly 5 GB download) and starts an interactive chat session.
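
Ollama also serves a local HTTP API (by default at http://localhost:11434), which is useful once you want to script the model rather than chat with it. Below is a minimal sketch, assuming the model has already been pulled: because R1-style models emit their chain-of-thought between <think> and </think> tags, it strips that block and prints only the final answer.

# Query deepseek-r1:8b through Ollama's local API and drop the reasoning
# block so only the final answer is printed.
import json
import re
import urllib.request

payload = {
    "model": "deepseek-r1:8b",
    "prompt": "Is 2027 a prime number? Answer briefly.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    text = json.load(resp)["response"]

# R1 models wrap their reasoning in <think>...</think>; keep only the answer.
answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
print(answer)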

Performance on Other Apple Silicon Chips

Chip      Speed      First Token   Min RAM   Engine
M5 Max    97 tok/s   0.5s          64 GB     Ollama
M4        78 tok/s   0.5s          16 GB     MLX
M1        38 tok/s   1.2s          16 GB     Ollama

System Requirements

To run DeepSeek R1 8B on M2 you need:

- An M2 Mac with at least 16 GB of unified memory
- Roughly 5 GB of free disk space for the Q4_K_M download
- Ollama installed (the recommended engine for this chip)
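
If you want to confirm the memory side before downloading anything, the short sketch below reads total RAM from macOS's sysctl hw.memsize and compares it against the 16 GB minimum from the table above.

# Check that this Mac meets the 16 GB unified-memory minimum.
# `sysctl -n hw.memsize` reports total physical memory in bytes on macOS.
import subprocess

mem_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).decode().strip())
mem_gb = mem_bytes / 1024 ** 3

if mem_gb >= 16:
    print(f"{mem_gb:.0f} GB RAM: enough for DeepSeek R1 8B at Q4_K_M")
else:
    print(f"{mem_gb:.0f} GB RAM: below the 16 GB minimum")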

Compare More Models

See how DeepSeek R1 8B stacks up against other models on your specific Mac hardware.
