Yes — Llama 3.1 8B runs at 48 tok/s on an M2 with 8 GB RAM using Q4_K_M quantization via Ollama, with a first-token latency of 0.8s. It is Meta's widely adopted 8B model and has strong ecosystem support.
LLMCheck measured Llama 3.1 8B on the M2 using its standard methodology: Q4_K_M quantization, a 256-token prompt, a 512-token output, averaged over 3 runs on a freshly booted system.
| Metric | Value |
|---|---|
| Tokens per second | 48 tok/s |
| Time to first token | 0.8s |
| Quantization | Q4_K_M |
| Minimum RAM | 8 GB |
| Recommended engine | Ollama |
| Parameters | 8B |
| Benchmark date | 2025-12 |
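The throughput figure above can be sanity-checked locally: Ollama's `--verbose` flag prints generation timings, including an `eval rate` in tokens/s, after each response. A minimal sketch, assuming the model has already been pulled (the prompt text is just an example):

```shell
# Print timing stats (prompt eval rate, eval rate) after the response;
# requires the ollama binary and the llama3.1:8b model already downloaded.
ollama run llama3.1:8b --verbose "Summarize quantization in two sentences."
```

The reported `eval rate` is the generation throughput to compare against the 48 tok/s figure; expect some variance run to run, which is why the benchmark averages 3 runs.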
The recommended engine for Llama 3.1 8B on M2 is Ollama. Install Ollama, then pull the model:
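A minimal sketch of the two commands, assuming Homebrew for installation (the installer from ollama.com also works) and the standard `llama3.1:8b` tag from the Ollama model library:

```shell
# Install the Ollama runtime on macOS
brew install ollama
# Download the default quantized build and open an interactive chat
ollama run llama3.1:8b
```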
Ollama handles quantization automatically — it will download the Q4_K_M variant (~8 GB) and start an interactive chat session.
The same benchmark across Apple Silicon chips (M2 row included for reference):

| Chip | Speed | First token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 138 tok/s | 0.3s | 128 GB | MLX |
| M4 | 75 tok/s | 0.6s | 16 GB | Ollama |
| M3 Pro | 62 tok/s | 0.7s | 18 GB | Ollama |
| M2 | 48 tok/s | 0.8s | 8 GB | Ollama |
| M1 | 40 tok/s | 1.1s | 16 GB | Ollama |
To run Llama 3.1 8B on M2 you need:

- At least 8 GB of RAM
- Ollama installed
- Roughly 8 GB of free disk space for the Q4_K_M download
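As a quick pre-flight check before starting the download, total installed RAM can be read from the OS. A sketch using macOS's `sysctl hw.memsize`, with a `/proc/meminfo` fallback so it also runs on Linux:

```shell
# Read total RAM in bytes: hw.memsize on macOS, /proc/meminfo on Linux
if sysctl -n hw.memsize >/dev/null 2>&1; then
  mem_bytes=$(sysctl -n hw.memsize)
else
  mem_bytes=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) * 1024 ))
fi
mem_gb=$((mem_bytes / 1024 / 1024 / 1024))
echo "Installed RAM: ${mem_gb} GB"
# Llama 3.1 8B at Q4_K_M needs at least 8 GB
if [ "$mem_gb" -ge 8 ]; then
  echo "Meets the 8 GB minimum"
else
  echo "Below the 8 GB minimum"
fi
```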
See how Llama 3.1 8B stacks up against other models on your specific Mac hardware.