Run Llama 3.1 8B on M2

Yes. Llama 3.1 8B runs at 48 tok/s on an M2 with 8 GB of RAM using Q4_K_M quantization via Ollama, with a first-token latency of 0.8 s. It is Meta's widely adopted 8B model and has strong ecosystem support.

Speed: 48 tok/s
First token: 0.8 seconds
RAM needed: 8 GB minimum
Engine: Ollama (recommended)

Benchmark Details

LLMCheck measured Llama 3.1 8B on M2 using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and 3 runs averaged on a freshly booted system.

Tokens per second: 48 tok/s
Time to first token: 0.8 s
Quantization: Q4_K_M
Minimum RAM: 8 GB
Recommended engine: Ollama
Parameters: 8B
Benchmark date: 2025-12
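
If you want to sanity-check these numbers on your own machine, a minimal sketch along the lines below measures time to first token and decode speed against Ollama's local HTTP API (assumed to be running on its default port 11434). The prompt is a stand-in, and the 256-token input and 3-run averaging from the methodology above are not reproduced exactly here.

import json
import time
import requests

URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
payload = {
    "model": "llama3.1:8b",
    "prompt": "Summarize the history of the Apollo program.",  # stand-in prompt, not LLMCheck's exact input
    "options": {"num_predict": 512},  # cap output at 512 tokens, matching the methodology
    "stream": True,
}

start = time.monotonic()
first_token_at = None
with requests.post(URL, json=payload, stream=True, timeout=600) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        # The first non-empty "response" chunk marks the first generated token.
        if first_token_at is None and chunk.get("response"):
            first_token_at = time.monotonic()
        if chunk.get("done"):
            # Ollama reports eval_count in tokens and eval_duration in nanoseconds.
            tok_s = chunk["eval_count"] / (chunk["eval_duration"] / 1e9)
            print(f"Time to first token: {first_token_at - start:.2f} s")
            print(f"Decode speed: {tok_s:.1f} tok/s")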


Setup Guide: Run Llama 3.1 8B on M2

The recommended engine for Llama 3.1 8B on M2 is Ollama. Install Ollama, then pull the model:

ollama run llama3.1:8b

Ollama handles quantization automatically: it will download the Q4_K_M variant and start an interactive chat session. Budget roughly 8 GB of RAM while the model is loaded, in line with the minimum above.
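
Once the model is pulled, you can also call it from code instead of the interactive session. A small sketch against Ollama's local chat endpoint (default port 11434) looks like the following; the prompt is just an example.

import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1:8b",
        "messages": [{"role": "user", "content": "Explain quantization in two sentences."}],
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])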

Performance on Other Apple Silicon Chips

Chip      Speed       First Token   Min RAM   Engine
M5 Max    138 tok/s   0.3s          128 GB    MLX
M4        75 tok/s    0.6s          16 GB     Ollama
M3 Pro    62 tok/s    0.7s          18 GB     Ollama
M1        40 tok/s    1.1s          16 GB     Ollama
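
On chips where MLX is listed as the recommended engine, a rough equivalent of the Ollama command above is the mlx-lm Python package. The sketch below is an assumed setup, not something benchmarked on this page, and the Hugging Face repo name is an example of a community 4-bit conversion that may differ from what you use.

# Assumed mlx-lm usage; the model identifier is an example, not a benchmarked artifact.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")
print(generate(model, tokenizer, prompt="Hello from Apple Silicon", max_tokens=64))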

System Requirements

To run Llama 3.1 8B on M2 you need:

- An M2 Mac with at least 8 GB of unified memory
- Ollama installed (the recommended engine for this chip)
- Enough free disk space for the Q4_K_M weights
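
To confirm your machine meets the memory requirement before downloading anything, a quick check like the sketch below reads the chip name and installed memory via macOS's sysctl utility; the 8 GB threshold is the minimum quoted above.

import subprocess

def sysctl(name: str) -> str:
    # Query a macOS kernel value by name, e.g. hw.memsize.
    return subprocess.run(
        ["sysctl", "-n", name], capture_output=True, text=True, check=True
    ).stdout.strip()

ram_gb = int(sysctl("hw.memsize")) / 1024**3
chip = sysctl("machdep.cpu.brand_string")
print(f"Chip: {chip}")
print(f"Unified memory: {ram_gb:.0f} GB")
print("Meets the 8 GB minimum" if ram_gb >= 8 else "Below the 8 GB minimum for Q4_K_M")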

Compare More Models

See how Llama 3.1 8B stacks up against other models on your specific Mac hardware.
