Run Gemma 3 12B on M4 Pro

Yes: Gemma 3 12B runs at 52 tok/s on an M4 Pro with 24 GB of RAM, using Q4_K_M quantization via MLX, with a first-token latency of 0.7s. It is Google's 12-billion-parameter model and offers strong reasoning on mid-range Macs.

Speed: 52 tok/s
First Token: 0.7 seconds
RAM Needed: 24 GB minimum
Engine: MLX (recommended)

Benchmark Details

LLMCheck measured Gemma 3 12B on M4 Pro using the standard methodology: Q4_K_M quantization, 256-token input, 512-token output, 3 runs averaged on a freshly booted system.
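LLMCheck's exact harness is not published on this page, but a rough version of the same measurement is easy to script. The sketch below is an approximation, not the official methodology: it assumes mlx-lm is installed (see the setup guide further down), reuses the model path from that guide, and substitutes a hypothetical prompt for the standard 256-token input.

import time
from mlx_lm import load, stream_generate

# Hypothetical prompt; LLMCheck's exact 256-token input is not reproduced here.
PROMPT = "Summarize the trade-offs of 4-bit quantization for a 12B model."

# Model path mirrors the setup guide below; weights download on first use.
model, tokenizer = load("mlx-community/gemma-3-12b-q4_k_m")

start = time.perf_counter()
first_token_time = None
n_tokens = 0

# stream_generate yields roughly one chunk per generated token,
# so counting iterations approximates the output token count.
for _ in stream_generate(model, tokenizer, PROMPT, max_tokens=512):
    if first_token_time is None:
        first_token_time = time.perf_counter() - start
    n_tokens += 1

total = time.perf_counter() - start
decode_time = total - first_token_time
print(f"time to first token: {first_token_time:.2f}s")
print(f"decode throughput:   {(n_tokens - 1) / decode_time:.1f} tok/s")

As in the methodology above, average several runs on an otherwise idle, freshly booted machine to smooth out variance from thermal state and background load.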

Metric                 Value
Tokens per second      52 tok/s
Time to first token    0.7s
Quantization           Q4_K_M
Minimum RAM            24 GB
Recommended engine     MLX
Parameters             12B
Benchmark date         2026-02


Setup Guide: Run Gemma 3 12B on M4 Pro

The recommended engine for Gemma 3 12B on M4 Pro is MLX. Install the package with pip and run a first generation; the model weights are downloaded automatically:

pip install mlx-lm
mlx_lm.generate --model mlx-community/gemma-3-12b-q4_k_m --prompt "Hello!"
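If you would rather call the model from Python than the CLI, mlx-lm also exposes a small programmatic API. A minimal sketch, assuming the same model path as the command above (API details can shift between mlx-lm releases):

from mlx_lm import load, generate

# Downloads the weights from the Hugging Face Hub on first use.
model, tokenizer = load("mlx-community/gemma-3-12b-q4_k_m")

# verbose=True also prints prompt and generation speed in tok/s.
text = generate(model, tokenizer, prompt="Hello!", max_tokens=512, verbose=True)
print(text)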

Alternatively, you can use Ollama for a simpler setup:

ollama run gemma3:12b
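Ollama also runs a local HTTP server (on port 11434 by default), which is convenient for scripting. A minimal sketch, assuming the server is running and gemma3:12b has already been pulled; it uses only the Python standard library:

import json
import urllib.request

# Non-streaming request; the final response includes timing statistics.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "gemma3:12b",
        "prompt": "Hello!",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print(result["response"])
# eval_count tokens generated over eval_duration nanoseconds.
print(f'{result["eval_count"] / result["eval_duration"] * 1e9:.1f} tok/s')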

Performance on Other Apple Silicon Chips

Chip      Speed      First Token   Min RAM   Engine
M5 Max    68 tok/s   0.6s          64 GB     Ollama
M3        38 tok/s   1.1s          16 GB     Ollama
M2        32 tok/s   1.3s          16 GB     Ollama

System Requirements

To run Gemma 3 12B on an M4 Pro you need:

At least 24 GB of unified memory
The Q4_K_M (4-bit) quantized build of the model
MLX (recommended) or Ollama as the inference engine

Compare More Models

See how Gemma 3 12B stacks up against other models on your specific Mac hardware.
