Yes — Gemma 4 26B-A4B runs at 28 tok/s on an M4 Pro with 24 GB RAM using Q4_K_M quantization via Ollama, with a first-token latency of 1.0s. Google's 26B mixture-of-experts model activates only ~3.8B parameters per token, delivering near-frontier quality within 24 GB.
LLMCheck measured Gemma 4 26B-A4B on the M4 Pro using its standard methodology: Q4_K_M quantization, a 256-token prompt, a 512-token completion, and the average of 3 runs on a freshly booted system.
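For reference, throughput numbers like the one above can be derived directly from Ollama's API: a non-streaming `/api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (decode time in nanoseconds). A minimal sketch of the calculation, with illustrative numbers only:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Decode throughput from the eval_count / eval_duration fields
    that Ollama returns in a non-streaming /api/generate response."""
    return eval_count / (eval_duration_ns / 1e9)

# Illustrative numbers (not measured): 512 generated tokens in ~18.3 s
print(round(tokens_per_second(512, 18_300_000_000), 1))  # → 28.0
```

Averaging this figure over 3 runs, as in the methodology above, smooths out thermal and caching variance.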
| Metric | Value |
|---|---|
| Tokens per second | 28 tok/s |
| Time to first token | 1.0s |
| Quantization | Q4_K_M |
| Minimum RAM | 24 GB |
| Recommended engine | Ollama |
| Parameters | 26B |
| Benchmark date | 2026-04 |
The recommended engine for Gemma 4 26B-A4B on M4 Pro is Ollama. Install Ollama, then pull the model:
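The commands below are standard Ollama usage, but the model tag is a placeholder — confirm the published name in the Ollama model library before pulling:

```shell
# Install Ollama on macOS (or download the app from ollama.com)
brew install ollama

# NOTE: "gemma4:26b-a4b" is an assumed tag for illustration --
# check the Ollama model library for the real one.
ollama pull gemma4:26b-a4b   # downloads the quantized weights
ollama run gemma4:26b-a4b    # starts an interactive chat session
```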
Ollama handles quantization automatically — it will download the Q4_K_M variant (~24 GB) and start an interactive chat session.
Here is how the M4 Pro compares with other Apple Silicon chips running Gemma 4 26B-A4B:

| Chip | Speed | First Token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 50 tok/s | 0.5s | 128 GB | MLX |
| M4 Max | 40 tok/s | 0.7s | 48 GB | MLX |
| M5 Pro | 35 tok/s | 0.8s | 24 GB | Ollama |
| M4 Pro | 28 tok/s | 1.0s | 24 GB | Ollama |
To run Gemma 4 26B-A4B on an M4 Pro you need:

- A Mac with an M4 Pro chip and at least 24 GB of unified memory
- Roughly 24 GB of free disk space for the Q4_K_M download
- Ollama installed (the recommended engine for this chip)
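The 24 GB memory requirement is worth verifying up front. A minimal check using only Python's standard library (on macOS, `SC_PHYS_PAGES` reports the unified memory pool):

```python
import os

def total_ram_gb() -> float:
    """Total physical memory in GiB via POSIX sysconf (macOS/Linux)."""
    page_size = os.sysconf("SC_PAGE_SIZE")
    num_pages = os.sysconf("SC_PHYS_PAGES")
    return page_size * num_pages / 2**30

if __name__ == "__main__":
    ram = total_ram_gb()
    verdict = "OK" if ram >= 24 else "below the 24 GB minimum"
    print(f"{ram:.1f} GiB unified memory: {verdict}")
```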
See how Gemma 4 26B-A4B stacks up against other models on your specific Mac hardware.