Yes. Gemma 3 27B runs at 35 tok/s on an M4 Max with 48 GB RAM using Q4_K_M quantization via MLX, with a time to first token of 1.1s. Google's 27B dense model delivers best-in-class reasoning for Macs with 24–48 GB of RAM.
LLMCheck measured Gemma 3 27B on the M4 Max using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and 3 runs averaged on a freshly booted system.
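If you want to reproduce the measurement, the sketch below follows that methodology using the mlx-lm command-line tool. The model repository name and the prompt file are assumptions for illustration, not part of LLMCheck's published harness:

```bash
# Sketch of an LLMCheck-style run, assuming mlx-lm is installed and that a
# 4-bit conversion is published as mlx-community/gemma-3-27b-it-4bit.
# prompt_256_tokens.txt is a hypothetical file holding a ~256-token prompt.
# mlx_lm.generate reports prompt and generation tokens-per-sec when it
# finishes; run three times after a fresh boot and average the figures.
for run in 1 2 3; do
  mlx_lm.generate \
    --model mlx-community/gemma-3-27b-it-4bit \
    --prompt "$(cat prompt_256_tokens.txt)" \
    --max-tokens 512
done
```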
| Metric | Value |
|---|---|
| Tokens per second | 35 tok/s |
| Time to first token | 1.1s |
| Quantization | Q4_K_M |
| Minimum RAM | 48 GB |
| Recommended engine | MLX |
| Parameters | 27B |
| Benchmark date | 2026-02 |
The recommended engine for Gemma 3 27B on M4 Max is MLX. Install with pip and pull the model:
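A minimal setup, assuming the 4-bit conversion is published on the mlx-community Hugging Face org (the exact repo name may differ):

```bash
pip install mlx-lm

# mlx_lm.generate pulls the weights from Hugging Face automatically on
# first use and caches them locally, so there is no separate download step.
mlx_lm.generate \
  --model mlx-community/gemma-3-27b-it-4bit \
  --prompt "Explain unified memory on Apple silicon in two sentences." \
  --max-tokens 256
```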
Alternatively, you can use Ollama for a simpler setup:
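For example, assuming the model is published in the Ollama library under the gemma3:27b tag:

```bash
# Download the weights once, then chat interactively or pass a one-shot prompt.
ollama pull gemma3:27b
ollama run gemma3:27b "List three strengths of dense 27B models."
```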
Here is how Gemma 3 27B performs on other Apple silicon chips:

| Chip | Speed | First token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 42 tok/s | 0.9s | 64 GB | Ollama |
| M3 Max | 28 tok/s | 1.3s | 96 GB | MLX |
| M4 Pro | 25 tok/s | 1.5s | 24 GB | LM Studio |
To run Gemma 3 27B on an M4 Max you need:

- At least 48 GB of unified memory
- An inference engine: MLX (recommended), or Ollama or LM Studio as alternatives
- The Q4_K_M quantized weights
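As a quick sanity check (not part of the LLMCheck methodology), you can confirm your Mac's unified memory from the terminal:

```bash
# Physical memory in bytes; divide by 2^30 to get GiB.
# An M4 Max with 48 GB reports 51539607552.
sysctl -n hw.memsize
```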
See how Gemma 3 27B stacks up against other models on your specific Mac hardware.