Yes: Phi-4 Mini (3.8B) runs at 142 tok/s on an M5 Max with 64 GB of RAM using Q4_K_M quantization via Ollama, with a time to first token of 0.3 s. Phi-4 Mini is Microsoft's efficient 3.8B-parameter model, known for fast inference on Apple Silicon.
LLMCheck measured Phi-4 Mini on M5 Max using the standard methodology: Q4_K_M quantization, 256-token input, 512-token output, 3 runs averaged on a freshly-booted system.
| Metric | Value |
|---|---|
| Tokens per second | 142 tok/s |
| Time to first token | 0.3s |
| Quantization | Q4_K_M |
| RAM (tested configuration) | 64 GB |
| Recommended engine | Ollama |
| Parameters | 3.8B |
| Benchmark date | 2026-03 |
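LLMCheck's exact harness is not published on this page, but a comparable decode-speed measurement can be scripted against a locally running Ollama server once the model has been pulled (see the install step below). The sketch that follows is an illustrative assumption, not LLMCheck's code: the phi4-mini model tag, the repeated-word prompt, and the use of Ollama's documented /api/generate endpoint on localhost:11434 are stand-ins for the real methodology.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's local REST endpoint
MODEL = "phi4-mini"      # assumed Ollama model tag for Phi-4 Mini
PROMPT = "word " * 256   # stand-in prompt of roughly 256 tokens; LLMCheck's real prompt is not published
RUNS = 3

speeds = []
for _ in range(RUNS):
    data = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "prompt": PROMPT,
        "stream": False,
        "options": {"num_predict": 512},  # cap the run at 512 generated tokens
    }, timeout=600).json()
    # eval_count = tokens generated, eval_duration = decode time in nanoseconds
    speeds.append(data["eval_count"] / (data["eval_duration"] / 1e9))

print(f"average decode speed over {RUNS} runs: {sum(speeds) / len(speeds):.1f} tok/s")
```

Ollama returns eval_count and eval_duration with each response, so tokens per second can be computed without any client-side timing.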
The recommended engine for Phi-4 Mini on M5 Max is Ollama. Install Ollama, then pull the model (the phi4-mini tag used below is assumed to be the Ollama library listing for this model):
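```bash
ollama pull phi4-mini   # downloads the quantized weights (tag assumed from the Ollama library)
ollama run phi4-mini    # opens an interactive chat session with the model
```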
Ollama handles quantization automatically: it downloads the Q4_K_M variant (a roughly 2.5 GB download for the 3.8B model) and starts an interactive chat session.
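Beyond the interactive CLI session, the pulled model can also be called from scripts through the same local server. The snippet below is a minimal sketch assuming Ollama's documented /api/chat endpoint and the phi4-mini tag used above; the prompt is only an illustration.

```python
import requests

# Single non-streaming chat completion against the local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "phi4-mini",  # assumed model tag, same as the pull step above
        "messages": [{"role": "user", "content": "Explain 4-bit quantization in two sentences."}],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```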
For comparison, LLMCheck's results for Phi-4 Mini on earlier Apple Silicon chips:
| Chip | Speed | First Token | RAM (tested) | Engine |
|---|---|---|---|---|
| M4 Max | 125 tok/s | 0.3s | 48 GB | MLX |
| M4 Pro | 108 tok/s | 0.4s | 24 GB | Ollama |
| M3 | 95 tok/s | 0.3s | 16 GB | MLX |
| M2 | 72 tok/s | 0.5s | 8 GB | Ollama |
| M1 | 58 tok/s | 0.6s | 16 GB | Ollama |
To run Phi-4 Mini on M5 Max you only need Ollama and a few gigabytes of free disk space and memory for the Q4_K_M weights; the 64 GB tested configuration is far more than this 3.8B model requires.
See how Phi-4 Mini stacks up against other models on your specific Mac hardware with LLMCheck's compare tool and full leaderboard.