Yes. GPT-oss 120B, OpenAI's 120-billion-parameter open-source model, runs at 10 tok/s on an M4 Ultra with 192 GB of RAM using Q4_K_M quantization via MLX, with a time to first token of 4.2s. It is a high-quality model, but it needs a high-memory Mac: 128 GB is the floor on other chips, and 192 GB for the M4 Ultra configuration benchmarked here.
LLMCheck measured GPT-oss 120B on the M4 Ultra using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and three runs averaged on a freshly booted system.
| Metric | Value |
|---|---|
| Tokens per second | 10 tok/s |
| Time to first token | 4.2s |
| Quantization | Q4_K_M |
| Minimum RAM | 192 GB |
| Recommended engine | MLX |
| Parameters | 120B |
| Benchmark date | 2026-02 |
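The RAM requirement follows mostly from the weight footprint plus KV-cache and OS headroom. A back-of-the-envelope estimate, assuming Q4_K_M averages roughly 4.8 bits per weight (a commonly cited figure for its mixed 4/6-bit layout, not something measured in this benchmark):

```python
# Rough memory estimate for a 120B-parameter model at Q4_K_M.
# The 4.8 bits/weight average is an assumption about Q4_K_M's
# mixed-precision layout, not a measured value.
PARAMS = 120e9
BITS_PER_WEIGHT = 4.8

weight_bytes = PARAMS * BITS_PER_WEIGHT / 8
weight_gib = weight_bytes / 2**30
print(f"approximate weight footprint: {weight_gib:.0f} GiB")
```

At roughly 67 GiB for weights alone, before activations and KV cache, a 128 GB machine is already working with limited headroom, which is consistent with the 128-192 GB minimums quoted on this page.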
The recommended engine for GPT-oss 120B on M4 Ultra is MLX. Install with pip and pull the model:
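A minimal setup might look like the following, assuming the `mlx-lm` package and an MLX-community 4-bit conversion of the model (the exact Hugging Face repo name below is an assumption; check the `mlx-community` organization for the current conversion):

```shell
# Install the MLX LM tooling (Apple Silicon only)
pip install mlx-lm

# Download and run a 4-bit MLX conversion of GPT-oss 120B.
# NOTE: the repo name is an assumption; verify it on Hugging Face.
mlx_lm.generate \
  --model mlx-community/gpt-oss-120b-4bit \
  --prompt "Explain quantization in one paragraph." \
  --max-tokens 512
```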
Alternatively, you can use Ollama for a simpler setup:
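With Ollama the setup reduces to a pull and a run; `gpt-oss:120b` is the tag Ollama uses for this model (verify it in the Ollama model library):

```shell
# Install Ollama from ollama.com, then:
ollama pull gpt-oss:120b
ollama run gpt-oss:120b "Explain quantization in one paragraph."
```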
GPT-oss 120B on other Apple Silicon chips, for comparison:

| Chip | Speed | First token | Min RAM | Engine |
|---|---|---|---|---|
| M4 Ultra | 10 tok/s | 4.2s | 192 GB | MLX |
| M5 Max | 7 tok/s | 5.5s | 128 GB | Ollama |
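The per-chip numbers translate directly into end-to-end latency. A quick estimate for the benchmark's 512-token output, using the figures from the tables above (simple arithmetic, assuming a steady decode rate):

```python
# End-to-end generation time = time to first token + tokens / decode rate.
def total_latency(ttft_s: float, tok_per_s: float, n_tokens: int) -> float:
    return ttft_s + n_tokens / tok_per_s

m4_ultra = total_latency(4.2, 10, 512)  # M4 Ultra + MLX
m5_max = total_latency(5.5, 7, 512)     # M5 Max + Ollama
print(f"M4 Ultra: {m4_ultra:.1f}s, M5 Max: {m5_max:.1f}s")
```

So the M4 Ultra finishes the 512-token benchmark run in about 55 seconds, versus roughly 79 seconds on the M5 Max.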
To run GPT-oss 120B on M4 Ultra you need:

- An M4 Ultra Mac with 192 GB of unified memory (the benchmarked minimum)
- The MLX engine (recommended) or Ollama as a simpler alternative
- The Q4_K_M-quantized model weights
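Before pulling a model of this size, it is worth confirming the machine actually has the memory. A small check using only the standard library (the 192 GB threshold is the benchmarked minimum from the table above; the `sysconf` names are POSIX, so this works on macOS and Linux):

```python
import os

def total_ram_gb() -> float:
    """Total physical memory in GB, via POSIX sysconf."""
    page_size = os.sysconf("SC_PAGE_SIZE")
    n_pages = os.sysconf("SC_PHYS_PAGES")
    return page_size * n_pages / 1e9

REQUIRED_GB = 192  # benchmarked minimum for GPT-oss 120B on M4 Ultra

if total_ram_gb() >= REQUIRED_GB:
    print("enough RAM for GPT-oss 120B")
else:
    print(f"only {total_ram_gb():.0f} GB; below the {REQUIRED_GB} GB minimum")
```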
See how GPT-oss 120B stacks up against other models on your specific Mac hardware.