Run GPT-oss 120B on M4 Ultra

Yes: GPT-oss 120B runs at 10 tok/s on an M4 Ultra with 192 GB of RAM, using Q4_K_M quantization via MLX, with a first-token latency of 4.2s. It is OpenAI's open-weight 120B model: high quality, but it needs a Mac with at least 128 GB of RAM (192 GB for the M4 Ultra configuration benchmarked here).

Speed: 10 tok/s
First Token: 4.2 seconds
RAM Needed: 192 GB minimum
Engine: MLX (recommended)

Benchmark Details

LLMCheck measured GPT-oss 120B on the M4 Ultra using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and three runs averaged on a freshly booted system.

Metric                 Value
Tokens per second      10 tok/s
Time to first token    4.2s
Quantization           Q4_K_M
Minimum RAM            192 GB
Recommended engine     MLX
Parameters             120B
Benchmark date         2026-02
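
In wall-clock terms, these numbers mean a full benchmark-length reply takes just under a minute: 4.2s to first token, plus 512 tokens at 10 tok/s (51.2s), is roughly 55s end to end.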


Setup Guide: Run GPT-oss 120B on M4 Ultra

The recommended engine for GPT-oss 120B on the M4 Ultra is MLX. Install the mlx-lm package with pip and pull the model:

pip install mlx-lm
mlx_lm.generate --model mlx-community/gpt-oss-120b-q4_k_m --prompt "Hello!"
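
To approximate the benchmark conditions above, you can request a benchmark-length completion and repeat the run a few times. This is a rough sketch, not LLMCheck's exact harness: the --max-tokens flag caps the output length, and the prompt below is just a placeholder, not the 256-token benchmark input.

# Rough re-run of the benchmark settings: 512-token output, 3 runs.
# The prompt is a stand-in for the 256-token benchmark input.
for i in 1 2 3; do
  mlx_lm.generate \
    --model mlx-community/gpt-oss-120b-q4_k_m \
    --prompt "Write a detailed history of the Macintosh." \
    --max-tokens 512
done

mlx-lm typically reports prompt and generation tokens-per-second after each run, which you can average by hand.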

Alternatively, you can use Ollama for a simpler setup:

ollama run gpt-oss:120b
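
Once the Ollama server is running, it also exposes a local HTTP API on port 11434, which is handy for scripting. A minimal sketch:

# Query the local Ollama server directly (default port 11434).
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:120b",
  "prompt": "Hello!",
  "stream": false
}'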

Performance on Other Apple Silicon Chips

Chip      Speed     First Token   Min RAM   Engine
M5 Max    7 tok/s   5.5s          128 GB    Ollama

System Requirements

To run GPT-oss 120B on an M4 Ultra you need:

- At least 192 GB of unified memory
- A supported engine: MLX (recommended) or Ollama
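
Before downloading anything, it is worth confirming the machine actually has the memory. A quick check from the terminal, using only macOS built-ins:

# Total unified memory in GB (hw.memsize is reported in bytes).
echo "$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 )) GB"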

Compare More Models

See how GPT-oss 120B stacks up against other models on your specific Mac hardware.
