Run GPT-oss 120B on M5 Max

Yes. GPT-oss 120B runs at 7 tok/s on an M5 Max with 128 GB of RAM, using Q4_K_M quantization via Ollama, with a first-token latency of 5.5 s. It is OpenAI's 120B open-weight model: high quality, but it needs a 128 GB Mac.

- Speed: 7 tok/s
- First token: 5.5 seconds
- RAM needed: 128 GB minimum
- Engine: Ollama (recommended)

Benchmark Details

LLMCheck measured GPT-oss 120B on the M5 Max using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and three runs averaged on a freshly booted system.
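For reproducibility, here is a minimal sketch of that methodology against a local Ollama server. The endpoint, streaming format, and the `eval_count`/`eval_duration` fields are Ollama's standard generate API; the prompt text and the `measure_run` helper are illustrative, not LLMCheck's actual harness.

```python
# Minimal sketch of the benchmark methodology, assuming a local Ollama
# server on its default port. The prompt and helper are illustrative.
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def measure_run(model: str, prompt: str, max_tokens: int = 512):
    """Stream one generation; return (time_to_first_token_s, tokens_per_s)."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": True,
        "options": {"num_predict": max_tokens},  # cap output at 512 tokens
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    start = time.perf_counter()
    ttft = None
    stats = {}
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # Ollama streams one JSON object per line
            chunk = json.loads(line)
            if ttft is None and chunk.get("response"):
                ttft = time.perf_counter() - start
            if chunk.get("done"):
                stats = chunk  # final chunk carries eval_count / eval_duration
    # eval_duration is reported in nanoseconds
    tps = stats["eval_count"] / (stats["eval_duration"] / 1e9)
    return ttft, tps

if __name__ == "__main__":
    prompt = "word " * 256  # stand-in for the 256-token input
    runs = [measure_run("gpt-oss:120b", prompt) for _ in range(3)]
    avg_ttft = sum(r[0] for r in runs) / len(runs)
    avg_tps = sum(r[1] for r in runs) / len(runs)
    print(f"avg first token: {avg_ttft:.1f}s  avg speed: {avg_tps:.1f} tok/s")
```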

| Metric | Value |
| --- | --- |
| Tokens per second | 7 tok/s |
| Time to first token | 5.5 s |
| Quantization | Q4_K_M |
| Minimum RAM | 128 GB |
| Recommended engine | Ollama |
| Parameters | 120B |
| Benchmark date | 2026-03 |


Setup Guide: Run GPT-oss 120B on M5 Max

The recommended engine for GPT-oss 120B on M5 Max is Ollama. Install Ollama, then pull the model:

ollama run gpt-oss:120b

Ollama handles quantization automatically — it will download the Q4_K_M variant (~128 GB) and start an interactive chat session.
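Once the model is pulled, you do not have to use the interactive chat: Ollama also serves an HTTP API on localhost:11434. A minimal sketch of a one-shot, non-interactive request (the prompt is just an example):

```python
# Minimal sketch: one-shot generation against the local Ollama server
# after `ollama run gpt-oss:120b` has pulled the model.
import json
import urllib.request

payload = json.dumps({
    "model": "gpt-oss:120b",
    "prompt": "Summarize the benefits of unified memory on Apple Silicon.",
    "stream": False,  # return the full completion as a single JSON object
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```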

Performance on Other Apple Silicon Chips

| Chip | Speed | First Token | Min RAM | Engine |
| --- | --- | --- | --- | --- |
| M4 Ultra | 10 tok/s | 4.2 s | 192 GB | MLX |

System Requirements

To run GPT-oss 120B on M5 Max you need:

- A Mac with the M5 Max chip and at least 128 GB of unified memory
- Ollama installed
- Roughly 128 GB of free disk space for the Q4_K_M download
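Before starting a download of this size, it is worth confirming the memory figure your machine actually reports. A small sketch using macOS's sysctl; the 128 GB threshold mirrors the requirement above:

```python
# Sanity check: confirm this Mac reports enough unified memory for
# GPT-oss 120B at Q4_K_M before pulling the ~128 GB download.
import subprocess

# hw.memsize is the total physical (unified) memory in bytes on macOS
out = subprocess.check_output(["sysctl", "-n", "hw.memsize"])
total_gb = int(out.decode().strip()) / 1024**3
print(f"Unified memory: {total_gb:.0f} GB")
if total_gb < 128:
    print("Below the 128 GB minimum recommended for this model.")
```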

Compare More Models

See how GPT-oss 120B stacks up against other models on your specific Mac hardware.
