Run Llama 3.3 70B on M4 Ultra

Yes — Llama 3.3 70B runs at 18 tok/s on an M4 Ultra with 192 GB RAM using Q4_K_M quantization via MLX, with a first-token latency of 2.0 seconds. Meta's 70-billion-parameter Llama 3.3 is one of the largest models you can run locally, and on this page it requires a 192 GB memory configuration.

Speed: 18 tok/s
First token: 2.0 seconds
RAM needed: 192 GB minimum
Engine: MLX (recommended)

Benchmark Details

LLMCheck measured Llama 3.3 70B on the M4 Ultra using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and three runs averaged on a freshly booted system.
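
If you want to reproduce the measurement, the sketch below shows that methodology using the mlx-lm Python API. It is a rough approximation, not LLMCheck's exact harness: it assumes the mlx-community/Llama-3.3-70B-Instruct-4bit weights (the closest public MLX conversion; the exact Q4_K_M build may differ), and the timing includes prompt processing.

import time
from mlx_lm import load, generate

# Assumed model repo; LLMCheck's exact quantized build may differ.
model, tokenizer = load("mlx-community/Llama-3.3-70B-Instruct-4bit")

prompt = "word " * 256  # stand-in for the 256-token input
speeds = []
for _ in range(3):  # three runs, averaged
    start = time.perf_counter()
    out = generate(model, tokenizer, prompt=prompt, max_tokens=512)
    elapsed = time.perf_counter() - start  # rough: includes time to first token
    speeds.append(len(tokenizer.encode(out)) / elapsed)

print(f"average: {sum(speeds) / len(speeds):.1f} tok/s")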

Metric                Value
Tokens per second     18 tok/s
Time to first token   2.0 s
Quantization          Q4_K_M
Minimum RAM           192 GB
Recommended engine    MLX
Parameters            70B
Benchmark date        2026-02
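
For context on the Q4_K_M entry: the quantized weights themselves are far smaller than the machine's total memory. A back-of-the-envelope estimate, assuming Q4_K_M averages about 4.85 bits per weight (a commonly cited effective rate; the exact figure varies by build):

params = 70.6e9          # Llama 3.3 70B parameter count (approximate)
bits_per_weight = 4.85   # typical effective rate for Q4_K_M (assumption)
gib = params * bits_per_weight / 8 / 2**30
print(f"~{gib:.0f} GiB of weights")  # ~40 GiB, before KV cache and runtime overhead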


Setup Guide: Run Llama 3.3 70B on M4 Ultra

The recommended engine for Llama 3.3 70B on the M4 Ultra is MLX. Install the mlx-lm package with pip and run the model (the weights download from Hugging Face on first use):

pip install mlx-lm
# First run downloads roughly 40 GB of weights
mlx_lm.generate --model mlx-community/Llama-3.3-70B-Instruct-4bit --prompt "Hello!"
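
If you would rather call the model from Python than the CLI, mlx-lm exposes the same functionality as a library. A minimal sketch, using the same assumed mlx-community conversion as above:

from mlx_lm import load, generate

# Loads (and on first use, downloads) the assumed 4-bit conversion.
model, tokenizer = load("mlx-community/Llama-3.3-70B-Instruct-4bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=128))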

Alternatively, you can use Ollama for a simpler setup:

ollama run llama3.3:70b
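
Once the model is pulled, Ollama also serves a local HTTP API on port 11434, so you can script against it. A minimal sketch using the standard /api/generate endpoint with only the standard library:

import json
import urllib.request

# Requires a running Ollama with llama3.3:70b already pulled.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.3:70b",
        "prompt": "Hello!",
        "stream": False,  # return one JSON object instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])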

Performance on Other Apple Silicon Chips

Chip      Speed      First Token   Min RAM   Engine
M5 Max    15 tok/s   2.4s          128 GB    MLX
M5 Max    12 tok/s   2.8s          128 GB    Ollama

System Requirements

To run Llama 3.3 70B on the M4 Ultra you need:

A Mac with an M4 Ultra chip and at least 192 GB of unified memory
MLX (recommended) or Ollama installed
About 43 GB of free disk space for the Q4_K_M weights

Compare More Models

See how Llama 3.3 70B stacks up against other models on your specific Mac hardware.
