Yes. Qwen 3.6-35B-A3B runs at 55 tok/s on an M5 Max with 128 GB RAM using Q4_K_M quantization via MLX, with a first-token latency of 0.6s. It is Alibaba's flagship 35B mixture-of-experts model (3B active parameters) and, at 73.4% on SWE-bench Verified, the strongest local coding model in this benchmark set.
LLMCheck measured Qwen 3.6-35B-A3B on the M5 Max using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and three runs averaged on a freshly booted system.
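That methodology boils down to two measurements per run: tokens per second and time to first token, averaged across runs. A minimal sketch of such a harness in Python (the `benchmark` helper and the token-stream interface are illustrative assumptions, not LLMCheck's actual code):

```python
import time
from statistics import mean

def benchmark(generate, prompt_tokens=256, output_tokens=512, runs=3):
    """Average tokens/sec and time-to-first-token over several runs.

    `generate` is any callable that streams output tokens for a given
    prompt size and output budget (a stand-in for the real engine).
    """
    speeds, ttfts = [], []
    for _ in range(runs):
        start = time.perf_counter()
        first = None
        count = 0
        for _tok in generate(prompt_tokens, output_tokens):
            if first is None:
                # Time to first token: delay before the first output arrives.
                first = time.perf_counter() - start
            count += 1
        total = time.perf_counter() - start
        ttfts.append(first)
        # Decode speed: tokens emitted after the first one, per second.
        speeds.append(count / (total - first) if total > first else float("inf"))
    return mean(speeds), mean(ttfts)
```

With a real engine plugged in, this reproduces the tok/s and first-token numbers reported in the table below.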
| Metric | Value |
|---|---|
| Tokens per second | 55 tok/s |
| Time to first token | 0.6s |
| Quantization | Q4_K_M |
| Minimum RAM | 128 GB |
| Recommended engine | MLX |
| Parameters | 35B |
| Benchmark date | 2026-04 |
The recommended engine for Qwen 3.6-35B-A3B on M5 Max is MLX. Install with pip and pull the model:
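A sketch of that setup with the `mlx-lm` package (the Hugging Face repo name below is a hypothetical placeholder; substitute the actual 4-bit conversion of the model):

```shell
# Install the MLX LLM toolkit (Apple Silicon only).
pip install mlx-lm

# Generate with a quantized build of the model.
# NOTE: the repo name is an assumed placeholder, not a confirmed upload.
mlx_lm.generate \
  --model mlx-community/Qwen3.6-35B-A3B-4bit \
  --prompt "Write a quicksort in Python." \
  --max-tokens 512
```

On first use, `mlx_lm.generate` downloads the model weights from Hugging Face before running.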
Alternatively, you can use Ollama for a simpler setup:
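The Ollama flow is two commands (the model tag below is an assumed placeholder; check `ollama search` or the Ollama library for the actual tag):

```shell
# Pull the quantized model, then start an interactive session.
# NOTE: "qwen3.6:35b-a3b" is an assumed tag, not a confirmed listing.
ollama pull qwen3.6:35b-a3b
ollama run qwen3.6:35b-a3b
```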
For comparison, here is how Qwen 3.6-35B-A3B performs across Apple Silicon chips:

| Chip | Speed | First token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 55 tok/s | 0.6s | 128 GB | MLX |
| M4 Max | 42 tok/s | 0.9s | 48 GB | MLX |
| M4 Pro | 32 tok/s | 1.2s | 24 GB | Ollama |
To run Qwen 3.6-35B-A3B on an M5 Max you need:

- An M5 Max Mac with 128 GB of unified memory (the tested configuration)
- The MLX engine (recommended) or Ollama
- The Q4_K_M quantized build of the model
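As a sanity check on the RAM figures, the raw weight footprint under Q4_K_M can be estimated back-of-envelope. The 4.85 bits/weight average below is an assumption (the exact figure varies by tensor mix), and this ignores KV cache and runtime overhead, which is why real memory needs are higher:

```python
# Rough weight footprint for a 35B-parameter model under Q4_K_M.
# ASSUMPTION: Q4_K_M averages ~4.85 bits per weight; excludes KV cache
# and engine overhead, so actual RAM usage is noticeably larger.
PARAMS = 35e9
BITS_PER_WEIGHT = 4.85

weight_bytes = PARAMS * BITS_PER_WEIGHT / 8
weight_gb = weight_bytes / 1e9
print(f"~{weight_gb:.1f} GB of weights")  # ~21.2 GB
```

The weights alone fit comfortably in 24 GB, which matches the M4 Pro minimum in the comparison table; larger contexts and system headroom push the comfortable figure higher.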
See how Qwen 3.6-35B-A3B stacks up against other models on your specific Mac hardware.