Yes — Qwen 3.5 9B runs at 35 tok/s on an M1 with 16 GB of RAM using Q4_K_M quantization via Ollama, with a time to first token of 1.1 s. Alibaba's 9B model is a top pick for Macs with 16 GB of RAM.
LLMCheck measured Qwen 3.5 9B on M1 using the standard methodology: Q4_K_M quantization, 256-token input, 512-token output, three runs averaged on a freshly booted system.
| Metric | Value |
|---|---|
| Tokens per second | 35 tok/s |
| Time to first token | 1.1s |
| Quantization | Q4_K_M |
| Minimum RAM | 16 GB |
| Recommended engine | Ollama |
| Parameters | 9B |
| Benchmark date | 2025-12 |
The recommended engine for Qwen 3.5 9B on M1 is Ollama. Install Ollama, then pull the model:
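A minimal sketch of the install-and-pull flow. The exact model tag (`qwen3.5:9b` below) is an assumption — check the Ollama model library for the published tag before pulling:

```shell
# Install Ollama on macOS (or download the app from ollama.com)
brew install ollama

# Pull the model — the tag below is an assumption; verify it in the Ollama library
ollama pull qwen3.5:9b

# Start an interactive chat session with the pulled model
ollama run qwen3.5:9b
```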
Ollama handles quantization automatically — `ollama pull` downloads the Q4_K_M variant (roughly 5–6 GB of weights), and `ollama run` starts an interactive chat session.
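As a sanity check on why a 9B model fits comfortably in 16 GB of RAM, the weight footprint can be estimated from bits per weight. The 4.85 bits/weight figure for Q4_K_M is an approximation (it mixes 4-bit and 6-bit blocks, and the exact average varies slightly by architecture):

```python
# Rough footprint estimate for quantized model weights.
def quantized_size_gb(params: float, bits_per_weight: float) -> float:
    """Approximate size of quantized weights in GB (decimal gigabytes)."""
    return params * bits_per_weight / 8 / 1e9

# 9B parameters at ~4.85 bits/weight (approximate Q4_K_M average)
size = quantized_size_gb(9e9, 4.85)
print(f"~{size:.1f} GB")  # roughly 5.5 GB — well under 16 GB of RAM
```

Note that this covers weights only; the KV cache and runtime overhead add more on top, which is why the minimum RAM is higher than the weight file size.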
For comparison, here is how the same Q4_K_M build performs across Apple Silicon chips (the M1 row is the benchmark above):

| Chip | Speed | First Token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 105 tok/s | 0.5s | 64 GB | Ollama |
| M4 Pro | 92 tok/s | 0.4s | 24 GB | MLX |
| M4 | 72 tok/s | 0.6s | 16 GB | LM Studio |
| M3 | 58 tok/s | 0.7s | 16 GB | Ollama |
| M1 | 35 tok/s | 1.1s | 16 GB | Ollama |
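A back-of-the-envelope way to turn these figures into end-to-end latency: total reply time is roughly time-to-first-token plus tokens divided by throughput. The chip numbers below are taken from the tables above; the 512-token reply length matches the benchmark's output size:

```python
# Estimate end-to-end time for a reply: TTFT + tokens / throughput.
def reply_time_s(ttft_s: float, tok_per_s: float, n_tokens: int = 512) -> float:
    return ttft_s + n_tokens / tok_per_s

# (TTFT seconds, tokens/second) from the benchmark tables
chips = {
    "M1":     (1.1, 35),
    "M3":     (0.7, 58),
    "M4":     (0.6, 72),
    "M4 Pro": (0.4, 92),
    "M5 Max": (0.5, 105),
}
for chip, (ttft, tps) in chips.items():
    print(f"{chip:7s} {reply_time_s(ttft, tps):5.1f} s")
```

On these numbers, a full 512-token reply takes about 16 s on an M1 versus about 5 s on an M5 Max.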
To run Qwen 3.5 9B on M1 you need:

- An M1 Mac with at least 16 GB of RAM
- Ollama installed
- Roughly 6 GB of free disk space for the Q4_K_M weights
See how Qwen 3.5 9B stacks up against other models on your specific Mac hardware.