Yes: Mistral 7B runs at 98 tok/s on an M4 Pro with 24 GB RAM using Q4_K_M quantization via MLX, with a first-token latency of 0.4s. It is Mistral AI's classic 7B model and one of the most widely supported open models.
LLMCheck measured Mistral 7B on M4 Pro using the standard methodology: Q4_K_M quantization, 256-token input, 512-token output, 3 runs averaged on a freshly booted system.
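For reference, the two headline metrics can be computed from a streaming generator. This is a minimal sketch, not LLMCheck's actual harness; `generate` is a hypothetical callable that yields tokens as they are produced:

```python
import time

def measure_throughput(generate, prompt, n_runs=3):
    """Average decode speed (tok/s) and time-to-first-token over n_runs."""
    speeds, ttfts = [], []
    for _ in range(n_runs):
        start = time.perf_counter()
        first = None
        n_tokens = 0
        for _tok in generate(prompt):  # tokens arrive as they are decoded
            if first is None:
                first = time.perf_counter() - start  # time to first token
            n_tokens += 1
        elapsed = time.perf_counter() - start
        speeds.append(n_tokens / elapsed)
        ttfts.append(first)
    return sum(speeds) / n_runs, sum(ttfts) / n_runs

# Demo with a fake generator standing in for a real engine's stream
def fake_generate(prompt):
    for _ in range(50):
        yield "tok"

tps, ttft = measure_throughput(fake_generate, "hello")
```

Averaging over multiple runs on a freshly booted system, as the methodology above does, smooths out cache and thermal effects.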
| Metric | Value |
|---|---|
| Tokens per second | 98 tok/s |
| Time to first token | 0.4s |
| Quantization | Q4_K_M |
| Minimum RAM | 24 GB |
| Recommended engine | MLX |
| Parameters | 7B |
| Benchmark date | 2026-02 |
The recommended engine for Mistral 7B on M4 Pro is MLX. Install with pip and pull the model:
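A minimal setup might look like the following; the exact model repo ID (`mlx-community/Mistral-7B-Instruct-v0.3-4bit`) is an assumption, so check the mlx-community listings for the current 4-bit conversion:

```shell
# Install the MLX LM package (Apple silicon, Python 3.9+)
pip install mlx-lm

# Generate with a 4-bit community conversion of Mistral 7B;
# the repo ID below is an assumption, not confirmed by this page
mlx_lm.generate --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
  --prompt "Explain quantization in one sentence." --max-tokens 512
```

The first run downloads the weights; subsequent runs load from the local cache.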
Alternatively, you can use Ollama for a simpler setup:
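With Ollama the model is pulled and run in two commands; the default `mistral` tag typically ships a 4-bit quantized build, though the exact quantization may differ from the Q4_K_M used in this benchmark:

```shell
# Download Mistral 7B and run a one-off prompt
ollama pull mistral
ollama run mistral "Explain quantization in one sentence."
```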
| Chip | Speed | First Token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 122 tok/s | 0.3s | 64 GB | Ollama |
| M4 Pro | 98 tok/s | 0.4s | 24 GB | MLX |
| M3 | 62 tok/s | 0.6s | 16 GB | Ollama |
| M1 | 42 tok/s | 1.2s | 16 GB | Ollama |
To run Mistral 7B on an M4 Pro you need:

- At least 24 GB of unified memory
- A Q4_K_M quantized copy of the model
- MLX (recommended) or Ollama as the inference engine
See how Mistral 7B stacks up against other models on your specific Mac hardware.