Yes: Ministral 14B runs at 30 tok/s on an M3 with 16 GB of RAM using Q4_K_M quantization via LM Studio, with a time to first token of 1.2s. It is Mistral's 14B model, balancing size and reasoning ability for 16–24 GB Macs.
LLMCheck measured Ministral 14B on M3 using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and three runs averaged on a freshly booted system.
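These settings map onto llama.cpp's bundled llama-bench tool, so a comparable run can be sketched locally. LLMCheck's own harness is not public, and the GGUF filename below is a placeholder:

```bash
# Approximate LLMCheck's settings with llama-bench (ships with llama.cpp):
#   -p 256  -> 256-token input (prompt)
#   -n 512  -> 512-token output (generation)
#   -r 3    -> 3 repetitions, averaged in the report
llama-bench -m ministral-14b-Q4_K_M.gguf -p 256 -n 512 -r 3
```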
| Metric | Value |
|---|---|
| Tokens per second | 30 tok/s |
| Time to first token | 1.2s |
| Quantization | Q4_K_M |
| Minimum RAM | 16 GB |
| Recommended engine | LM Studio |
| Parameters | 14B |
| Benchmark date | 2026-01 |
The recommended engine for Ministral 14B on M3 is LM Studio; for a command-line workflow, Ollama is the common alternative. Install Ollama, then pull the model (the tag below is illustrative; check the Ollama library for the exact name):
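```bash
# Hypothetical tag: "ministral:14b" assumes the model is published under
# that name in the Ollama library; substitute the real tag if it differs.
ollama pull ministral:14b
ollama run ministral:14b   # starts an interactive chat session
```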
Ollama handles quantization automatically: it will download the Q4_K_M variant (roughly 8–9 GB on disk for a 14B model) and start an interactive chat session.
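To confirm the model actually fits once loaded, Ollama's built-in process listing shows each running model's footprint:

```bash
# Lists loaded models with their size and CPU/GPU split; handy for
# verifying the quantized weights fit within 16 GB of unified memory.
ollama ps
```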
For comparison, here is how Ministral 14B performs across Apple Silicon chips (M3 figures from the benchmark above):

| Chip | Speed | First Token | Min RAM | Engine |
|---|---|---|---|---|
| M5 Max | 58 tok/s | 0.7s | 64 GB | Ollama |
| M4 Pro | 40 tok/s | 0.9s | 24 GB | Ollama |
| M3 | 30 tok/s | 1.2s | 16 GB | LM Studio |
To run Ministral 14B on M3 you need:

- A Mac with an M3-series chip and at least 16 GB of unified memory
- LM Studio (recommended) or Ollama as the inference engine
- Free disk space for the Q4_K_M weights (roughly 8–9 GB)

You can verify the hardware requirements from the terminal, as shown below.
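A quick check using standard macOS built-ins (no third-party tools assumed):

```bash
# Print the chip name and installed RAM on macOS.
sysctl -n machdep.cpu.brand_string                      # e.g. "Apple M3"
echo "$(($(sysctl -n hw.memsize) / 1073741824)) GB RAM" # bytes -> GiB
```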