Run Gemma 4 E4B on M5 Max

Yes: Gemma 4 E4B (4B parameters) runs at 128 tok/s on an M5 Max with 128 GB RAM using Q4_K_M quantization via MLX, with a first-token latency of 0.2s. It is Google's 4B PLE model, offering multimodal capabilities and outstanding speed.

Speed: 128 tok/s
First token: 0.2s
RAM needed: 128 GB minimum
Engine: MLX (recommended)

Benchmark Details

LLMCheck measured Gemma 4 E4B on M5 Max using its standard methodology: Q4_K_M quantization, a 256-token input, a 512-token output, and 3 runs averaged on a freshly booted system.
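As a rough illustration, a measurement like this can be scripted with mlx-lm. This is a sketch, not LLMCheck's actual harness: the model repo name is taken from the setup guide below, the prompt merely stands in for the 256-token input, and the timer wraps the whole generate call, so prompt processing is included in the average.

import time
from mlx_lm import load, generate

MODEL = "mlx-community/gemma-4-e4b-q4_k_m"  # repo name from the setup guide below
PROMPT = "Summarize the benchmark."          # stands in for the 256-token input

model, tokenizer = load(MODEL)

speeds = []
for _ in range(3):  # 3 runs averaged, per the methodology
    start = time.perf_counter()
    output = generate(model, tokenizer, prompt=PROMPT, max_tokens=512)
    elapsed = time.perf_counter() - start
    speeds.append(len(tokenizer.encode(output)) / elapsed)

print(f"avg speed: {sum(speeds) / len(speeds):.1f} tok/s")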

Metric | Value
Tokens per second | 128 tok/s
Time to first token | 0.2s
Quantization | Q4_K_M
Minimum RAM | 128 GB
Recommended engine | MLX
Parameters | 4B
Benchmark date | 2026-04


Setup Guide: Run Gemma 4 E4B on M5 Max

The recommended engine for Gemma 4 E4B on M5 Max is MLX. Install the package with pip and run a first generation (the weights are downloaded automatically on first use):

pip install mlx-lm
mlx_lm.generate --model mlx-community/gemma-4-e4b-q4_k_m --prompt "Hello!"
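
The same model can also be driven from Python through the mlx-lm API. A minimal sketch, assuming the repo name from the command above:

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-4-e4b-q4_k_m")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=512))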

Alternatively, you can use Ollama for a simpler setup:

ollama run gemma4:e4b
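
Ollama also serves a local REST API (port 11434 by default), so the same model tag can be called programmatically. A minimal sketch using only the Python standard library, assuming the tag shown above:

import json
from urllib.request import Request, urlopen

payload = {"model": "gemma4:e4b", "prompt": "Hello!", "stream": False}
req = Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.load(resp)["response"])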

Performance on Other Apple Silicon Chips

Chip | Speed | First Token | Min RAM | Engine
M5 Pro | 92 tok/s | 0.3s | 24 GB | Ollama
M4 Pro | 78 tok/s | 0.3s | 24 GB | MLX
M3 | 62 tok/s | 0.4s | 16 GB | Ollama
M1 | 42 tok/s | 0.6s | 8 GB | Ollama

System Requirements

To run Gemma 4 E4B on M5 Max you need:

- A Mac with the M5 Max chip and at least 128 GB of unified memory (per the benchmark above)
- macOS with MLX (recommended) or Ollama installed
- The Q4_K_M quantized weights, which either engine downloads on first use

Compare More Models

See how Gemma 4 E4B stacks up against other models on your specific Mac hardware.
