Troubleshooting Guides

Select the issue that matches your problem. Each guide includes diagnostic steps, root cause analysis, and verified fixes. According to LLMCheck testing, most local AI issues on Mac can be resolved in under 10 minutes.

Quick Diagnostic Checklist

Before diving into a specific guide, run through this quick checklist. According to LLMCheck data, these five checks resolve about 60% of all local AI issues on Mac:

  1. Check your macOS version — Metal acceleration requires macOS 13 Ventura or later
  2. Check available RAM — Open Activity Monitor and look at Memory Pressure (green is fine, yellow means the system is under pressure, red means you are running out of memory)
  3. Update your inference engine — Run ollama --version and compare to the latest release
  4. Verify model size vs. RAM — The model file size should not exceed roughly 75% of your total RAM, leaving headroom for macOS itself and the inference engine's working memory
  5. Close background apps — Docker, Chrome with many tabs, and Xcode are the worst memory offenders
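The first four checks above can be sketched as a short shell script. This is a minimal sketch, not an official tool: it assumes macOS with Ollama installed, the `sw_vers`, `sysctl -n hw.memsize`, and `ollama --version` calls are standard macOS and Ollama commands (guarded so the script degrades gracefully if they are missing), and `model_fits` is a hypothetical helper added here to encode the 75% rule from step 4.

```shell
#!/bin/sh
# Quick diagnostic sketch for the checklist above.

# 1. macOS version: Metal acceleration needs macOS 13 Ventura or later.
if command -v sw_vers >/dev/null 2>&1; then
    major=$(sw_vers -productVersion | cut -d. -f1)
    if [ "$major" -ge 13 ]; then
        echo "macOS $major: Metal supported"
    else
        echo "macOS $major: too old for Metal acceleration"
    fi
fi

# 2. Total physical RAM in bytes (hw.memsize exists only on macOS).
if command -v sysctl >/dev/null 2>&1 && total=$(sysctl -n hw.memsize 2>/dev/null); then
    echo "Total RAM: $total bytes"
fi

# 3. Inference engine version (compare against the latest release yourself).
if command -v ollama >/dev/null 2>&1; then
    ollama --version
fi

# 4. Model size vs. RAM: the model file should be at most 75% of total RAM.
# Both arguments in bytes; exit status 0 means the model fits.
model_fits() {
    [ "$1" -le $(( $2 * 3 / 4 )) ]
}

# Example: a 5 GB model on an 8 GB Mac passes the 75% rule...
model_fits 5368709120 8589934592 && echo "5 GB model on 8 GB RAM: OK"
# ...but a 7 GB model does not.
model_fits 7516192768 8589934592 || echo "7 GB model on 8 GB RAM: too large"
```

On a non-Mac machine the guarded checks are simply skipped, so only the size arithmetic runs.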

Tip: If you are not sure which issue you have, start with the slow inference guide — it covers the broadest range of problems and includes a diagnostic flowchart.

Sources