
They recommend a Mac Mini because it's the cheapest device that can access your Apple Reminders and iMessage, if you're into that ecosystem.

If you don’t need any of that then any device or small VPS instance will suffice.



It's because of the Mac Mini's unified memory architecture, which is ideal for inference.


The amount of RAM available on a Mac Mini isn't enough for a decent open model for OpenClaw; everybody running it on one is using remote AI services.


You can get up to 64GB of memory.

It's very difficult to get this much memory on a graphics card.


I know, but which open model that fits in there is useful enough for OpenClaw? I don’t think there is one.

The videos and blog posts that recommend getting a Mac Mini for this recommend the base model (which comes with just 16GB), precisely because it's the cheapest Mac that can read your reminders, use iMessage, etc. That's what OpenClaw users want from the Mini, not its inference capabilities.


I disagree.


What model are you running with 64GB of VRAM (equivalent)? I doubt most users are doing that. Looking at their documentation, the default path for OpenClaw seems to be a third-party API for the model.


It doesn't matter what 'most users' are doing.

On a 64 GB Apple silicon Mac mini you can natively host mid-sized, and some larger quantized, local models using Ollama.

For example:

Qwen3-Coder (32B), GLM-4.7 (or GLM-4 Variants), Devstral-24B / Mistral Large (Quantized)
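As a rough sanity check on whether a given model fits: a quantized model's weight footprint is approximately parameter count × bits per weight ÷ 8, plus runtime overhead for the KV cache and buffers. A minimal sketch (the 1.2× overhead factor is an assumption, not a measured value):

```python
def model_footprint_gb(params_billion: float, bits_per_weight: float,
                       overhead: float = 1.2) -> float:
    """Rough memory estimate for a quantized model.

    overhead is an assumed fudge factor covering the KV cache
    and runtime buffers; real usage varies with context length.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A 32B model at 4-bit quantization: ~15 GB of weights,
# ~18 GB with overhead, comfortably inside 64 GB of unified memory.
print(round(model_footprint_gb(32, 4), 1))
```

By the same estimate, even a 70B model at 4-bit lands around 40 GB, which is why the 64 GB configuration matters for local hosting.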




