Running OpenClaw in Apple Container
The risks of OpenClaw should be pretty obvious given the power it has, so be careful out there before installing skills... See this article for details.
Apple Container
Sorry to start with a slight digression, but ever since macOS 26 Tahoe launched, while everyone was carrying on about Liquid Glass (ahem), I've wanted to talk about something that shipped alongside it that I think might be more significant: Apple Container. It's basically a native Apple version of Docker for running Linux containers as lightweight VMs on the Mac.
If you squint, you can see Apple laying the foundations for macOS to become the platform for AI agents. This sits on top of another advantage: Apple has quietly, and possibly accidentally at first, become a hidden giant in shipping hardware that can run local models better than almost anyone else. Unless you're paying a LOT more for a DGX Spark, any Mac with an M-series chip and unified memory is remarkably capable for local inference.
Put these pieces together and the picture starts to form. Native containerisation + best-in-class consumer AI hardware + a tightly controlled ecosystem (the opposite of OpenClaw, but probably what normal people will feel safe actually using) = a platform primed for the agent era.
Anyway, this is not a story about local LLMs today, but I did finally find a good excuse to try out Apple Container.
OpenClaw
OpenClaw is the open-source personal AI assistant that went viral with 100k+ GitHub stars. Think of it as a self-hosted agent that can connect to various messaging channels (WhatsApp, Telegram, etc.) and do things on your behalf: use Skills, MCP servers, and so on. I guess it's kind of an open-source Claude Cowork. It's super powerful, and I felt like if I was going to play with what it could really do, I should probably not run it on my main machine.
What I set up: OpenClaw running in a lightweight Linux VM on my MacBook Pro, using a Claude Max subscription for the LLM backend, with WhatsApp as the messaging channel.
Getting Apple Container running
First, the prerequisites. You need macOS 26 (Tahoe) or later, an Apple Silicon Mac, and Homebrew. Then:
brew install container
container --version
container system start
container system kernel set --recommended
container system status
Building OpenClaw
OpenClaw doesn't publish pre-built container images, and although there are some unofficial ones out there, I preferred to build from the Dockerfile in the source. It uses node:22-bookworm (Debian 12) as the base.
git clone https://github.com/openclaw/openclaw.git
cd openclaw
container build -t openclaw:local .
Troubleshooting: If your build fails, you may need to give the builder more memory and possibly also fix a DNS resolution bug that I hit with container v0.9:
container builder stop
container builder start --memory 8G
container exec buildkit /bin/sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
container build -t openclaw:local .
Running onboarding
Create a folder on your local disk for config (so you can rebuild the container with updates without losing it), then run the container. Note that Apple Container defaults to 1GB of RAM, which isn't enough for OpenClaw; I started it with 4GB and it ran fine.
mkdir -p ~/.openclaw
container run -it --rm \
--memory 4g \
-p 18789:18789 \
-v ~/.openclaw:/home/node/.openclaw \
openclaw:local node dist/index.js onboard
Onboarding decisions
The wizard walks you through several choices. The important ones:
- Model/auth provider: I chose "Anthropic"
- Auth method: "Anthropic token (paste setup-token)" to use my Claude Max subscription
- Setup token: generate this by running claude setup-token in another terminal
- Default model: anthropic/claude-opus-4-5
- Channel: WhatsApp
I initially tried iMessage but realised it won't work inside a Linux container - iMessage requires macOS system services. If you need iMessage integration, you'll need to run OpenClaw natively on macOS instead of containerised. WhatsApp and Telegram work great in containers.
Enabling memory search
After onboarding, running node dist/index.js doctor flagged that memory search was enabled but no embedding provider was configured. Memory search is OpenClaw's semantic recall — it lets the agent search its own notes and past sessions by meaning, not just keywords. Without an embedding provider, it silently degrades to basic text matching.
The doctor suggests running openclaw auth add --provider openai, but that command doesn't exist. The actual command is nested under models:
node dist/index.js models auth paste-token --provider openai
This prompts you for an OpenAI API key and stores it in the auth profile. You also need to tell OpenClaw which provider to use by adding this to your openclaw.json under agents.defaults:
"memorySearch": {
"provider": "openai"
}
That's it. OpenClaw picks the default model (text-embedding-3-small) automatically. The auth profile is stored inside ~/.openclaw, so it persists across container rebuilds as long as the volume is mounted.
The networking puzzle
After onboarding, the gateway was inaccessible from the host Mac. The gateway binds to 127.0.0.1 (localhost) inside the container by default. Even with port forwarding, connections from the Mac can't reach it because localhost inside the container isn't the same as localhost on the Mac.
The fix is editing ~/.openclaw/openclaw.json:
{
"gateway": {
"bind": "lan",
"controlUi": {
"enabled": true,
"allowInsecureAuth": true
}
}
}
The allowInsecureAuth setting is needed because connections come through the container's network stack (appearing as an external IP), which would otherwise trigger pairing requirements.
Running it
Start the gateway:
container run -it --rm \
--memory 4g \
-p 18789:18789 \
-v ~/.openclaw:/home/node/.openclaw \
openclaw:local node dist/index.js gateway
You should see:
OpenClaw 2026.1.29
[gateway] listening on ws://0.0.0.0:18789 (PID 4)
[whatsapp] Listening for personal WhatsApp inbound messages.
Then open http://localhost:18789/?token=YOUR_TOKEN (the token is in your config file) and you should see the dashboard with "Health OK" in green.
Upgrading
Since we're running from source rather than a published image, updates require rebuilding the container. OpenClaw moves fast—new releases drop frequently.
To upgrade:
git pull
container build -t openclaw:local .
Then restart your container with the same run command. Your config and state persist in the mounted ~/.openclaw volume, so nothing is lost between rebuilds.
However, anything installed inside the container (SSH keys, CLI tools, auth configs) gets wiped when the container stops, since only ~/.openclaw is mounted. The workaround: store these files inside the mounted volume (e.g. ~/.openclaw/.local/) and have your agent run a boot script on startup that symlinks them into the expected locations. Add the script to HEARTBEAT.md so it happens automatically each session. I didn't modify the Dockerfile itself since it's part of the upstream repo and would be overwritten on git pull.
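To make the symlink trick concrete, here's a minimal boot-script sketch. All the paths and the helper name are my own assumptions, not OpenClaw conventions; the idea is just that files persisted under the mounted volume get linked back into the places tools expect them.

```shell
#!/bin/sh
# Boot-script sketch: re-link persisted files from the mounted volume
# into the locations tools expect. All paths here are assumptions.

restore_links() {
  # restore_links <persisted-dir> <target-dir>
  src="$1"; dst="$2"
  [ -d "$src" ] || return 0           # nothing persisted yet, skip
  mkdir -p "$dst"
  for f in "$src"/*; do
    [ -e "$f" ] || continue           # empty dir: glob didn't match
    ln -sf "$f" "$dst/$(basename "$f")"
  done
}

# Only ~/.openclaw survives a rebuild, so persisted files live under it
PERSIST="${PERSIST:-$HOME/.openclaw/.local}"
restore_links "$PERSIST/ssh"    "$HOME/.ssh"       # SSH keys
restore_links "$PERSIST/config" "$HOME/.config"    # CLI tool configs
```

Because the links point into the mounted volume, re-running the script after every rebuild restores the container's working state in a couple of seconds.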
Version controlling agent state
After losing my agent's workspace to an accidental rm -rf, I set up a git repo at the ~/.openclaw level to back up everything that matters. The .gitignore uses a whitelist approach: ignore everything by default, then explicitly include workspace/ and cron/. Heavy or sensitive stuff within the workspace (books, chat history, SSH keys) is excluded. An hourly cron job auto-commits and pushes, so the agent's memory, config, and cron jobs are always backed up to a private GitHub repo. Just be careful that the files you stashed in the volume in the previous step don't end up pushing any private keys as part of this.
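A whitelist-style .gitignore is slightly fiddly because git can't re-include files whose parent directory is still excluded, so you have to un-ignore both the directories and their contents. A sketch of what mine looks like (the excluded path names are my guesses at the "heavy or sensitive stuff"):

```
# ~/.openclaw/.gitignore -- ignore everything by default
*
# keep the ignore file itself and the whitelisted trees
!.gitignore
!workspace/
!workspace/**
!cron/
!cron/**
# re-exclude heavy or sensitive paths inside the workspace
workspace/books/
workspace/chat-history/
workspace/**/id_*
```

Later patterns win, so the re-exclusions at the bottom override the `!workspace/**` whitelist above them.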
What I learned
A few things to remember:
- Apple Container isn't Docker. Similar concepts, but different CLI syntax. It's container image pull, not container pull.
- Memory matters. The default 1GB is insufficient for Node.js applications doing serious work. Use --memory 4g or higher.
- Network binding is not intuitive. Change bind from loopback to lan so the gateway is reachable from the host.
- Claude Max works great. Use claude setup-token to authenticate with your subscription instead of paying for the API separately.
Why everyone uses a Mac Mini for this
Supposedly everyone is running out and buying Mac Minis to run this, which seemed a bit excessive, but during setup I could see why. A lot of the popular skills require native macOS integration: things like iMessage, Calendar, and Reminders need macOS system services that simply don't exist in a Linux container. Also, a lot of skills seem to expect Homebrew for installing their dependencies by default.