Now for the brain. The LLM Layer is where thinking happens. It supports pluggable model providers, so you can connect Claude, GPT, Gemini, Llama, or any OpenAI-compatible API. Each provider is registered at startup, and you can switch models per conversation or per task.
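The registration idea above can be sketched as a small registry keyed by provider name. This is an illustrative mock, not OpenClaw's actual API: the class names and the `complete()` method are assumptions made for the example.

```python
# A minimal sketch of a pluggable provider registry.
# ProviderRegistry and EchoProvider are hypothetical names, not OpenClaw's API.

class ProviderRegistry:
    """Maps provider names to provider objects, registered at startup."""

    def __init__(self):
        self._providers = {}

    def register(self, name, provider):
        self._providers[name] = provider

    def get(self, name):
        if name not in self._providers:
            raise KeyError(f"no provider registered under {name!r}")
        return self._providers[name]


class EchoProvider:
    """Stand-in for a real backend; any object with complete() would do."""

    def complete(self, prompt):
        return f"echo: {prompt}"


registry = ProviderRegistry()
registry.register("local-echo", EchoProvider())
print(registry.get("local-echo").complete("hello"))  # prints "echo: hello"
```

Because callers only ever ask the registry for a name, swapping Claude for GPT or a local model becomes a one-line change at the call site.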
This layer handles prompt construction, token counting, context window management, and response streaming. Want OpenClaw to use a local model on Ollama for quick tasks and Claude for complex reasoning? You configure that here. I'll show you how in the configuration section.
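That quick-versus-complex split can be expressed as a small routing table. The table below is a hypothetical sketch of the idea, not OpenClaw's real configuration format; the task kinds, provider names, and model names are placeholders.

```python
# Hypothetical routing table: cheap local model for quick tasks,
# a stronger hosted model for complex reasoning. All names are illustrative.

ROUTES = {
    "quick": {"provider": "ollama", "model": "llama3"},
    "reasoning": {"provider": "anthropic", "model": "claude-sonnet"},
}


def pick_route(task_kind: str) -> dict:
    """Return the provider/model pair for a task, defaulting to the quick route."""
    return ROUTES.get(task_kind, ROUTES["quick"])


print(pick_route("reasoning"))  # prints {'provider': 'anthropic', 'model': 'claude-sonnet'}
```

The design win is that routing lives in data rather than code: adding a new task tier or swapping a model is a config edit, with no changes to the dispatch logic.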