Features

Everything OpenClaw can do — documented clearly

The official docs are sparse. These guides cover every major feature in depth — with real examples, config snippets, and the gotchas the docs skip over.

Skills System & ClawHub

Skills are the core of what OpenClaw does. Each skill adds a discrete capability, and ClawHub is the distribution layer where teams discover, install, and update them.

  • Install skills with a single command from ClawHub.
  • Write custom skills in JavaScript or TypeScript.
  • Chain skills together for multi-step automations.
  • Version-lock skills to avoid breaking changes.
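The skill model above can be sketched in a few lines. This is a minimal illustration of the "discrete capability, chained together" idea, not OpenClaw's actual skill API: the `Skill` interface, the `weatherSkill` example, and the `chain` helper are all hypothetical names invented for this sketch.

```typescript
// Hypothetical skill shape; the real OpenClaw skill API may differ.
interface Skill {
  name: string;
  description: string;
  run(input: string): Promise<string>;
}

// A toy skill: takes a city name, returns a reply string.
const weatherSkill: Skill = {
  name: "weather",
  description: "Answer a weather question for a city",
  async run(city: string): Promise<string> {
    return `Weather lookup for ${city} would go here.`;
  },
};

// Chaining: feed each skill's output into the next one.
async function chain(skills: Skill[], input: string): Promise<string> {
  let result = input;
  for (const skill of skills) {
    result = await skill.run(result);
  }
  return result;
}
```

The chaining loop is the core of multi-step automations: each skill stays single-purpose, and composition happens outside the skill itself.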
Read full guide

SOUL.md & MEMORY.md Explained

SOUL.md shapes the agent's persona and defaults. MEMORY.md stores durable context. Together they define how the assistant behaves over time and across sessions.

  • SOUL.md controls tone, persona, and default behaviors.
  • MEMORY.md persists facts across sessions.
  • Both files support Markdown formatting and code blocks.
  • Changes take effect on the next full agent restart.
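Since both files are free-form Markdown, there is no fixed schema. The structure below is purely illustrative, one common way to organize a SOUL.md, not a required layout:

```markdown
# SOUL.md (illustrative layout)

## Persona
You are a concise, friendly assistant. No filler, no apologies.

## Defaults
- Answer in English unless the user writes in another language.
- Prefer short bullet lists for multi-step answers.
- Ask one clarifying question before long tasks.
```

Because changes only apply after a full restart, it helps to keep edits to these files small and restart-test them one at a time.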
Read full guide

All Supported Messaging Platforms

OpenClaw connects to major messaging platforms through adapters. Each integration has its own authentication model, operational limits, and debugging path.

  • WhatsApp via Baileys with no API key required.
  • Telegram, Discord, Slack, and Signal support.
  • Custom webhook adapters for unsupported platforms.
  • Per-platform auth and rate-limit considerations.
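The custom-webhook path usually comes down to one job: translating a platform's payload into whatever internal message shape the agent consumes. The types and the `normalize` function below are hypothetical, a sketch of that translation step rather than OpenClaw's real adapter interface:

```typescript
// Hypothetical inbound payload from some third-party platform's webhook.
interface WebhookPayload {
  sender: string;
  body: string;
  timestamp: number; // Unix epoch milliseconds
}

// Hypothetical internal message shape; field names are illustrative.
interface InboundMessage {
  platform: string;
  from: string;
  text: string;
  receivedAt: Date;
}

// The adapter's core: map platform-specific fields to the internal shape.
function normalize(platform: string, payload: WebhookPayload): InboundMessage {
  return {
    platform,
    from: payload.sender,
    text: payload.body.trim(),
    receivedAt: new Date(payload.timestamp),
  };
}
```

Keeping normalization in a pure function like this makes per-platform quirks easy to unit-test before any auth or rate-limit handling gets involved.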
Read full guide

Config File Reference

The main config file controls providers, rate limits, ports, and logging. This overview highlights the settings that usually call for deliberate choices in production.

  • AI provider selection for hosted and local models.
  • Rate limiting and concurrent session controls.
  • Logging levels and output destinations.
  • Environment variable mapping and secret handling.
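To make the categories concrete, here is a hypothetical config fragment covering all four. The key names are illustrative only, assumed for this sketch rather than taken from OpenClaw's actual schema:

```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "port": 3000,
  "rateLimit": {
    "messagesPerMinute": 20,
    "maxConcurrentSessions": 4
  },
  "logging": {
    "level": "info",
    "destination": "stdout"
  },
  "apiKey": "${OPENAI_API_KEY}"
}
```

The `${...}` placeholder stands in for environment-variable mapping: keep secrets out of the file itself and inject them at startup.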
Read full guide

Multi-Agent & Orchestration

OpenClaw supports multiple agents with separate instructions, memories, and skill sets on a shared host. The orchestration pattern you choose determines how well agents stay isolated and how easy they are to operate.

  • Separate persona and memory per agent.
  • Per-agent skill sets and overrides.
  • Shared infrastructure with explicit isolation boundaries.
  • Operational planning for ports, logs, and ownership.
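One way to picture the per-agent separation is a config fragment like the following. This is an assumed layout for illustration; the paths and key names are hypothetical, not OpenClaw's real multi-agent schema:

```json
{
  "agents": {
    "support": {
      "soul": "agents/support/SOUL.md",
      "memory": "agents/support/MEMORY.md",
      "skills": ["faq", "ticketing"]
    },
    "sales": {
      "soul": "agents/sales/SOUL.md",
      "memory": "agents/sales/MEMORY.md",
      "skills": ["crm"]
    }
  }
}
```

The point of the layout is the isolation boundary: each agent owns its persona, memory file, and skill list, while the host process, ports, and logs are shared and need explicit ownership decisions.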
Read full guide

Local & Self-Hosted Models

You can run OpenClaw against Ollama, LM Studio, or any OpenAI-compatible endpoint. Local models trade some convenience for privacy, cost control, and deployment flexibility.

  • Ollama and LM Studio support out of the box.
  • Any OpenAI-compatible endpoint can work.
  • Model fallback chains improve reliability.
  • Per-skill model routing helps control cost.
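A local-model setup typically just points the provider at an OpenAI-compatible base URL. In the hypothetical fragment below, the key names are illustrative; `http://localhost:11434/v1` is Ollama's default OpenAI-compatible endpoint, while the model names and fallback chain are example values:

```json
{
  "provider": "openai-compatible",
  "baseUrl": "http://localhost:11434/v1",
  "model": "llama3.1",
  "fallback": ["gpt-4o-mini"]
}
```

A fallback chain like this keeps the agent responsive when the local server is down, at the cost of occasionally routing traffic to a hosted model.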
Read full guide

Something not working as documented?

Check the fix index for known bugs, or hire Milan to sort it out directly.