OpenClaw Tutorial: Complete Beginner Guide
This tutorial walks you through everything you need to know about OpenClaw — from understanding what it is to deploying a fully configured AI assistant across Telegram, WhatsApp, and Discord. No prior experience required.
Reading time: ~15 minutes · Skill level: Beginner
1. What is OpenClaw?
OpenClaw is an open-source AI assistant framework that acts as a bridge between powerful AI language models (like GPT-4, Claude, and Gemini) and the messaging apps you use daily — Telegram, WhatsApp, Discord, and more.
Think of it like this: AI models are engines, and OpenClaw is the car. Without a framework like OpenClaw, you'd have raw API access but no interface for real conversations. OpenClaw wraps those APIs in a complete gateway that handles message routing, session management, rate limiting, and multi-platform delivery.
OpenClaw was created by a developer community frustrated with the fragmentation of AI tooling. Instead of building separate integrations for every platform, OpenClaw provides one unified backend that speaks to all of them simultaneously.
Why use OpenClaw?
- Personal AI in your pocket. Your bot lives inside Telegram or WhatsApp — no switching apps, no browser tabs.
- Model flexibility. Switch from GPT-4o to Claude or Gemini by changing one line in your config.
- Privacy by default. Your conversations go directly from your device to your chosen AI provider — no middleman storing your data.
- Extensible via plugins. Add skills like web search, image generation, or calendar access through the plugin system.
- Free and open-source. MIT-licensed, 250,000+ GitHub stars, active community.
2. Architecture Overview
Before diving into deployment, it helps to understand how OpenClaw is structured internally. The framework has three core components that work together: the Gateway, the Agent Engine, and Channels.
Gateway
The Gateway is OpenClaw's central server process. It runs on port 18789 by default and is responsible for receiving messages from all connected channels, routing them to the appropriate agent, collecting the AI response, and sending it back. The Gateway also exposes a web-based control panel ("OpenClaw Control") for monitoring and configuration.
Agent Engine
The Agent Engine handles everything AI-related. It reads your agent configuration (system prompt, model, temperature, memory settings), builds the context window for each conversation, calls the AI API, and processes the response. You can define multiple agents — for example, one general-purpose assistant and one specialized coding helper.
Channels
Channels are the connectors to messaging platforms. Each channel (Telegram, WhatsApp, Discord) has its own adapter that handles authentication, message polling or webhooks, media handling, and platform-specific formatting. Channels are enabled or disabled in config.json — you can run all of them simultaneously.
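To make the adapter idea concrete, here is a minimal Python sketch of what a channel adapter's contract might look like. The class and method names (`ChannelAdapter`, `receive`, `send`) are illustrative, not OpenClaw's real API:

```python
from abc import ABC, abstractmethod

class ChannelAdapter(ABC):
    """Illustrative sketch of a channel adapter; not OpenClaw's actual interface."""

    def __init__(self, config: dict):
        # Channels are toggled on/off via their config block.
        self.enabled = config.get("enabled", False)

    @abstractmethod
    def receive(self) -> list[dict]:
        """Poll the platform API (or drain a webhook queue) for new messages."""

    @abstractmethod
    def send(self, chat_id: str, text: str) -> None:
        """Deliver a reply using platform-specific formatting."""

class TelegramAdapter(ChannelAdapter):
    def receive(self) -> list[dict]:
        # A real adapter would call Telegram's getUpdates long-poll endpoint.
        return []

    def send(self, chat_id: str, text: str) -> None:
        # A real adapter would POST to Telegram's sendMessage endpoint.
        print(f"[telegram:{chat_id}] {text}")
```

Because every adapter exposes the same receive/send contract, the Gateway can route messages without caring which platform they came from.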
The data flow in a typical conversation looks like this:
User message (Telegram)
↓
Telegram Channel Adapter (polls Telegram API)
↓
OpenClaw Gateway (message routing)
↓
Agent Engine (builds context, calls AI API)
↓
AI Model (GPT-4o / Claude / Gemini)
↓
Agent Engine (processes response)
↓
Gateway (routes reply back)
↓
Telegram Channel Adapter (sends message via Telegram API)
↓
User receives reply

This entire round-trip typically completes in 1–3 seconds depending on the AI model's latency.
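The flow above can be simulated end to end with a few stub functions. This is a toy sketch with a fake model call; none of these function names come from OpenClaw itself:

```python
def build_context(system_prompt: str, history: list[dict], user_msg: str) -> list[dict]:
    # Agent Engine step: system prompt first, then history, then the new message.
    return [{"role": "system", "content": system_prompt}, *history,
            {"role": "user", "content": user_msg}]

def call_model(messages: list[dict]) -> str:
    # Stub standing in for the real API call (OpenAI / Anthropic / Google).
    return f"echo: {messages[-1]['content']}"

def round_trip(user_msg: str, history: list[dict]) -> str:
    messages = build_context("You are a helpful assistant.", history, user_msg)
    reply = call_model(messages)                          # AI model responds
    history += [{"role": "user", "content": user_msg},    # thread memory grows
                {"role": "assistant", "content": reply}]
    return reply                                          # Gateway routes this back

history: list[dict] = []
print(round_trip("hello", history))  # prints: echo: hello
```

The real pipeline adds rate limiting, media handling, and per-channel formatting around this core loop.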
3. Your First Bot
There are two ways to deploy your first OpenClaw bot: using ClawMates (recommended for beginners) or self-hosting with Docker. We'll cover both.
Option A: ClawMates (5 minutes, no code)
ClawMates is a managed hosting platform that handles all the infrastructure for you. You bring your Telegram bot token; we handle servers, Docker, updates, and monitoring.
1. Create a Telegram bot. Open Telegram, search for @BotFather, and send /newbot. Follow the prompts and copy the bot token (it looks like 123456789:AABBcc...).
2. Go to ClawMates setup. Visit clawmates.net/setup and enter your email to start a 7-day free trial.
3. Paste your bot token. The setup wizard will ask for your Telegram token. Paste it in and press Enter.
4. Choose your AI model. Pick GPT-4o, Claude 3.5 Sonnet, or Gemini 1.5 Pro. ClawMates provisions the API key — you don't need your own.
5. Test your bot. Open your Telegram bot and send any message. It will respond within seconds.
Option B: Self-Host with Docker
For developers who prefer full control, you can run OpenClaw on your own server. You'll need Docker installed and a server with at least 2GB RAM (OpenClaw's WASM gateway compiles at startup and requires ~1.3GB peak memory).
Step 1: Pull the Docker image
docker pull alpine/openclaw:latest

Step 2: Create your config.json
{
"gateway": {
"mode": "local"
},
"agents": {
"defaults": {
"model": "gpt-4o",
"apiKey": "sk-your-openai-key-here"
}
},
"channels": {
"telegram": {
"enabled": true,
"botToken": "123456789:AABBcc...",
"dmPolicy": "open"
}
}
}

Step 3: Run the container
docker run -d \
--name openclaw \
-p 18789:18789 \
-v $(pwd)/config.json:/home/node/.openclaw/config.json \
-e NODE_OPTIONS="--max-old-space-size=1536" \
--memory="2048m" \
alpine/openclaw:latest \
node /app/openclaw.mjs gateway

The gateway takes approximately 5 minutes to start on first boot while it compiles the WASM sandbox. Port 18789 only becomes active after initialization completes. You can check logs with docker logs -f openclaw.
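Since the port stays closed for several minutes on first boot, a small readiness poller is handy in deploy scripts. A sketch (assuming the gateway listens on 18789 on the same host):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 360.0) -> bool:
    """Poll until a TCP port accepts connections, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # gateway may still be compiling its WASM sandbox
    return False

# Example (not run here); first boot can take ~5 minutes, so allow a long timeout:
# wait_for_port("127.0.0.1", 18789, timeout=360)
```

This beats a fixed `sleep 300` in deploy scripts because it proceeds as soon as the gateway is actually ready.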
For a detailed self-hosting walkthrough, see our OpenClaw installation guide and hosting options comparison.
4. Customizing Your Bot
Once your bot is running, the most impactful thing you can do is customize its personality and behavior. OpenClaw gives you three primary knobs: system prompt, model selection, and temperature.
System Prompt
The system prompt is the set of instructions sent to the AI model before every conversation. It defines who your bot is, what it knows, how it speaks, and what it should or should not do. This is the single most powerful customization in OpenClaw.
{
"agents": {
"defaults": {
"model": "gpt-4o",
"apiKey": "sk-...",
"systemPrompt": "You are Alex, a friendly personal productivity assistant. You help the user manage tasks, draft emails, and stay focused. Keep responses concise — under 150 words unless asked for detail. Always end with an actionable next step."
}
}
}

Tip: A good system prompt includes a name, a role, a tone, and behavioral constraints. Experiment with different prompts to find what works best for your use case.
Model Selection
OpenClaw supports multiple AI providers. Change the model field to switch between them:
| Model value | Provider | Best for |
|---|---|---|
| gpt-4o | OpenAI | General purpose, coding, analysis |
| claude-3-5-sonnet | Anthropic | Writing, reasoning, long context |
| gemini-1.5-pro | Google | Multimodal, fast, cost-efficient |
| gemini-flash | Google | Low-latency, high-volume usage |
Temperature
Temperature controls how creative or predictable your bot's responses are. It ranges from 0.0 (deterministic, always picks the most likely word) to 2.0 (highly creative and variable). For most assistants, a value between 0.5 and 0.8 works well.
{
"agents": {
"defaults": {
"model": "claude-3-5-sonnet",
"apiKey": "sk-ant-...",
"temperature": 0.7
}
}
}

Use lower temperature (0.0–0.3) for factual Q&A bots, higher (0.8–1.2) for creative writing or brainstorming assistants.
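Under the hood, temperature rescales the model's token scores before sampling: low values sharpen the distribution toward the top token, high values flatten it. A toy softmax illustration (not OpenClaw code):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert raw scores to probabilities; lower temperature = sharper distribution."""
    scaled = [x / max(temperature, 1e-6) for x in logits]
    peak = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                     # made-up scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic: top token dominates
hot = softmax_with_temperature(logits, 1.5)   # flatter: more variety when sampling
```

This is why a 0.0–0.3 setting gives consistent factual answers while 0.8+ produces noticeably more varied phrasing.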
5. Memory & Context
One of OpenClaw's most useful features is its conversation memory system. By default, OpenClaw keeps a sliding window of the last 20 message turns in memory for each conversation thread, so your bot can refer back to earlier parts of the current chat.
How the Context Window Works
Every time a user sends a message, OpenClaw assembles a "context payload" that gets sent to the AI model. This payload includes:
- The system prompt (your bot's instructions)
- The last N turns of conversation history
- Any injected memory summaries (if using the memory plugin)
- The current user message
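The assembly above can be sketched in a few lines, assuming one "turn" is a user/assistant pair and that the context window trims older history. This is an illustration of the idea, not OpenClaw's internals:

```python
def build_payload(system_prompt: str, history: list[dict],
                  memories: list[str], user_msg: str,
                  context_window: int = 20) -> list[dict]:
    """Assemble the context payload sent to the AI model for one reply."""
    # Keep only the most recent N turns (one turn = user + assistant = 2 messages).
    recent = history[-context_window * 2:]
    payload = [{"role": "system", "content": system_prompt}]
    if memories:
        # Summaries injected by the memory plugin, when enabled.
        payload.append({"role": "system",
                        "content": "Relevant past context:\n" + "\n".join(memories)})
    payload.extend(recent)
    payload.append({"role": "user", "content": user_msg})
    return payload
```

Trimming to the last N turns keeps per-request token cost bounded no matter how long the chat runs.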
Configuring Context Length
You can control how many turns of history are included using the contextWindow setting:
{
"agents": {
"defaults": {
"model": "gpt-4o",
"apiKey": "sk-...",
"contextWindow": 30
}
}
}

The Memory Plugin
For long-running assistants where conversations span days or weeks, the built-in memory plugin stores summaries of past conversations in a local SQLite database and automatically injects relevant context into new conversations. Enable it like this:
{
"agents": {
"defaults": {
"model": "gpt-4o",
"apiKey": "sk-...",
"plugins": {
"memory": {
"enabled": true,
"summaryInterval": 10,
"maxMemories": 50
}
}
}
}
}

summaryInterval controls how many turns trigger a new summary. maxMemories caps the number of stored summaries to prevent unbounded growth.
Pro tip: The memory plugin is especially useful for personal assistants where you want the bot to remember your preferences, ongoing projects, and past decisions across conversations.
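A rough sketch of the cadence those two settings imply, with assumed behavior (in OpenClaw the summaries would be model-generated and persisted to SQLite, not strings in a list):

```python
class MemoryStore:
    """Toy model of summaryInterval / maxMemories behavior; not the real plugin."""

    def __init__(self, summary_interval: int = 10, max_memories: int = 50):
        self.summary_interval = summary_interval
        self.max_memories = max_memories
        self.turns = 0
        self.memories: list[str] = []

    def record_turn(self, user_msg: str, reply: str) -> None:
        self.turns += 1
        if self.turns % self.summary_interval == 0:
            # Every summaryInterval turns, condense the recent window into a summary.
            start = self.turns - self.summary_interval + 1
            self.memories.append(f"summary of turns {start}-{self.turns}")
            # maxMemories caps stored summaries to prevent unbounded growth.
            self.memories = self.memories[-self.max_memories:]
```

With the defaults (summaryInterval 10, maxMemories 50), a very long-running chat would retain at most 50 rolling summaries covering its last 500 turns.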
6. Multi-Platform Deployment
One of OpenClaw's most powerful capabilities is running the same AI assistant on multiple platforms simultaneously. Your bot can be available on Telegram, WhatsApp, and Discord at the same time, sharing the same model configuration and personality.
Each platform has its own conversation threads and memory isolation — a conversation on Telegram doesn't bleed into your WhatsApp thread. But all channels share the same agent configuration, so you only need to update one system prompt to change behavior everywhere.
Telegram Setup
Telegram is the easiest channel to configure. You need a bot token from @BotFather. OpenClaw uses long-polling by default (no webhook required).
"channels": {
"telegram": {
"enabled": true,
"botToken": "123456789:AABBcc...",
"dmPolicy": "open",
"groupPolicy": "disabled"
}
}

WhatsApp Setup
WhatsApp integration uses OpenClaw's built-in WhatsApp Web bridge. You pair it by scanning a QR code from the OpenClaw Control UI. Important config notes:
"channels": {
"whatsapp": {
"enabled": true,
"dmPolicy": "open",
"selfChatMode": true,
"accounts": {
"default": {
"dmPolicy": "open"
}
}
}
}

Note: dmPolicy must be set at both the channel level and the account level. If only set at one level, OpenClaw's doctor process may override it to "pairing", which blocks messages.
Discord Setup
For Discord, you'll need to create a Discord application and bot token at discord.com/developers. Grant the bot message and guild permissions, then add it to your server.
"channels": {
"discord": {
"enabled": true,
"botToken": "MTAx...",
"dmPolicy": "open",
"guildPolicy": "allowlisted",
"allowedGuilds": ["123456789012345678"]
}
}

For a complete platform-by-platform setup walkthrough, see our OpenClaw setup guide.
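Putting the three channel configs together, the dmPolicy/guildPolicy gating might work roughly like this. The semantics here are assumed from the examples above, not taken from OpenClaw's source:

```python
def accept_message(channel_cfg: dict, is_dm: bool, guild_id=None) -> bool:
    """Illustrative gate: should this inbound message reach the agent?"""
    if not channel_cfg.get("enabled", False):
        return False
    if is_dm:
        # "pairing" (not modeled here) would additionally require a paired device.
        return channel_cfg.get("dmPolicy") == "open"
    # Group/guild messages: Discord uses guildPolicy, Telegram uses groupPolicy.
    policy = channel_cfg.get("guildPolicy", channel_cfg.get("groupPolicy", "disabled"))
    if policy == "open":
        return True
    if policy == "allowlisted":
        return guild_id in channel_cfg.get("allowedGuilds", [])
    return False

discord = {"enabled": True, "dmPolicy": "open",
           "guildPolicy": "allowlisted",
           "allowedGuilds": ["123456789012345678"]}
```

With this Discord config, DMs and messages from the allowlisted guild get through; messages from any other server are silently dropped.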
7. Skills & Plugins
Out of the box, OpenClaw is a conversational AI assistant. But through its plugin system, you can extend it with real-world capabilities — searching the web, generating images, reading files, calling APIs, and more.
Built-in Plugins
OpenClaw ships with several built-in plugins that you can enable in config.json:
memory — Persistent conversation storage
Stores conversation summaries in SQLite and injects relevant past context automatically. Covered in detail in Section 5.
web-search — Real-time web search
Gives your bot the ability to search the web when it doesn't know something or needs current information. Requires a Brave Search or Google Search API key.
image-gen — AI image generation
Allows users to request images via natural language. Supports DALL-E 3 (OpenAI) and Stable Diffusion backends.
reminders — Scheduled notifications
Users can ask the bot to send them a reminder at a specific time. The plugin stores reminders and triggers them via the channel it was configured on.
Enabling Plugins
{
"agents": {
"defaults": {
"model": "gpt-4o",
"apiKey": "sk-...",
"plugins": {
"memory": {
"enabled": true
},
"web-search": {
"enabled": true,
"apiKey": "BSK-your-brave-key"
},
"image-gen": {
"enabled": true,
"provider": "dalle3",
"apiKey": "sk-..."
}
}
}
}
}

Community Plugins
The OpenClaw community has published hundreds of third-party plugins on GitHub including calendar integration (Google Calendar, Outlook), home automation (Home Assistant), stock/crypto price lookup, and many more. Browse the openclaw-plugins topic on GitHub to find community plugins.
8. Advanced Configuration
Once you're comfortable with the basics, config.json has many more options for fine-tuning your deployment. Here is a complete annotated config showing the most useful advanced settings:
{
// Gateway settings
"gateway": {
"mode": "local", // MUST be "local" — do not change
"controlUi": {
// Set to true to skip device pairing in Control UI
"dangerouslyDisableDeviceAuth": true
}
},
// Agent configuration
"agents": {
"defaults": {
"model": "claude-3-5-sonnet",
"apiKey": "sk-ant-...",
"temperature": 0.7,
"contextWindow": 25, // Number of message turns to keep
"maxTokens": 1024, // Max tokens per response
"systemPrompt": "You are a helpful assistant.",
"plugins": {
"memory": { "enabled": true },
"web-search": { "enabled": false }
}
},
// Define additional specialized agents
"agents": [
{
"id": "coder",
"systemPrompt": "You are an expert software engineer. Answer coding questions with working code examples.",
"model": "gpt-4o",
"temperature": 0.2
}
]
},
// Channel configuration
"channels": {
"telegram": {
"enabled": true,
"botToken": "123456789:AABBcc...",
"dmPolicy": "open", // "open" | "pairing" | "disabled"
"groupPolicy": "disabled", // "open" | "disabled"
"defaultAgent": "default" // Which agent handles this channel
},
"whatsapp": {
"enabled": true,
"dmPolicy": "open",
"selfChatMode": true,
"accounts": {
"default": {
"dmPolicy": "open"
}
}
}
}
}

Important Config Rules
OpenClaw's config validation is strict. Unrecognized keys cause the gateway to exit with code 1, so it's important to avoid typos or deprecated fields. Key rules to remember:
- gateway.mode must always be "local". No other value is supported.
- Do not add systemPrompt or temperature directly under agents — they must go under agents.defaults.
- WhatsApp does not support selfPhoneMode, dms, or groups keys. Use selfChatMode and dmPolicy instead.
- The HTTP channel does not exist in current OpenClaw versions. Do not enable it.
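The strictness described above is worth emulating in your own tooling: catching a typo before you restart the container saves a failed boot. A sketch of a pre-flight validator covering just these rules (illustrative; OpenClaw's real schema is much larger):

```python
import sys

ALLOWED_TOP_LEVEL = {"gateway", "agents", "channels"}

def validate_config(config: dict) -> list[str]:
    """Return a list of error strings; empty means the checks passed."""
    errors = [f"unrecognized top-level key: {k!r}"
              for k in config if k not in ALLOWED_TOP_LEVEL]
    if config.get("gateway", {}).get("mode") != "local":
        errors.append('gateway.mode must be "local"')
    # Unsupported WhatsApp keys named in the rules above:
    wa = config.get("channels", {}).get("whatsapp", {})
    for bad in ("selfPhoneMode", "dms", "groups"):
        if bad in wa:
            errors.append(f"whatsapp does not support {bad!r}; use selfChatMode/dmPolicy")
    return errors

def main(config: dict) -> None:
    errs = validate_config(config)
    if errs:
        for e in errs:
            print("config error:", e)
        sys.exit(1)  # mirrors the gateway's exit code 1 on invalid config
```

Run such a check in CI or a pre-deploy hook so a bad config never reaches the running gateway.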
Environment Variables
For production deployments, avoid hardcoding API keys in config.json. OpenClaw reads environment variables at startup. Set them in your Docker run command or container environment:
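When both an environment variable and a config.json value are present, you generally want the environment to win. A sketch of that precedence (the variable names follow the ones used in this guide; the exact resolution order is an assumption, so check OpenClaw's docs for the real behavior):

```python
import os

# Maps env var name -> path of the config key it overrides (assumed mapping).
ENV_OVERRIDES = {
    "OPENAI_API_KEY": ("agents", "defaults", "apiKey"),
    "TELEGRAM_BOT_TOKEN": ("channels", "telegram", "botToken"),
}

def resolve(config: dict, env=None) -> dict:
    """Apply env-var overrides on top of the file-based config."""
    env = os.environ if env is None else env
    for var, path in ENV_OVERRIDES.items():
        if var in env:
            node = config
            for key in path[:-1]:
                node = node.setdefault(key, {})  # create nested sections as needed
            node[path[-1]] = env[var]            # env var beats the file value
    return config

cfg = resolve({"agents": {"defaults": {"apiKey": "from-file"}}},
              env={"OPENAI_API_KEY": "sk-from-env"})
```

This pattern keeps secrets out of config.json entirely: commit the file with placeholders and inject real keys at deploy time.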
docker run -d \
-e OPENAI_API_KEY="sk-..." \
-e TELEGRAM_BOT_TOKEN="123..." \
-e OPENCLAW_GATEWAY_TOKEN="your-secret-token" \
-e NODE_OPTIONS="--max-old-space-size=1536" \
--memory="2048m" \
alpine/openclaw:latest \
node /app/openclaw.mjs gateway

9. Next Steps
You've covered the full OpenClaw lifecycle — from understanding the architecture to deploying a multi-platform bot with memory and plugins. Here are some recommended next steps depending on your goals:
- Skip the setup and deploy your bot in 5 minutes with ClawMates. Free trial, no credit card.
- Comprehensive overview of the OpenClaw framework, history, and community.
- Compare self-hosting vs ClawMates — cost, complexity, and reliability tradeoffs.
- Step-by-step self-hosting guide for Docker, Fly.io, Railway, and VPS deployments.
- Individual setup guides for Telegram, WhatsApp, Discord, and more.
- Tips, use cases, and tutorials for getting the most out of your OpenClaw bot.
Frequently Asked Questions
Common questions from developers and non-technical users getting started with OpenClaw.
How long does it take to set up OpenClaw for the first time?
Self-hosting from scratch takes 30–90 minutes including Docker setup, config writing, and bot token creation. The gateway also takes approximately 5 minutes to start on first boot while it compiles its WASM sandbox. Using ClawMates, you can deploy a fully working bot in under 5 minutes — no Docker or server knowledge required.
Do I need to know how to code to use OpenClaw?
Basic self-hosting requires editing JSON config files and running terminal commands, but no programming knowledge is needed. ClawMates provides a no-code setup wizard so anyone can deploy an OpenClaw bot without touching a terminal.
Can I run multiple bots with one OpenClaw instance?
Yes. OpenClaw supports multiple agent profiles within a single gateway instance. Each agent can have its own system prompt, model, and channel bindings, letting you run a customer-service bot, a personal assistant, and a research bot from one deployment.
What AI models can I use with OpenClaw?
OpenClaw supports GPT-4o and GPT-4 (OpenAI), Claude 3.5 Sonnet and Claude 3 Opus (Anthropic), and Gemini 1.5 Pro and Gemini Flash (Google). You configure the model in config.json under agents.defaults.model.
How does OpenClaw handle conversation memory?
OpenClaw maintains an in-memory sliding window of recent messages per conversation thread. By default it keeps the last 20 turns. You can extend this with the memory plugin, which stores conversation summaries in a local SQLite database and injects them into the context window automatically.
Is OpenClaw free to use?
OpenClaw itself is free and open-source (MIT license). You pay only for the AI model API calls you make (OpenAI, Anthropic, or Google) and any server costs if self-hosting. ClawMates bundles hosting and AI API access into one flat $29.99/month subscription with a 7-day free trial.
Ready to deploy your bot?
Skip the setup complexity. ClawMates deploys a fully configured OpenClaw bot in under 5 minutes — no servers, no Docker, no headaches. Start free.
Deploy with ClawMates — Free Trial
7-day free trial · No credit card required · Cancel anytime