OpenClaw — AI-assisted Coding Community · February 2026
Key insight: The LLM returns tool calls — the Gateway executes them locally. Your infra never talks to the LLM directly.
"Scale redquiz to 3 replicas"
exec: kubectl scale deployment/redquiz --replicas=3
"Done! Scaled redquiz to 3 replicas. Pods coming up now."
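The round-trip above can be sketched in a few lines. This is a minimal illustration, not OpenClaw's actual wire format: the tool-call shape and function names are hypothetical, but the core idea holds — the LLM only emits a structured request, and the Gateway runs it as a local subprocess.

```python
import shlex
import subprocess

# Hypothetical tool-call shape; the real OpenClaw format may differ.
tool_call = {"tool": "exec",
             "command": "kubectl scale deployment/redquiz --replicas=3"}

def execute_locally(call, timeout=30):
    # The Gateway, not the LLM, runs the command on your machine;
    # only the captured output goes back into the conversation.
    result = subprocess.run(shlex.split(call["command"]),
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout.strip() or result.stderr.strip()
```

The LLM never holds credentials or network access to your infra; it sees only the text output the Gateway chooses to return.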
Every message includes rich context automatically:
| Context | Source | Purpose |
|---|---|---|
| AGENTS.md | Your workspace | How to behave, memory rules |
| SOUL.md | Your workspace | Personality, tone, values |
| USER.md | Your workspace | Who you are, preferences |
| TOOLS.md | Your workspace | Local secrets, server IPs, credentials |
| Skills | System + custom | Available capabilities |
| Memory | memory/*.md | Conversation history, notes |
These are always available — no configuration needed.
Embeddings turn text into vectors that capture meaning — so "deployment decision" finds notes about "agreed to ship Friday".
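A minimal sketch of how such a lookup works. The toy character-trigram vector below is only a stand-in for a real embedding model (e.g. nomic-embed-text), which would match by meaning rather than by shared words — the ranking mechanics (embed everything, compare by cosine similarity) are the same either way.

```python
import math

def toy_embed(text, dims=64):
    # Stand-in for a real embedding model: hash character trigrams
    # into a small fixed-size vector, then L2-normalize it.
    vec = [0.0] * dims
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

notes = ["agreed to ship Friday after the deployment review",
         "grocery list: milk, eggs"]
index = [(note, toy_embed(note)) for note in notes]

def search(query):
    q = toy_embed(query)
    return max(index, key=lambda item: cosine(q, item[1]))[0]
```

With a real model, `search("deployment decision")` would surface the shipping note even with zero word overlap; that is the point of semantic memory.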
Skills teach the LLM how to use specific tools.
Each skill is described by a SKILL.md file. The system prompt includes a skill index:
<available_skills>
<skill>
<name>gitlab</name>
<description>GitLab CI/CD: pipelines, jobs, logs, artifacts.</description>
<location>~/clawd/skills/gitlab/SKILL.md</location>
</skill>
<skill>
<name>hetzner</name>
<description>Hetzner Cloud: servers, metrics, firewall.</description>
<location>~/clawd/skills/hetzner/SKILL.md</location>
</skill>
<skill>
<name>weather</name>
<description>Get current weather and forecasts.</description>
<location>/opt/openclaw/skills/weather/SKILL.md</location>
</skill>
...
</available_skills>
Rule: The LLM only reads a skill file when the task matches its description. Keeps context lean.
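That just-in-time loading can be sketched as follows. The index entries mirror the XML above; the naive keyword match is only a stand-in for the LLM's own judgment about whether a task matches a description.

```python
# The <available_skills> index, parsed from the system prompt into dicts.
skills = [
    {"name": "gitlab",
     "description": "GitLab CI/CD: pipelines, jobs, logs, artifacts.",
     "location": "~/clawd/skills/gitlab/SKILL.md"},
    {"name": "weather",
     "description": "Get current weather and forecasts.",
     "location": "/opt/openclaw/skills/weather/SKILL.md"},
]

def skill_for(task):
    # Naive keyword match stands in for the LLM's judgment: only the
    # matching SKILL.md is read into context, keeping the prompt lean.
    words = [w for w in task.lower().split() if len(w) > 3]
    for skill in skills:
        if any(w in skill["description"].lower() for w in words):
            return skill["location"]
    return None  # no match: no skill file is loaded at all
```

Only the short name/description pairs live in every prompt; the full documentation is paid for in tokens only when it's actually needed.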
skills/gitlab/
├── SKILL.md # The documentation the LLM reads
└── gitlab-ci.sh # Helper script (optional)
# SKILL.md structure:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# GitLab CI/CD Skill
## Overview
Query GitLab CI pipelines, jobs, and logs.
## Authentication
Token at: ~/.gitlab-token
## Commands
### List pipelines
```bash
./gitlab-ci.sh pipelines <project>
```
### Get job log
```bash
./gitlab-ci.sh log <project> <job-id>
```
## Examples
"Show me the latest pipeline for redquiz" →
./gitlab-ci.sh pipelines gutschilla/redquiz
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
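For illustration, here is roughly what such a helper could do, sketched in Python. This is hypothetical — the talk doesn't show gitlab-ci.sh's contents — but the endpoints are the standard GitLab v4 REST API, and the token path comes from the SKILL.md above.

```python
import json
import urllib.parse
import urllib.request
from pathlib import Path

API = "https://gitlab.com/api/v4"

def _get(path):
    # Token location as documented in SKILL.md.
    token = Path("~/.gitlab-token").expanduser().read_text().strip()
    req = urllib.request.Request(API + path,
                                 headers={"PRIVATE-TOKEN": token})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def pipelines(project):
    # GitLab wants "group/project" URL-encoded as "group%2Fproject".
    pid = urllib.parse.quote(project, safe="")
    return json.loads(_get(f"/projects/{pid}/pipelines?per_page=5"))

def job_log(project, job_id):
    pid = urllib.parse.quote(project, safe="")
    return _get(f"/projects/{pid}/jobs/{job_id}/trace")
```

The helper keeps the SKILL.md short: the LLM only needs to know the two commands and their arguments, not the API details.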
| Source | Location | Examples |
|---|---|---|
| Built-in | /opt/openclaw/skills/ | weather, web_search, himalaya (email) |
| Your workspace | ~/clawd/skills/ | gitlab, hetzner, k8s, inwx-dns |
| Community | clawhub.com | Shared skills (coming soon) |
To teach a new capability, create ~/clawd/skills/my-tool/SKILL.md with commands and examples.

Domain management with 2FA — from the demo:
# SKILL.md (excerpt)
## Authentication
- User: gutsch.it
- Password: in TOOLS.md
- **2FA Required**: Ask user for OTP each time
## Commands
### List domains
```bash
INWX_USER="..." INWX_PASS="..." INWX_OTP=<code> \
python3 ~/clawd/tools/inwx-dns/inwx-dns.py list
```
### Check domain expiry
```bash
... inwx-dns.py info <domain>
```
## Notes
- OTPs expire in ~30 seconds
- Always ask user for fresh OTP before API calls
Result: Claude knows it needs to ask for OTP, runs the command, parses the result.
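That flow can be sketched as below. `ask_user` is a hypothetical stand-in for the chat round-trip back to you; the environment-variable names and script path come from the SKILL.md excerpt above.

```python
import os
import subprocess

def ask_user(prompt):
    # In OpenClaw this would be a message in your chat channel.
    return input(prompt)

def with_otp_env(user, password, otp):
    # Credentials are injected per call; the OTP is never stored.
    return dict(os.environ, INWX_USER=user, INWX_PASS=password, INWX_OTP=otp)

def list_domains(user, password):
    # OTPs expire in ~30 seconds, so ask for a fresh one every time.
    otp = ask_user("Please send a fresh OTP: ")
    cmd = ["python3",
           os.path.expanduser("~/clawd/tools/inwx-dns/inwx-dns.py"), "list"]
    result = subprocess.run(cmd, env=with_otp_env(user, password, otp),
                            capture_output=True, text=True)
    return result.stdout
```

Nothing here is hard-coded into OpenClaw — the "ask for a fresh OTP first" behavior exists only because the SKILL.md says so.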
"When do my production domains expire?"
The LLM's reasoning: find the skill that manages domains (inwx-dns), list the domains, judge which of them count as "production", then check expiry for each.

Cross-system correlation — that's not a script. It's reasoning about which tools to combine.
Risk: the LLM sees secrets in its context (TOOLS.md holds tokens, server IPs, credentials), so a prompt injection could trick it into exfiltrating them.
| Layer | What It Does |
|---|---|
| Channels | Signal, Telegram, Discord, Webchat, TUI — where you talk |
| Gateway | Routes messages, manages sessions, loads context |
| LLM | Reasons about your request, decides what tools to use |
| Skills | Just-in-time docs for specific tools |
| Execution | Shell commands, APIs, file operations — on your machine |
The key insight:
Skills are just markdown. The LLM reads them when needed. You teach it new capabilities by writing documentation.
💡 The TUI is incredibly powerful on a big screen — full path completions, syntax highlighting, and you see the AI working in real-time.
For real work, you need capable models — that's not free.
| Option | Cost | Notes |
|---|---|---|
| Claude Pro | $20/month | Good for daily use, rate-limited |
| Claude Max | $100+/month | Heavy use, Opus 4.5 access |
| API (pay-as-you-go) | ~$15/M tokens (Opus) | Best quality, no rate limits |
| Local (Ollama) | Hardware once | RTX 3090 or Mac Mini M4 (~€2500) |
Semantic memory search requires an embedding model:
nomic-embed-text or mxbai-embed-large run locally (e.g. via Ollama) at zero API cost.

Running OpenClaw on my company-provided Mac
github.com/openclaw/openclaw