A curated system prompt that stops your LLM from writing like a LinkedIn post. Measured: 71% character reduction on GPT-4o-mini, 56% on GPT-5.4, across 10 prompts in English and Chinese, without losing information.
When invoked, this skill installs the talk-normal rules into your workspace's file as an always-on behavior modifier. After install, every reply your OpenClaw agent produces follows the rules in that file. The rules live between `# --- talk-normal BEGIN ---` and `# --- talk-normal END ---` markers, so they do not conflict with your existing rules in the same file.
This is not a workflow skill you invoke per turn. It is a one-time installer that makes your agent permanently less verbose until you uninstall.
The script is idempotent: running it again replaces the existing talk-normal block in place with the latest rules. Nothing else in the file is touched.
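The marker-based replacement above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the skill's actual installer script; the function name and file handling are assumptions:

```python
import re
from pathlib import Path

BEGIN = "# --- talk-normal BEGIN ---"
END = "# --- talk-normal END ---"


def install_rules(path: Path, rules: str) -> None:
    """Insert or replace the talk-normal block between the markers.

    Idempotent: running it twice with the same rules leaves the file
    byte-identical, and only the marked block is ever rewritten.
    """
    block = f"{BEGIN}\n{rules.strip()}\n{END}"
    text = path.read_text() if path.exists() else ""
    pattern = re.compile(re.escape(BEGIN) + r".*?" + re.escape(END), re.DOTALL)
    if pattern.search(text):
        # Replace the existing block in place; a lambda avoids
        # backslash-escape surprises in the replacement text.
        new_text = pattern.sub(lambda m: block, text)
    else:
        # First install: append the block after any existing rules.
        new_text = (text.rstrip() + "\n\n" if text else "") + block + "\n"
    path.write_text(new_text)
```

Running `install_rules` a second time with updated rules swaps only the marked span, which is what makes upgrades safe.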
The contents of the skill's rule file get injected into your workspace file. The rules target the specific slop patterns that make LLM output sound corporate and padded, grouped into a few categories:
The exact rule list lives in the skill's rule file and evolves as new slop patterns get caught in the wild. Every commit to that file is a named slop pattern killed.
The first command pulls the latest skill bundle from ClawHub. The second command re-runs the idempotent installer, replacing the old rule block in your workspace file with the new one.
Works on any LLM that honors system prompts or custom instructions: GPT-5.4, GPT-4o-mini, Claude 4.6, Gemini 2.5, Grok 3, Qwen 3, DeepSeek V3, and others. OpenClaw integration is via workspace-file injection, which the agent reads on every turn.
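On the agent side, the per-turn injection is just prompt composition: read the workspace file, extract the marked block, and append it to the base system prompt. A minimal sketch, where the function names and file layout are illustrative assumptions rather than OpenClaw's actual API:

```python
import re
from pathlib import Path

BEGIN = "# --- talk-normal BEGIN ---"
END = "# --- talk-normal END ---"


def load_talk_normal(path: Path) -> str:
    """Pull the talk-normal rule block out of the workspace file, markers excluded."""
    text = path.read_text() if path.exists() else ""
    m = re.search(re.escape(BEGIN) + r"\n(.*?)\n" + re.escape(END), text, re.DOTALL)
    return m.group(1).strip() if m else ""


def build_system_prompt(base_prompt: str, rules_path: Path) -> str:
    """Compose the per-turn system prompt: base instructions plus the rules, if installed."""
    rules = load_talk_normal(rules_path)
    return f"{base_prompt}\n\n{rules}" if rules else base_prompt
```

Because the file is re-read on every turn, editing or uninstalling the block takes effect immediately with no agent restart.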
Full benchmark data for the 10 prompts across GPT-4o-mini and GPT-5.4 is in the upstream repository. Average reduction: 71% on GPT-4o-mini, 56% on GPT-5.4, while preserving the information content of the original responses.