OpenClaw Money-Saving Strategy: Saving Two Thousand a Month - What Am I Doing Right?
Original Article Title: Why My OpenClaw Sessions Burned 21.5M Tokens in a Day (And What Actually Fixed It)
Original Article Author: MOSHIII
Translation: Peggy, BlockBeats
Editor's Note: As Agent applications are rapidly adopted, many teams have run into a seemingly anomalous phenomenon: the system appears to be running smoothly, yet token costs keep climbing unnoticed. This article shows that in a real OpenClaw workload, the cost explosion often stems not from user input or model output but from overlooked cached-prefix replay: the model re-reads a large historical context on every call, driving significant token consumption.
Using concrete session data, the article demonstrates how large intermediate artifacts, such as tool outputs, browser snapshots, and JSON logs, are continuously written into the historical context and repeatedly re-read inside the agent loop.
From this case study, the author lays out a clear optimization path: context structure design, tool output management, and compaction configuration. For developers building Agent systems, this is both a technical troubleshooting record and a practical money-saving guide.
Below is the original article:
I analyzed a real OpenClaw workload and discovered a pattern that I believe many Agent users will recognize:
The token usage looks "active."
The replies appear normal.
But the token consumption suddenly explodes.
Here is a breakdown of the structure, the root cause, and a practical fix path.
TL;DR
The biggest cost driver is not overly long user messages. It is the massive cached prefix being repeatedly replayed.
From the session data:
Total tokens: 21,543,714
cacheRead: 17,105,970 (79.40%)
input: 4,345,264 (20.17%)
output: 92,480 (0.43%)
In other words, the majority of the inference cost lies not in processing new user intent, but in repeatedly reading a massive historical context.
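A breakdown like the one above can be reproduced with a short script. This is a minimal sketch: the exact record shape is an assumption based on the `usage` fields quoted in this article (`input`, `output`, `cacheRead` inside a `usage` object per JSONL line), not a documented OpenClaw schema.

```python
import json

def usage_breakdown(jsonl_lines):
    """Aggregate token usage from session JSONL lines.

    Assumes lines that carry usage have a `usage` object with
    `input`, `output`, and `cacheRead` token counts (field names
    inferred from the stats quoted in this article).
    """
    totals = {"input": 0, "output": 0, "cacheRead": 0}
    for line in jsonl_lines:
        rec = json.loads(line)
        usage = rec.get("usage")
        if not usage:
            continue
        for key in totals:
            totals[key] += usage.get(key, 0)
    total = sum(totals.values())
    shares = {k: v / total for k, v in totals.items()} if total else {}
    return totals, total, shares

# Tiny inline example: a single call dominated by cache reads.
lines = [
    json.dumps({"usage": {"input": 1200, "output": 300, "cacheRead": 175000}}),
    json.dumps({"type": "toolResult"}),  # no usage field; skipped
]
totals, total, shares = usage_breakdown(lines)
print(totals, total, shares)
```

On the real transcripts, the same aggregation surfaces the 79% / 20% / 0.4% split immediately.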
The "Wait, Why?" Moment
I originally thought high token usage came from: very long user prompts, extensive output generation, or expensive tool invocations.
But the predominant pattern is:
input: hundreds to thousands of tokens
cacheRead: 170k to 180k tokens per call
In other words, the model is rereading the same massive stable prefix every round.
Data Scope
I analyzed data at two levels:
1. Runtime logs
2. Session transcripts
It's worth noting that:
Runtime logs are primarily used to observe behavioral signals (e.g., restarts, errors, configuration issues)
Precise token counts come from the usage field in session JSONL
Scripts used:
scripts/session_token_breakdown.py
scripts/session_duplicate_waste_analysis.py
Analysis files generated:
tmp/session_token_stats_v2.txt
tmp/session_token_stats_v2.json
tmp/session_duplicate_waste.txt
tmp/session_duplicate_waste.json
tmp/session_duplicate_waste.png
Where Are the Tokens Actually Going?
1) Concentration by Session
There is one session that consumes significantly more than others:
570587c3-dc42-47e4-9dd4-985c2a50af86: 19,204,645 tokens
This is followed by a sharp drop-off:
ef42abbb-d8a1-48d8-9924-2f869dea6d4a: 1,505,038
ea880b13-f97f-4d45-ba8c-a236cf6f2bb5: 649,584
2) Concentration by Behavior
The tokens mainly come from:
toolUse: 16,372,294
stop: 5,171,420
The cost is driven primarily by tool-call loops rather than regular chat.
3) Concentration in Time
The token peaks are not random but rather concentrated in a few time slots:
2026-03-08 16:00: 4,105,105
2026-03-08 09:00: 4,036,070
2026-03-08 07:00: 2,793,648
What Exactly Is in the Massive Cache Prefix?
It's not the conversation content but mainly large intermediate artifacts:
Massive toolResult data blocks
Lengthy reasoning/thinking traces
Large JSON snapshots
File lists
Browser fetch data
Sub-Agent conversation logs
In the largest session, the character count is approximately:
toolResult:text: 366,469 characters
assistant:thinking: 331,494 characters
assistant:toolCall: 53,039 characters
Once these contents are retained in the historical context, every subsequent call may re-read them as part of the cached prefix.
Specific Example (from session file)
A significantly large context block repeatedly appears at the following locations:
sessions/570587c3-dc42-47e4-9dd4-985c2a50af86.jsonl:70
Large Gateway JSON Log (approx. 37,000 characters)
sessions/570587c3-dc42-47e4-9dd4-985c2a50af86.jsonl:134
Browser Snapshot + Security Encapsulation (approx. 29,000 characters)
sessions/570587c3-dc42-47e4-9dd4-985c2a50af86.jsonl:219
Large File List Output (approx. 41,000 characters)
sessions/570587c3-dc42-47e4-9dd4-985c2a50af86.jsonl:311
session/status Status Snapshot + Large Prompt Structure (approx. 30,000 characters)
"Duplicate Content Waste" vs. "Cache Replay Burden"
I also measured the duplicate content ratio within a single invocation:
Approximate duplication ratio: 1.72%
It does exist but is not the primary issue.
The real problem is that the absolute size of the cached prefix is too large.
The structure: a massive historical context, re-read on every round, with only a small amount of new input stacked on top.
Therefore, the optimization focus is not on deduplication, but on context structure design.
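For reference, a duplication ratio like the 1.72% above can be estimated with a rough chunk-based measure. This is an illustrative method, not the article's actual `session_duplicate_waste_analysis.py` script: split the context into fixed-size chunks and count characters in chunks that appear more than once.

```python
from collections import Counter

def duplication_ratio(text, chunk=200):
    """Rough duplicate-content measure (illustrative, not the
    article's exact script): fraction of characters belonging to
    repeated fixed-size chunks."""
    chunks = [text[i:i + chunk] for i in range(0, len(text), chunk)]
    counts = Counter(chunks)
    dup_chars = sum(len(c) * (n - 1) for c, n in counts.items() if n > 1)
    return dup_chars / len(text) if text else 0.0

# A context where one 200-char block repeats once:
print(duplication_ratio("A" * 200 + "B" * 200 + "A" * 200))
```

A low number here, as in this case, tells you deduplication will not recover much; the prefix is simply too big.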
Why is Agent Loop particularly prone to this issue?
Three mechanisms overlapping:
1. A large amount of tool output is written to historical context
2. Tool looping generates a large number of short interval calls
3. Minimal prefix changes → cache is re-read every time
If context compaction is not stably triggered, the issue will quickly escalate.
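A toy arithmetic model (illustrative numbers, not taken from the session data) makes the compounding effect concrete: if each round appends artifacts to the history and the whole prefix is re-read on the next call, cumulative read tokens grow roughly quadratically with the number of rounds.

```python
def cumulative_read_tokens(rounds, base_prefix, growth_per_round):
    """Tokens read across an agent loop when each call replays the
    full history. `growth_per_round` is what each round appends
    (tool output, reasoning traces, etc.). All numbers illustrative."""
    total = 0
    prefix = base_prefix
    for _ in range(rounds):
        total += prefix             # the whole prefix is re-read this round
        prefix += growth_per_round  # this round's artifacts join the history
    return total

# 100 rounds, 20k-token starting prefix, 1.5k tokens appended per round
# already reads millions of tokens, even with zero user input growth:
print(cumulative_read_tokens(100, 20_000, 1_500))
```

This is why compaction matters: capping the prefix turns the quadratic term back into a linear one.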
Most Critical Remediation Strategies (by impact)
P0: Avoid stuffing massive tool output into long-lived context
For oversized tool output:
· Keep summary + reference path / ID
· Write original payload to a file artifact
· Do not retain the full original text in chat history
Priority to limit these categories:
· Large JSON
· Long directory lists
· Browser full snapshots
· Sub-Agent full transcripts
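A minimal sketch of this summarize-and-artifact pattern (the threshold, directory, and function names are all assumptions for illustration): oversized tool output is written to a file artifact, and only a short summary plus the reference path stays in chat history.

```python
import hashlib
from pathlib import Path

MAX_INLINE_CHARS = 2_000          # illustrative threshold; tune per workload
ARTIFACT_DIR = Path("tmp/artifacts")

def compact_tool_result(text: str) -> str:
    """Return what should be stored in long-lived context.

    Small outputs pass through unchanged; large ones are written to a
    file artifact and replaced by a summary plus a reference path.
    """
    if len(text) <= MAX_INLINE_CHARS:
        return text
    ARTIFACT_DIR.mkdir(parents=True, exist_ok=True)
    name = hashlib.sha256(text.encode()).hexdigest()[:12] + ".txt"
    path = ARTIFACT_DIR / name
    path.write_text(text)
    head = text[:200].replace("\n", " ")
    return (f"[tool output truncated: {len(text):,} chars total; "
            f"full payload saved to {path}]\nPreview: {head}")

print(compact_tool_result("small result"))     # kept inline unchanged
print(compact_tool_result("x" * 50_000)[:80])  # replaced by summary + path
```

The agent can still re-open the artifact on demand, but it no longer rides along in every subsequent call.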
P1: Ensure the compaction mechanism truly takes effect
In this dataset, a configuration compatibility issue came up repeatedly: an invalid compaction key.
This silently disables the optimization mechanism.
Correct approach: use only version-compatible configurations
Then verify:
openclaw doctor --fix
and check the startup logs to confirm that compaction took effect.
P1: Reduce reasoning-text persistence
Avoid having long reasoning texts replayed round after round.
In production: persist brief summaries instead of the complete reasoning.
P2: Improve prompt caching design
The goal is not to maximize cacheRead; it is to apply caching to compact, stable, high-value prefixes.
Recommendations:
· Put stable rules into system prompt
· Avoid putting unstable data under stable prefixes
· Avoid injecting large amounts of debug data each round
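The ordering rule above can be sketched as a request builder. The message shape here is a generic chat-API assumption, not a specific OpenClaw interface: keep the stable prefix byte-identical across rounds so it stays cacheable, and push volatile data (debug dumps, timestamps) to the end.

```python
# Changes rarely, so it can be cached as a stable prefix:
STABLE_SYSTEM = "You are a coding agent. Rules: ..."

def build_messages(history, new_input, debug_info=None):
    """Assemble a cache-friendly request: stable system prompt first,
    compacted history next, volatile data last (after the cacheable
    prefix, so it never invalidates it)."""
    messages = [{"role": "system", "content": STABLE_SYSTEM}]
    messages += history                    # compacted history only
    content = new_input
    if debug_info:                         # volatile data goes last
        content += f"\n\n[debug]\n{debug_info}"
    messages.append({"role": "user", "content": content})
    return messages
```

Anything that changes every round and sits before the stable material breaks the shared prefix and forfeits the cache entirely.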
Implementation Stop-Loss Plan (if I were to tackle it tomorrow)
1. Identify the session with the highest cacheRead percentage
2. Run /compact on runaway sessions
3. Add truncation + artifacting to tool outputs
4. Rerun token stats after each modification
Focus on tracking four KPIs:
cacheRead / totalTokens
average total tokens per toolUse call
number of calls with >= 100k tokens
the largest session's share of total tokens
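These four KPIs can be computed from per-call usage records. A sketch under an assumed record shape (`session`, `kind`, and the three token fields are illustrative names; adapt to your transcript format):

```python
from collections import defaultdict

def kpis(calls):
    """Compute the four tracking KPIs from per-call records shaped like
    {"session": str, "kind": "toolUse" | "stop",
     "input": int, "output": int, "cacheRead": int}
    (an illustrative shape, not a documented schema)."""
    def call_total(c):
        return c["input"] + c["output"] + c["cacheRead"]

    total = sum(call_total(c) for c in calls)
    cache_read = sum(c["cacheRead"] for c in calls)
    tool_calls = [c for c in calls if c["kind"] == "toolUse"]
    tool_avg = (sum(call_total(c) for c in tool_calls) / len(tool_calls)
                if tool_calls else 0.0)
    big_calls = sum(1 for c in calls if call_total(c) >= 100_000)
    per_session = defaultdict(int)
    for c in calls:
        per_session[c["session"]] += call_total(c)
    max_share = max(per_session.values()) / total if total else 0.0
    return {
        "cacheRead_ratio": cache_read / total if total else 0.0,
        "toolUse_avg_total_per_call": tool_avg,
        "calls_ge_100k": big_calls,
        "max_session_share": max_share,
    }
```

Re-running this after each change gives an immediate read on whether a fix actually moved the needle.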
Success Signals
If the optimization is successful, you should see:
A noticeable reduction in calls with 100k+ tokens
A decrease in cacheRead percentage
A decrease in toolUse call weight
A decrease in the dominance of individual sessions
If these metrics do not move, your context policies are still too loose.
Reproducibility Experiment Command
python3 scripts/session_token_breakdown.py 'sessions' \
--include-deleted \
--top 20 \
--outlier-threshold 120000 \
--json-out tmp/session_token_stats_v2.json \
> tmp/session_token_stats_v2.txt
python3 scripts/session_duplicate_waste_analysis.py 'sessions' \
--include-deleted \
--top 20 \
--png-out tmp/session_duplicate_waste.png \
--json-out tmp/session_duplicate_waste.json \
> tmp/session_duplicate_waste.txt
Conclusion
If your Agent system appears to be working fine while costs keep rising, check one thing first: are you paying for new inference, or for large-scale replay of old context?
In my case, the majority of costs actually came from context replays.
Once you realize this, the solution becomes clear: Strictly control the data entering long-lived contexts.