# claude-telegram-live-feed
A hardened character-by-character streaming Telegram CLI for AI assistants (built for Claude Code), plus the set of Claude Code hooks that enforce streaming-by-default and auto-heartbeat long commands.
The core idea: when an AI assistant replies to a Telegram chat, the user shouldn't wait in silence for a wall of text. Instead, the reply appears as a single message that grows word-by-word in front of them, with an alternating cursor glyph, finishing in 1-3 seconds. Long-running tasks produce automatic "starting" and "done in Ns" heartbeats so the channel never goes silent.
Built as part of the OpenClaw agent system, but usable by any tool that can shell out.
## Components

### bin/tg-stream — the streamer
A single-file bun script. It reads a message from arguments or stdin, sends a placeholder to Telegram, then repeatedly edits it with a growing prefix so the user sees a typing animation. It auto-picks a chunk size from the message length and a target stream duration (default: 2 seconds).
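The chunk-size heuristic can be sketched as follows. This is an illustrative reconstruction, not tg-stream's actual code; the per-edit interval constant and the function name are assumptions.

```typescript
// Hypothetical delay between successive editMessageText calls.
const EDIT_INTERVAL_MS = 150;

// Pick a chunk size so that streaming the whole message at one chunk per
// edit finishes within roughly targetSecs seconds.
function pickChunkSize(messageLen: number, targetSecs: number): number {
  const maxEdits = Math.max(1, Math.floor((targetSecs * 1000) / EDIT_INTERVAL_MS));
  // At least 1 character per chunk; longer messages get bigger chunks so
  // the stream still lands inside the target window.
  return Math.max(1, Math.ceil(messageLen / maxEdits));
}
```

Under these assumed constants, a 400-character reply with a 2-second target yields 13 edits of 31 characters each; tiny messages degrade gracefully to 1-character chunks.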
#### Features

- Direct calls to `api.telegram.org/bot<TOKEN>/editMessageText` — no MCP middleman
- Word-boundary snapping so chunks end at spaces
- Alternating `▌`/`▐` cursor to dodge Telegram's "message not modified" whitespace rejection
- Auto-split at 4000 characters to stay under Telegram's 4096-character message limit — long replies become multiple sequential bubbles
- Header line (`--header "🔧 working"`) stays fixed while the body grows
- Plain-send mode (`--no-stream`) for fast one-line acknowledgements
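Word-boundary snapping and cursor alternation can be sketched like this (function names and exact behavior are illustrative, not tg-stream's literal internals):

```typescript
const CURSORS = ["▌", "▐"];

// Snap a proposed cut index back to the nearest preceding space so a chunk
// never ends mid-word; fall back to the raw index if no space precedes it.
function snapToWordBoundary(text: string, cut: number): number {
  if (cut >= text.length) return text.length;
  const space = text.lastIndexOf(" ", cut);
  return space > 0 ? space : cut;
}

// Build the frame for one edit. Alternating the cursor glyph guarantees
// consecutive frames differ, sidestepping Telegram's
// "message is not modified" rejection when only whitespace would change.
function frame(text: string, cut: number, editIndex: number): string {
  return text.slice(0, snapToWordBoundary(text, cut)) + CURSORS[editIndex % 2];
}
```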
#### Safety contracts

- Global concurrency cap via `O_EXCL` atomic slot files (default: 10 concurrent; race-free, no `flock`)
- Per-fetch timeout via `Promise.race` (default: 10 seconds; works around bun AbortController bugs)
- Total wall-time budget with self-kill (default: 60 seconds)
- Exponential 429 retry with jitter, honoring server-suggested `retry_after`
- RSS memory watchdog reading `/proc/self/status` (default cap: 256 MB)
- Structured JSON failure log with line-count rotation (default: `/host/root/.caret/log/tg-stream.log`)
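The 429 retry policy can be sketched as a pure delay function. The base, cap, and jitter constants below are illustrative, not tg-stream's actual values; the `retry_after` field is the one Telegram returns on rate-limit responses.

```typescript
const BASE_DELAY_MS = 500;
const MAX_DELAY_MS = 8000;

// Compute the delay before retry number `attempt` (0-based). A server-
// suggested retry_after (seconds) always wins; otherwise back off
// exponentially with a little jitter to avoid thundering herds.
function retryDelayMs(
  attempt: number,
  retryAfterSecs?: number,
  jitter: () => number = Math.random,
): number {
  if (retryAfterSecs !== undefined) return retryAfterSecs * 1000;
  const exp = Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
  return exp + Math.floor(jitter() * 250); // up to 250 ms of jitter
}
```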
#### Usage

```sh
tg-stream "your text"                        # stream to default chat in ~2 seconds
tg-stream --header "🔧 working" "body text"  # static prefix + streamed body
tg-stream --target 3 "longer answer..."      # stretch the stream to 3 seconds
tg-stream --no-stream "short ack"            # plain send, no streaming
echo "from stdin" | tg-stream
tg-stream --chat 1234567 "explicit chat id"
```
#### Env knobs

| var | default | purpose |
|---|---|---|
| `TG_DEFAULT_CHAT` | env default, else hardcoded fallback | target chat_id when `--chat` omitted |
| `TG_MAX_CONCURRENT` | 10 | global concurrency cap |
| `TG_FETCH_TIMEOUT_MS` | 10000 | per-HTTP-request timeout |
| `TG_MAX_TOTAL_MS` | 60000 | total invocation wall budget |
| `TG_MEM_CAP_MB` | 256 | RSS cap before self-abort |
| `TG_LOG_FILE` | `/host/root/.caret/log/tg-stream.log` | structured JSON log path |
Token source: expects a file at `/root/.claude/channels/telegram/.env` containing `TELEGRAM_BOT_TOKEN=...`. Adjust the `ENV_FILE` constant at the top of `tg-stream` if your path differs.
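Extracting the token from that file is a one-liner; a minimal sketch, assuming the file is plain `KEY=VALUE` lines (the function name is hypothetical):

```typescript
// Pull TELEGRAM_BOT_TOKEN out of a KEY=VALUE .env file's contents.
// Returns undefined if the assignment is missing.
function readToken(envText: string): string | undefined {
  for (const line of envText.split("\n")) {
    const m = line.match(/^TELEGRAM_BOT_TOKEN=(.+)$/);
    if (m) return m[1].trim();
  }
  return undefined;
}
```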
#### Exit codes

| code | meaning |
|---|---|
| 0 | success |
| 1 | send failed (final attempt errored) |
| 2 | bad arguments |
| 3 | memory cap exceeded |
| 4 | back-pressure drop (≥ cap concurrent) |
| 5 | total time budget exceeded |
### bin/tg-task — the long-command wrapper
A bash wrapper that turns any shell command into a self-announcing task. It sends a "🔧 starting · label" message to Telegram before running, an "⏳ still on it · Ns elapsed" heartbeat every 8 seconds while running, and a "✅ done in Ns · label" completion message (with truncated stdout/stderr) after.
#### Usage

```sh
tg-task "label" -- <command...>
tg-task --target 3 "rendering 8K avatar" -- python3 /tmp/avatar.py
HEARTBEAT_SECS=5 tg-task "deploy" -- ./deploy.sh production
```
Use it for anything that might run longer than 5 seconds. No more waiting in silence.
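The three message shapes tg-task emits, copied from the description above, can be captured as tiny formatters (the helper names are hypothetical; the real wrapper is bash):

```typescript
// Announcement sent before the command runs.
const fmtStart = (label: string) => `🔧 starting · ${label}`;
// Heartbeat fired every HEARTBEAT_SECS while the command runs.
const fmtHeartbeat = (elapsedSecs: number) => `⏳ still on it · ${elapsedSecs}s elapsed`;
// Completion message after the command exits.
const fmtDone = (elapsedSecs: number, label: string) => `✅ done in ${elapsedSecs}s · ${label}`;
```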
### hooks/ — Claude Code PreToolUse / PostToolUse hooks
Three hooks that enforce the streaming behavior infrastructurally instead of relying on the assistant remembering to use the right tool.
`redirect-telegram-reply.sh` — PreToolUse hook matched against `mcp__plugin_telegram_telegram__reply`. Blocks plain Telegram reply calls without a `files` attachment (because those should stream), passes attachment-bearing calls through (because `tg-stream` doesn't do attachments yet). The assistant is physically unable to send a non-streamed reply once this hook is installed.
`bash-heartbeat-pre.sh` — PreToolUse hook matched against `Bash`. Fires a `tg-stream --no-stream "🔧 description"` in the background before every Bash call, except for a few noisy patterns (`tg-stream` itself to avoid loops, `ls`/`cat`/`echo`).
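The pre hook's skip filter amounts to a small predicate; a sketch under the assumption that the patterns above are matched against the start of the command (the exact pattern list in the real hook may differ):

```typescript
// Commands that would cause feedback loops (tg-stream itself) or are too
// noisy to announce (ls/cat/echo) get no heartbeat.
const SKIP_PATTERNS = [/^tg-stream\b/, /^(ls|cat|echo)\b/];

function shouldHeartbeat(command: string): boolean {
  return !SKIP_PATTERNS.some((p) => p.test(command.trim()));
}
```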
`bash-heartbeat-post.sh` — PostToolUse hook matched against `Bash`. Pairs with the pre hook via a small state file and, if the Bash call took longer than 5 seconds, fires a `✅ done in Ns · description` completion message.
#### Install (in `~/.claude/settings.json`)

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "mcp__plugin_telegram_telegram__reply",
        "hooks": [{ "type": "command", "command": "/path/to/hooks/redirect-telegram-reply.sh" }]
      },
      {
        "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "/path/to/hooks/bash-heartbeat-pre.sh" }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "/path/to/hooks/bash-heartbeat-post.sh" }]
      }
    ]
  }
}
```
The hooks use `node` to parse the JSON stdin payload rather than `jq`, because `jq` isn't always installed.
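That parse step can be sketched as a small function over the payload string. The `tool_input.command` field follows the Claude Code hook payload shape; treat the field names and the function name as assumptions here:

```typescript
// Extract the Bash command from a Claude Code hook payload read off stdin.
// Returns undefined on malformed input so the hook can skip the heartbeat
// instead of crashing the tool call.
function extractBashCommand(payloadJson: string): string | undefined {
  try {
    const payload = JSON.parse(payloadJson);
    return payload?.tool_input?.command;
  } catch {
    return undefined;
  }
}
```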
## Dependencies

- bun for `tg-stream`
- bash and `node` for the hooks and `tg-task`
- `curl` is not required — `tg-stream` uses bun's built-in `fetch`
## Why this exists
Telegram bot API responses from an LLM usually arrive as one big wall of text, which feels dead. By streaming edits to a single message, the reply appears to be "typed" in real time. Combined with hook enforcement, the assistant cannot accidentally regress to the wall-of-text behavior, and long-running tasks produce automatic heartbeats so the user never stares at a blank chat for more than a few seconds.
Built as part of the OpenClaw agent infrastructure. Battle-tested against:
- 20 parallel invocations → concurrency cap holds at 10, excess cleanly drops with exit 4, no slot leaks
- Forced 1ms fetch timeout → retry path engages, 3 attempts with exponential backoff, clean exit 1
- 5400-character payload → auto-splits into 2 sequential streaming bubbles, no `MESSAGE_TOO_LONG` errors
- Production deploy under real Telegram traffic → no perceptible regression vs the earlier unhardened version
See `docs/` for additional notes, including an upstream plugin bug draft for the MCP Telegram plugin's duplicate-poller race.
## License
MIT. Use it, copy it, adapt it.