9 Commits

Author SHA1 Message Date
user
ae6c59b28f docs: update poller dispatcher, PR state machine, agent chaining (closes #7)
Some checks failed
check / check (push) Failing after 8s
2026-02-28 16:13:59 -08:00
ccf08cfb67 Merge pull request 'docs: update poller to dispatcher architecture (closes #4)' (#5) from fix/update-poller-docs into main
All checks were successful
check / check (push) Successful in 9s
2026-02-28 16:31:43 +01:00
clawbot
0284ea63c0 docs: update poller to dispatcher architecture (closes #4)
All checks were successful
check / check (push) Successful in 11s
Replace flag-file + heartbeat approach with the production dispatcher
pattern: poller triages notifications and spawns isolated agents
directly via openclaw cron. Adds assignment scan for self-created
issues. Response time ~15-60s instead of ~30 min.
2026-02-28 06:29:32 -08:00
f3e48c6cd4 Merge pull request 'Expand sensitive output routing and make inbox references conditional' (#3) from fix/pii-and-conditional-email into main
All checks were successful
check / check (push) Successful in 9s
Reviewed-on: #3
2026-02-28 15:22:36 +01:00
clawbot
c0d345e767 expand PII routing to cover secrets, credentials, and operational info; make email/inbox references conditional
All checks were successful
check / check (push) Successful in 12s
- Rename 'PII Output Routing' → 'Sensitive Output Routing' throughout
- Expand scope to include secrets, credentials, API keys, flight numbers,
  locations, travel plans, medical info
- Replace hardcoded 'Emails' heartbeat check with conditional language
  ('Notifications — whatever inbox sources you've integrated')
- Remove 'email' from heartbeat-state.json example
- Update cross-references in SETUP_CHECKLIST.md
2026-02-28 03:40:13 -08:00
user
36223ca550 fix: agent should infer needed fields, not wait to be told
All checks were successful
check / check (push) Successful in 12s
2026-02-28 03:33:08 -08:00
user
f0a2a5eb62 docs: update Gitea notification section — webhook vs poller, flag-file approach
Some checks are pending
check / check (push) Waiting to run
- Replaced wake-event poller with flag-file approach (prevents DM spam)
- Added Option A (webhooks for VPS) vs Option B (poller for NAT)
- Documented the wake-event failure mode and why we switched
2026-02-28 03:30:49 -08:00
9631535583 Merge pull request 'Rewrite SETUP_CHECKLIST.md: replace checklists with paste-able agent prompts' (#1) from rewrite-setup-checklist-prompts into main
Some checks are pending
check / check (push) Waiting to run
2026-02-28 12:27:17 +01:00
user
b0495d5b56 rewrite SETUP_CHECKLIST.md: replace checklist items with paste-able agent prompts
All checks were successful
check / check (push) Successful in 13s
Each section now contains a self-contained prompt in a code block that
adopting users can paste directly to their agent. Prompts include full
URLs to raw reference docs. Fixes 'you provide' wording to 'your human
provides'. Keeps same phase/section structure.
2026-02-28 03:22:08 -08:00
3 changed files with 1118 additions and 606 deletions


@@ -74,103 +74,108 @@ back to issues.
### PR State Machine

Once a PR exists, it enters a finite state machine tracked by Gitea labels. Each
PR has exactly one state label at a time, plus a `bot` label indicating it's the
agent's turn to act.

#### States (Gitea Labels)

| Label | Color | Meaning |
| -------------- | ------ | ----------------------------------------------------- |
| `needs-review` | yellow | Code pushed, `docker build .` passes, awaiting review |
| `needs-rework` | purple | Code review found issues that need fixing |
| `merge-ready` | green | Reviewed clean, build passes, ready for human |
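The state labels (plus `bot`) only need to exist once per repo. A minimal bootstrap sketch against Gitea's label API; the hex colors and the `bot` entry are illustrative stand-ins, not values from our setup:

```python
import json
import urllib.request

# Label names come from the table above; colors are illustrative hex picks.
STATE_LABELS = [
    {"name": "needs-review", "color": "#fbca04", "description": "Awaiting review"},
    {"name": "needs-rework", "color": "#8250df", "description": "Review found issues"},
    {"name": "merge-ready", "color": "#2da44e", "description": "Ready for human"},
    {"name": "bot", "color": "#0969da", "description": "Agent's turn to act"},
]


def create_labels(gitea_url: str, token: str, repo_full: str) -> None:
    """POST each label to Gitea's /repos/{owner}/{repo}/labels endpoint."""
    for label in STATE_LABELS:
        req = urllib.request.Request(
            f"{gitea_url}/api/v1/repos/{repo_full}/labels",
            data=json.dumps(label).encode(),
            headers={
                "Authorization": f"token {token}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        urllib.request.urlopen(req, timeout=15)
```

Creation is not idempotent; Gitea returns an error if a label already exists, so run this once per repo.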
Earlier iterations included `needs-rebase` and `needs-checks` states, but we
eliminated them. Rebasing is handled inline by workers and reviewers (they
rebase onto the target branch as part of their normal work). And `docker build .`
is the only check — it's run by workers before pushing and by reviewers before
approving. There's no separate "checks" phase.

#### The `bot` Label + Assignment Model

The `bot` label signals that an issue or PR is the agent's turn to act. The
assignment field tracks who is actively working on it:

- **`bot` label + unassigned** = work available, poller dispatches an agent
- **`bot` label + assigned to agent** = actively being worked
- **No `bot` label** = not the agent's turn (either human's turn or done)

The notification poller assigns the agent account to the issue at dispatch time,
before the agent session even starts. This prevents race conditions — by the
time a second poller scan runs, the issue is already assigned and gets skipped.

When the agent finishes its step and spawns the next agent, it unassigns itself
first (releasing the lock). The next agent's first action is to verify it's the
only one working on the issue by checking comments for duplicate work.

At chain-end (`merge-ready`): the agent assigns the human and removes the `bot`
label. The human's PR inbox contains only PRs that are genuinely ready to merge.

#### Agent Chaining — No Self-Review

Each step in the pipeline is handled by a separate, isolated agent session.
Agents spawn the next agent in the chain via `openclaw cron add --session
isolated`. This enforces a critical rule: **the agent that wrote the code never
reviews it.**

The chain looks like this:
```
Worker agent (writes/fixes code)
  → docker build . → push → label needs-review
  → unassign self → spawn reviewer agent → STOP

Reviewer agent (reviews code it didn't write)
  → read diff + referenced issues → review
  → PASS: rebase if needed → docker build . → label merge-ready
          → assign human → remove bot label → STOP
  → FAIL: comment findings → label needs-rework
          → unassign self → spawn worker agent → STOP
```
The cycle repeats (worker → reviewer → worker → reviewer → ...) until the
reviewer approves. Each agent is a fresh session with no memory of previous
iterations — it reads the issue comments and PR diff to understand context.

#### TOCTOU Protection

Just before changing labels or assignments, agents re-read all comments and
current labels via the API. If the state changed since they started (another
agent already acted), they report the conflict and stop. This prevents stale
agents from overwriting fresh state.

#### Race Detection

If an agent starts and finds its work was already done (e.g., a reviewer sees a
review was already posted, or a worker sees a PR was already created), it
reports to the status channel and stops.
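A sketch of what the TOCTOU re-check can look like in code. The helper names (`snapshot`, `state_changed`) are illustrative, not taken from the actual agent prompts; the dictionaries mirror Gitea's issue and comment JSON:

```python
def snapshot(issue: dict, comments: list[dict]) -> tuple:
    """Capture the state an agent based its decision on: labels,
    assignees, and the id of the newest comment."""
    labels = tuple(sorted(l["name"] for l in issue.get("labels") or []))
    assignees = tuple(sorted(a["login"] for a in issue.get("assignees") or []))
    last_comment_id = comments[-1]["id"] if comments else None
    return (labels, assignees, last_comment_id)


def state_changed(before: tuple, after: tuple) -> bool:
    """True if another agent acted between the read and the write."""
    return before != after
```

The agent takes a snapshot when it starts, re-fetches the issue and comments just before any PATCH, and stops (posting a conflict comment) if the two snapshots differ.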
#### The Loop in Practice

A typical PR goes through this cycle:

1. Worker agent creates PR, runs `docker build .`, labels `needs-review`
2. Worker spawns reviewer agent
3. Reviewer reads diff — finds a missing error check → labels `needs-rework`
4. Reviewer spawns worker agent
5. Worker fixes the error check, rebases, runs `docker build .`, labels
   `needs-review`
6. Worker spawns reviewer agent
7. Reviewer reads diff — looks good → rebases → `docker build .` → labels
   `merge-ready`, assigns human
8. Human reviews, merges

Steps 1-7 happen without human involvement. Each step is a separate agent
session that spawns the next one.
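The hand-off between steps can be sketched as a small helper that builds the `openclaw cron add` invocation. The flags match the dispatcher's; the helper name and prompt text are illustrative:

```python
def chain_spawn_argv(role: str, repo_full: str, pr_number: int,
                     prompt: str, ts: int) -> list[str]:
    """Build the argv an agent uses to spawn the next agent in the chain."""
    repo_short = repo_full.split("/")[-1]
    return [
        "openclaw", "cron", "add",
        "--name", f"{role}-{repo_short}-{pr_number}-{ts}",
        "--at", "1s",                 # run (almost) immediately
        "--message", prompt,
        "--delete-after-run",         # one-shot job
        "--session", "isolated",      # fresh session: no self-review
        "--no-deliver",
        "--timeout-seconds", "300",
    ]

# The spawning agent would then run something like:
#     subprocess.run(chain_spawn_argv(...), capture_output=True, timeout=15)
# and STOP, leaving the rest to the spawned session.
```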
#### Safety Net

The notification poller runs a periodic scan (every 2 minutes) of all watched
repos for issues/PRs with the `bot` label that are unassigned. This catches
broken chains — if an agent crashes or times out without spawning the next
agent, the poller will eventually re-dispatch. A 30-minute cooldown prevents
duplicate dispatches during normal operation.
#### Why Labels + Assignments
@@ -263,26 +268,45 @@ A practical setup:
- **DM with agent** — Private conversation, sitreps, sensitive commands
- **Project-specific channels** — For coordination with external collaborators

### The Notification Poller + Dispatcher

Because the agent can't see Gitea webhooks in Mattermost (bot-to-bot visibility
issue), we built a Python script that both polls and dispatches. It polls the
Gitea notifications API every 15 seconds, triages each notification (checking
@-mentions and assignment), marks them as read, and spawns one isolated agent
session per actionable item via `openclaw cron add --session isolated`.

The poller also runs a secondary **label scan** every 2 minutes, checking all
watched repos for open issues/PRs with the `bot` label that are unassigned
(meaning they need work but no agent has claimed them yet). This catches cases
where the agent chain broke — an agent timed out or crashed without spawning the
next one.
Key design decisions:

- **The poller IS the dispatcher.** No flag files, no heartbeat dependency. The
  poller triages notifications and spawns agents directly.
- **Marks notifications as read immediately.** Prevents re-dispatch on the next
  poll cycle.
- **Assigns the agent account at dispatch time.** Before spawning the agent
  session, the poller assigns the bot user to the issue via API. This prevents
  race conditions — subsequent scans skip assigned issues.
- **Dispatched issues are tracked in a persistent JSON file.** Survives poller
  restarts. Entries auto-prune after 1 hour.
- **30-minute re-dispatch cooldown.** The poller won't re-dispatch for the same
  issue within 30 minutes, even if it appears unassigned again.
- **Concurrency cap.** The poller checks how many agents are currently running
  and defers dispatch if the cap is reached.
- **Stale agent reaper.** Kills agent sessions that have been running longer
  than 10 minutes (the `--timeout-seconds` flag isn't always enforced).
- **`bot` label + `merge-ready` skip.** The label scan skips issues that are
  already labeled `merge-ready` — those are in the human's court.
- **Zero dependencies.** Python stdlib only. Runs anywhere.
Response time: ~15-30 seconds from notification to agent starting work.
Full source code is available in
[OPENCLAW_TRICKS.md](OPENCLAW_TRICKS.md#gitea-integration--notification-polling).
## CI: Gitea Actions ## CI: Gitea Actions
@@ -371,42 +395,34 @@ Everything gets a production URL with automatic TLS via Traefik.
Putting it all together, the development lifecycle looks like this:

```
1.  Human labels issue with `bot` (or agent files issue)
2.  Poller detects `bot` label + unassigned → assigns agent → spawns worker
3.  Worker agent clones repo, writes code, runs `docker build .`
4.  Worker creates PR "(closes #N)", labels `needs-review`
5.  Worker spawns reviewer agent → stops
6.  Reviewer agent reads diff + referenced issues → reviews
7a. Review PASS → reviewer rebases if needed → `docker build .`
    → labels `merge-ready` → assigns human → removes `bot`
7b. Review FAIL → reviewer labels `needs-rework`
    → spawns worker agent → back to step 3
8.  Human reviews, merges
9.  Gitea webhook fires → µPaaS deploys to production
10. Site/service is live
```

Steps 2-7 happen without any human involvement, driven by agent-to-agent
chaining. The human's role is reduced to: label the issue, review the final PR,
merge. Everything else is automated.

### Observability


@@ -173,46 +173,102 @@ The landing checklist (triggered automatically after every flight) updates
location, timezone, nearest airport, and lodging in the daily context file. It
also checks if any cron jobs have hardcoded timezones that need updating.

### Gitea Notification Delivery

There are two approaches for getting Gitea notifications to your agent,
depending on your network setup.

#### Option A: Direct Webhooks (VPS / Public Server)

If your OpenClaw instance runs on a VPS or other publicly reachable server, the
simplest approach is direct webhooks. Run Traefik (or any reverse proxy with
automatic TLS) on the same server and configure Gitea webhooks to POST directly
to OpenClaw's webhook endpoint. This is push-based and realtime — notifications
arrive instantly.

Setup: add a webhook on each Gitea repo (or use an organization-level webhook)
pointing to `https://your-openclaw-host/hooks/gitea`. OpenClaw handles the rest.
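For illustration, the webhook can also be registered via Gitea's API rather than the UI. A sketch of the payload builder; the event list shown here is an assumption, trim it to what you need:

```python
def gitea_webhook_payload(target_url: str, secret: str) -> dict:
    """Build the body for POST /api/v1/repos/{owner}/{repo}/hooks
    (or /api/v1/orgs/{org}/hooks for an organization-level webhook)."""
    return {
        "type": "gitea",
        "active": True,
        # Illustrative event selection; adjust to your workflow.
        "events": ["push", "pull_request", "issues", "issue_comment"],
        "config": {
            "url": target_url,          # e.g. the OpenClaw webhook endpoint
            "content_type": "json",
            "secret": secret,           # lets the receiver verify the sender
        },
    }
```

POST this with the usual `Authorization: token ...` header and Gitea delivers events from then on.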
#### Option B: Notification Poller + Dispatcher (Local Machine Behind NAT)
If your OpenClaw runs on a dedicated local machine behind NAT (like a home Mac
or Linux workstation), Gitea can't reach it directly. This is our setup —
OpenClaw runs on a Mac Studio on a home LAN.
The solution: a Python script that both polls and dispatches. It polls the Gitea
notifications API every 15 seconds, triages each notification (checking
@-mentions and assignments), marks them as read, and spawns one isolated agent
session per actionable item via `openclaw cron add --session isolated`.
The poller also runs a secondary **label scan** every 2 minutes, checking all
watched repos for open issues/PRs with the `bot` label that are unassigned. This
catches cases where the agent chain broke — an agent timed out or crashed
without spawning the next agent. It also picks up newly-labeled issues that
didn't trigger a notification.
Key design decisions:
- **The poller IS the dispatcher.** No flag files, no heartbeat dependency. The
  poller triages notifications and spawns agents directly.
- **Marks notifications as read immediately.** Prevents re-dispatch on the next
  poll cycle.
- **Assigns the bot user at dispatch time.** Before spawning the agent, the
  poller assigns the bot account to the issue via API. This prevents race
  conditions — subsequent scans skip assigned issues. The spawned agent doesn't
  need to claim ownership; it's already claimed.
- **Persistent dispatch tracking.** Dispatched issues are tracked in a JSON
file on disk (not just in memory), surviving poller restarts. Entries
auto-prune after 1 hour.
- **30-minute re-dispatch cooldown.** Safety net for broken agent chains. Normal
operation uses agent-to-agent chaining (each agent spawns the next), so the
poller only re-dispatches if the chain breaks.
- **Concurrency cap.** The poller checks how many agents are currently running
(`openclaw cron list`) and defers dispatch if the cap is reached.
- **Stale agent reaper.** Each scan cycle, kills agent sessions running longer
than 10 minutes. The `--timeout-seconds` flag isn't always enforced by
OpenClaw, so the poller handles cleanup itself.
- **`merge-ready` skip.** The label scan skips issues already labeled
`merge-ready` — those are in the human's court.
- **Template-based prompts.** The poller reads two workspace files (a dispatch
header with `{{variable}}` placeholders, and a workflow rules document),
concatenates them, substitutes variables, and passes the result as the
agent's `--message`. This keeps all instructions in version-controlled
workspace files with a single source of truth.
- **Zero dependencies.** Python stdlib only. Runs anywhere.
Response time: ~15-30s from notification to agent starting work.

Here's the full source:
```python
#!/usr/bin/env python3
"""
Gitea notification poller + dispatcher.

Two polling loops:
1. Notification-based: detects new @-mentions and assignments, dispatches
   agents for actionable notifications.
2. Label-based: periodically scans for issues/PRs with the 'bot' label
   that are unassigned (available for work). Catches broken agent chains
   and newly-labeled issues.

The poller assigns the bot user to the issue BEFORE spawning the agent,
preventing race conditions where multiple scans dispatch for the same issue.

Required env vars:
    GITEA_URL   - Gitea instance URL
    GITEA_TOKEN - Gitea API token

Optional env vars:
    POLL_DELAY            - Seconds between notification polls (default: 15)
    COOLDOWN              - Seconds between dispatch batches (default: 30)
    BOT_SCAN_INTERVAL     - Seconds between label scans (default: 120)
    MAX_CONCURRENT_AGENTS - Max simultaneous agents (default: 10)
    REAP_AGE_SECONDS      - Kill agents older than this (default: 600)
    OPENCLAW_BIN          - Path to openclaw binary
"""

import json
import os
import subprocess
import sys
import time
import urllib.request
import urllib.error

GITEA_URL = os.environ.get("GITEA_URL", "").rstrip("/")
GITEA_TOKEN = os.environ.get("GITEA_TOKEN", "")
POLL_DELAY = int(os.environ.get("POLL_DELAY", "15"))
COOLDOWN = int(os.environ.get("COOLDOWN", "30"))
BOT_SCAN_INTERVAL = int(os.environ.get("BOT_SCAN_INTERVAL", "120"))
MAX_CONCURRENT_AGENTS = int(os.environ.get("MAX_CONCURRENT_AGENTS", "10"))
REAP_AGE_SECONDS = int(os.environ.get("REAP_AGE_SECONDS", "600"))
REDISPATCH_COOLDOWN = 1800  # 30 min safety net for broken agent chains
OPENCLAW_BIN = os.environ.get("OPENCLAW_BIN", "openclaw")
BOT_USER = os.environ.get("BOT_USER", "clawbot")
WORKSPACE = os.path.expanduser("~/.openclaw/workspace")
DISPATCH_HEADER = os.path.join(
    WORKSPACE, "taskprompts", "how-to-handle-gitea-notifications.md"
)
WORKFLOW_DOC = os.path.join(
    WORKSPACE, "taskprompts", "how-to-work-on-a-gitea-issue-or-pr.md"
)
DISPATCH_STATE_PATH = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), ".dispatch-state.json"
)

# Repos to watch for bot-labeled issues
WATCHED_REPOS = [
    # "org/repo1",
    # "org/repo2",
]

# Dispatch tracking (persisted to disk)
dispatched_issues: dict[str, float] = {}


def _load_dispatch_state() -> dict[str, float]:
    try:
        with open(DISPATCH_STATE_PATH) as f:
            state = json.load(f)
        now = time.time()
        return {k: v for k, v in state.items() if now - v < 3600}
    except (FileNotFoundError, json.JSONDecodeError):
        return {}


def _save_dispatch_state():
    try:
        with open(DISPATCH_STATE_PATH, "w") as f:
            json.dump(dispatched_issues, f)
    except OSError as e:
        print(f"WARN: Could not save dispatch state: {e}", file=sys.stderr)


def gitea_api(method, path, data=None):
    url = f"{GITEA_URL}/api/v1{path}"
    body = json.dumps(data).encode() if data else None
    headers = {"Authorization": f"token {GITEA_TOKEN}"}
    if body:
        headers["Content-Type"] = "application/json"
    req = urllib.request.Request(url, headers=headers, method=method, data=body)
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            raw = resp.read()
            return json.loads(raw) if raw else None
    except Exception as e:
        print(f"WARN: {method} {path}: {e}", file=sys.stderr, flush=True)
        return None


def load_template() -> str:
    """Load dispatch header + workflow doc, concatenated."""
    parts = []
    for path in [DISPATCH_HEADER, WORKFLOW_DOC]:
        try:
            with open(path) as f:
                parts.append(f.read())
        except FileNotFoundError:
            print(f"ERROR: File not found: {path}", file=sys.stderr)
            sys.exit(1)
    return "\n\n---\n\n".join(parts)


def render_template(template, repo_full, issue_number, title,
                    subject_type, reason):
    return (
        template
        .replace("{{repo_full}}", repo_full)
        .replace("{{issue_number}}", str(issue_number))
        .replace("{{title}}", title)
        .replace("{{subject_type}}", subject_type)
        .replace("{{reason}}", reason)
        .replace("{{gitea_url}}", GITEA_URL)
        .replace("{{gitea_token}}", GITEA_TOKEN)
        .replace("{{openclaw_bin}}", OPENCLAW_BIN)
        .replace("{{bot_user}}", BOT_USER)
        # Add your own variables here (e.g. git_channel)
    )


def count_running_agents() -> int:
    try:
        result = subprocess.run(
            [OPENCLAW_BIN, "cron", "list"],
            capture_output=True, text=True, timeout=10,
        )
        return sum(1 for line in result.stdout.splitlines()
                   if "running" in line or "idle" in line)
    except Exception:
        return 0


def spawn_agent(template, repo_full, issue_number, title,
                subject_type, reason):
    dispatch_key = f"{repo_full}#{issue_number}"
    last = dispatched_issues.get(dispatch_key)
    if last and (time.time() - last) < REDISPATCH_COOLDOWN:
        return

    if count_running_agents() >= MAX_CONCURRENT_AGENTS:
        print(f"  → Concurrency limit reached, deferring {dispatch_key}",
              flush=True)
        return

    dispatched_issues[dispatch_key] = time.time()

    # Assign bot user immediately to prevent races
    gitea_api("PATCH", f"/repos/{repo_full}/issues/{issue_number}",
              {"assignees": [BOT_USER]})

    repo_short = repo_full.split("/")[-1]
    job_name = f"gitea-{repo_short}-{issue_number}-{int(time.time())}"
    msg = render_template(template, repo_full, issue_number, title,
                          subject_type, reason)
    try:
        result = subprocess.run(
            [OPENCLAW_BIN, "cron", "add",
             "--name", job_name, "--at", "1s",
             "--message", msg, "--delete-after-run",
             "--session", "isolated", "--no-deliver",
             "--thinking", "low", "--timeout-seconds", "300"],
            capture_output=True, text=True, timeout=15,
        )
        if result.returncode == 0:
            _save_dispatch_state()
        else:
            dispatched_issues.pop(dispatch_key, None)
    except Exception as e:
        print(f"Spawn error: {e}", file=sys.stderr, flush=True)
        dispatched_issues.pop(dispatch_key, None)


def is_actionable(notif):
    """Check if a notification warrants spawning an agent."""
    subject = notif.get("subject", {})
    repo = notif.get("repository", {})
    repo_full = repo.get("full_name", "")
    url = subject.get("url", "")
    number = url.rstrip("/").split("/")[-1] if url else ""
    if not number or not number.isdigit():
        return False, "no issue number", None
    issue = gitea_api("GET", f"/repos/{repo_full}/issues/{number}")
    if not issue:
        return False, "couldn't fetch issue", number

    # Check for @-mentions in the latest comment
    comments = gitea_api(
        "GET", f"/repos/{repo_full}/issues/{number}/comments"
    )
    if comments:
        last = comments[-1]
        if last.get("user", {}).get("login") == BOT_USER:
            return False, "own comment is latest", number
        if f"@{BOT_USER}" in (last.get("body") or ""):
            return True, "@-mentioned in comment", number

    # Check for @-mention in issue body
    body = issue.get("body", "") or ""
    if f"@{BOT_USER}" in body:
        return True, "@-mentioned in body", number

    return False, "not mentioned", number


def scan_bot_labeled(template):
    """Scan for issues/PRs with 'bot' label that are unassigned."""
    for repo_full in WATCHED_REPOS:
        for issue_type in ["issues", "pulls"]:
            items = gitea_api(
                "GET",
                f"/repos/{repo_full}/issues?state=open&type={issue_type}"
                f"&labels=bot&sort=updated&limit=10",
            ) or []
            for item in items:
                number = str(item["number"])
                dispatch_key = f"{repo_full}#{number}"
                last = dispatched_issues.get(dispatch_key)
                if last and (time.time() - last) < REDISPATCH_COOLDOWN:
                    continue
                assignees = [
                    a.get("login", "") for a in item.get("assignees") or []
                ]
                if BOT_USER in assignees:
                    continue
                labels = [
                    l.get("name", "") for l in item.get("labels") or []
                ]
                if "merge-ready" in labels:
                    continue
                spawn_agent(
                    template, repo_full, number,
                    item.get("title", "")[:60],
                    "pull" if issue_type == "pulls" else "issue",
                    "bot label, unassigned",
                )


def main():
    global dispatched_issues
    dispatched_issues = _load_dispatch_state()

    if not GITEA_URL or not GITEA_TOKEN:
        print("ERROR: GITEA_URL and GITEA_TOKEN required", file=sys.stderr)
        sys.exit(1)

    template = load_template()
    print(f"Poller started (poll={POLL_DELAY}s, cooldown={COOLDOWN}s, "
          f"bot_scan={BOT_SCAN_INTERVAL}s, repos={len(WATCHED_REPOS)})",
          flush=True)

    seen_ids = set(
        n["id"] for n in
        (gitea_api("GET", "/notifications?status-types=unread") or [])
    )
    last_dispatch = 0
    last_bot_scan = 0

    while True:
        time.sleep(POLL_DELAY)
        now = time.time()

        # --- Notification polling ---
        notifs = gitea_api("GET", "/notifications?status-types=unread") or []
        current_ids = {n["id"] for n in notifs}
        new_ids = current_ids - seen_ids
        if new_ids and now - last_dispatch >= COOLDOWN:
            for n in [n for n in notifs if n["id"] in new_ids]:
                nid = n.get("id")
                if nid:
                    gitea_api("PATCH", f"/notifications/threads/{nid}")
                is_act, reason, num = is_actionable(n)
                if is_act:
                    repo = n["repository"]["full_name"]
                    title = n["subject"]["title"][:60]
                    stype = n["subject"].get("type", "").lower()
                    spawn_agent(template, repo, num, title, stype, reason)
            last_dispatch = now
        seen_ids = current_ids

        # --- Bot label scan (less frequent) ---
        if now - last_bot_scan >= BOT_SCAN_INTERVAL:
            scan_bot_labeled(template)
            last_bot_scan = now


if __name__ == "__main__":
    main()
```
@@ -368,13 +585,15 @@ This applies to everything: project rules ("no mocks in tests"), workflow
preferences ("fewer PRs, don't over-split"), corrections, new policies. preferences ("fewer PRs, don't over-split"), corrections, new policies.
Immediate write to the daily file, and to MEMORY.md if it's a standing rule. Immediate write to the daily file, and to MEMORY.md if it's a standing rule.
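As a sketch, the write-through behavior can be a small helper that appends as soon as the rule is stated, never batching. The `memory/` layout and filenames here are assumptions for illustration, not this repo's actual paths:

```python
from datetime import date
from pathlib import Path

MEMORY_DIR = Path("memory")  # assumed layout, adjust to your workspace


def record_rule(text: str, standing: bool = False) -> None:
    """Append a stated rule/preference immediately, never batch it."""
    MEMORY_DIR.mkdir(exist_ok=True)
    # Daily file: one markdown file per day
    daily = MEMORY_DIR / f"{date.today().isoformat()}.md"
    with daily.open("a") as f:
        f.write(f"- {text}\n")
    if standing:
        # Standing rules also go to the long-lived MEMORY.md
        with (MEMORY_DIR / "MEMORY.md").open("a") as f:
            f.write(f"- {text}\n")
```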
### Sensitive Output Routing

A lesson learned the hard way: **the audience determines what you can say, not
who asked.** If the human asks for a medication status report in a group
channel, the agent can't just dump it there — other people can read it. The
rule: if the output would contain sensitive information (PII, secrets,
credentials, API keys, operational details like flight numbers, locations,
travel plans, medical info, etc.) and the channel isn't private, redirect to DM
and reply in-channel with "sent privately."
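A minimal sketch of that redirect logic, with a placeholder keyword check standing in for real sensitive-content detection (the actual channel and DM send calls are omitted):

```python
# Placeholder marker list; real detection would be far more thorough
SENSITIVE_MARKERS = (
    "medication", "flight", "passport", "api key", "password", "credential",
)


def is_sensitive(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)


def route_reply(text: str, channel_is_private: bool):
    """Return (channel_message, dm_message) per the audience rule."""
    if channel_is_private or not is_sensitive(text):
        return text, None
    # Shared channel + sensitive content: redirect the real answer to DM
    return "sent privately", text
```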
This is enforced at multiple levels:
@@ -405,7 +624,7 @@ The heartbeat handles:
- Periodic memory maintenance

State tracking in `memory/heartbeat-state.json` prevents redundant checks (e.g.,
don't re-check notifications if you checked 10 minutes ago).
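A sketch of that throttle, assuming a `lastChecks` map of source name to epoch seconds in the state file (helper names are illustrative):

```python
import json
import time
from pathlib import Path

STATE_FILE = Path("memory/heartbeat-state.json")


def should_check(source, min_interval=600, now=None):
    """True if `source` hasn't been checked within min_interval seconds."""
    now = time.time() if now is None else now
    try:
        state = json.loads(STATE_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return True  # no state yet: check everything
    last = (state.get("lastChecks") or {}).get(source)
    return last is None or now - last >= min_interval


def mark_checked(source, now=None):
    """Record that `source` was just checked."""
    now = time.time() if now is None else now
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    state.setdefault("lastChecks", {})[source] = int(now)
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))
```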
The key output rule: heartbeats should either be `HEARTBEAT_OK` (nothing to do)
or a direct alert. Work narration goes to a designated status channel, never to
@@ -665,25 +884,27 @@ From REPO_POLICIES.md and our operational experience:
#### The PR Pipeline
Our agent follows a strict PR lifecycle using agent-to-agent chaining. Each step
is handled by a separate, isolated agent session — the agent that writes code
never reviews it:

```markdown
## PR pipeline (every PR, no exceptions)

Worker agent → docker build . → push → label needs-review → spawn reviewer
Reviewer agent → review diff → PASS: docker build . → label merge-ready
                             → FAIL: label needs-rework → spawn worker
Repeat until reviewer approves.

- docker build . is the ONLY authoritative check (runs make check inside)
- Never weaken tests/linters. Fix the code.
- Pre-existing failures are YOUR problem. Fix them as part of your PR.
```

The agent chain doesn't just create a PR and hand it off — it drives the PR
through review, rework, and verification until it's genuinely ready. A PR
assigned to the human means: build passes, code reviewed by a separate agent,
review feedback addressed, rebased. Anything less is still in the agent chain.
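Since the handoff is driven entirely by labels, the dispatcher's triage reduces to a pure function of PR state. A minimal sketch (action names are illustrative, not an actual dispatcher API):

```python
def next_action(labels: set) -> str:
    """Map a PR's labels to the next step in the worker/reviewer chain."""
    if "merge-ready" in labels:
        return "assign-human"    # chain done: hand the PR to the human
    if "needs-rework" in labels:
        return "spawn-worker"    # reviewer failed it: back to a worker agent
    if "needs-review" in labels:
        return "spawn-reviewer"  # worker pushed: isolated review pass
    return "spawn-worker"        # fresh PR: start the chain
```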
#### New Repo Bootstrap
@@ -1417,7 +1638,8 @@ stay quiet.
## Inbox Check (PRIORITY)

(check whatever notification sources apply to your setup — e.g. Gitea
notifications, emails, issue trackers)
## Flight Prep Blocks (daily) ## Flight Prep Blocks (daily)
@@ -1451,10 +1673,9 @@ Never send internal thinking or status narration to user's DM. Output should be:
```json
{
  "lastChecks": {
    "gitea": 1703280000,
    "calendar": 1703260800,
    "weather": null
  },
  "lastWeeklyDocsReview": "2026-02-24"
}
```
@@ -1535,12 +1756,12 @@ For complex coding tasks, spawn isolated sub-agents.
### Sub-Agent PR Quality Gate (MANDATORY)

- `docker build .` must pass. This is identical to CI and the only
  authoritative check. No exceptions.
- Pre-existing failures are YOUR problem. Fix them as part of your PR.
- NEVER modify linter config to make checks pass. Fix the code.
- Rebase before and after committing
- Never self-review — each agent spawns a separate agent for review
```
---
@@ -1623,21 +1844,24 @@ Never lose a rule or preference your human states:
---
## Sensitive Output Routing — Audience-Aware Responses

A critical security pattern: **the audience determines what you can say, not who
asked.** If your human asks for a sitrep (or any sensitive info) in a group
channel, you can't just dump it there — other people can read it.

### AGENTS.md / checklist prompt:

```markdown
## Sensitive Output Routing (CRITICAL)

- NEVER output sensitive information in any non-private channel, even if your
  human asks for it
- This includes: PII, secrets, credentials, API keys, and sensitive operational
  information (flight numbers/times/dates, locations, travel plans, medical
  info, financial details, etc.)
- If a request would produce any of the above in a shared channel: send the
  response via DM instead, and reply in-channel with "sent privately"
- The rule is: the audience determines what you can say, not who asked
- This applies to: group chats, public issue trackers, shared Mattermost
  channels, Discord servers — anywhere that isn't a 1:1 DM
```
@@ -1646,10 +1870,10 @@ channel, you can't just dump it there — other people can read it.
### Why this matters:

This is a real failure mode. If someone asks "sitrep" in a group channel and you
respond with medication names, partner details, travel dates, hotel names, or
API credentials — you just leaked all of that to everyone in the channel. The
human asking is authorized to see it; the channel audience is not. Always check
WHERE you're responding, not just WHO asked.
---
