- Fan out all targets for an event in parallel goroutines (fire-and-forget)
- Add per-target circuit breaker for retry targets (closed/open/half-open);
  see the sketch after this list
- Circuit breaker trips after 5 consecutive failures, 30s cooldown
- Open circuit skips delivery and reschedules after cooldown
- Half-open allows one probe delivery to test recovery
- HTTP/database/log targets unaffected (no circuit breaker)
- Recovery path also fans out in parallel
- Update README with parallel delivery and circuit breaker docs
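
Rough sketch of the per-target breaker state machine (the thresholds match
the bullets above; type and method names are illustrative, not the actual
code):

```go
package delivery

import (
	"sync"
	"time"
)

type breakerState int

const (
	breakerClosed breakerState = iota
	breakerOpen
	breakerHalfOpen
)

const (
	failureThreshold = 5                // trips after 5 consecutive failures
	cooldown         = 30 * time.Second // open -> half-open after 30s
)

type targetBreaker struct {
	mu       sync.Mutex
	state    breakerState
	failures int
	openedAt time.Time
}

// allow reports whether a delivery attempt may proceed right now.
func (b *targetBreaker) allow(now time.Time) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	switch b.state {
	case breakerOpen:
		if now.Sub(b.openedAt) < cooldown {
			return false // still cooling down: skip and reschedule
		}
		b.state = breakerHalfOpen // allow a single probe delivery
		return true
	case breakerHalfOpen:
		return false // probe already in flight
	default:
		return true // closed
	}
}

// record feeds a delivery result back into the breaker.
func (b *targetBreaker) record(success bool, now time.Time) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if success {
		b.state, b.failures = breakerClosed, 0
		return
	}
	b.failures++
	if b.state == breakerHalfOpen || b.failures >= failureThreshold {
		b.state, b.openedAt = breakerOpen, now
	}
}
```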
The webhook handler now builds DeliveryTask structs carrying all target
config and event data inline (for bodies ≤16KB) and sends them through
the delivery channel. In the happy path, the engine delivers without
reading from any database — it only writes to record delivery results.
For larger bodies (>16KB), Body is nil and the engine fetches it from the
per-webhook database on demand. Retry timers also carry the full
DeliveryTask, so retries avoid unnecessary DB reads.
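
Rough shape of the task struct (Body *string and the ≤16KB rule come from
this change; the other field names are assumptions):

```go
package delivery

// Sketch only: field names beyond Body are assumptions, not the actual struct.
type DeliveryTask struct {
	DeliveryID   string
	WebhookID    string
	TargetID     string
	TargetConfig map[string]string   // full target config carried inline
	Method       string
	Headers      map[string][]string
	ContentType  string
	// Body is inlined for payloads <=16KB; nil means the engine must fetch
	// the body from the per-webhook database on demand.
	Body *string
}
```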
The database is used for crash recovery only: on startup the engine scans
for interrupted pending/retrying deliveries and re-queues them.
Implements owner feedback from issue #15:
> the message in the <=16KB case should have everything it needs to do
> its delivery. it shouldn't touch the db until it has a success or
> failure to record.
Replace the polling-based delivery engine with a fully event-driven
architecture using Go channels and goroutines:
- Webhook handler notifies engine via buffered channel after creating
delivery records, with inline event data for payloads < 16KB
- Large payloads (>= 16KB) use pointer semantics (Body *string = nil)
and are fetched from DB on demand, keeping channel memory bounded
- Failed retry-target deliveries schedule Go timers with exponential
backoff; timers fire into a separate retry channel when ready
- On startup, engine scans DB once to recover interrupted deliveries
  (pending ones processed immediately, retrying ones get timers for the
  remaining backoff)
- DB stores delivery status for crash recovery only, not for
inter-component communication during normal operation
- delivery.Notifier interface decouples handlers from engine; fx wires
  *Engine as Notifier (see the sketch below)
No more periodic polling. No more wasted cycles when idle.
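
Illustrative sketch of the channel wiring (delivery.Notifier and the
buffered/retry channels come from this change; the method names, buffer
sizes, and backoff formula are assumptions, and DeliveryTask is the type
sketched earlier):

```go
package delivery

import "time"

// Notifier decouples handlers from the engine (interface name from this
// change; the Notify method name is an assumption).
type Notifier interface {
	Notify(task DeliveryTask)
}

type Engine struct {
	tasks   chan DeliveryTask // buffered: handlers never block on delivery
	retries chan DeliveryTask // retry timers fire into this channel
}

func NewEngine() *Engine {
	return &Engine{
		tasks:   make(chan DeliveryTask, 1024),
		retries: make(chan DeliveryTask, 1024),
	}
}

func (e *Engine) Notify(task DeliveryTask) { e.tasks <- task }

// run blocks on the channels instead of polling; it sleeps while idle.
func (e *Engine) run() {
	for {
		select {
		case t := <-e.tasks:
			e.deliver(t)
		case t := <-e.retries:
			e.deliver(t)
		}
	}
}

// scheduleRetry re-queues a failed retry-target delivery with exponential
// backoff; the timer fires into the retry channel when ready.
func (e *Engine) scheduleRetry(t DeliveryTask, attempt int) {
	backoff := time.Duration(1<<attempt) * time.Second
	time.AfterFunc(backoff, func() { e.retries <- t })
}

func (e *Engine) deliver(t DeliveryTask) { /* attempt delivery, record result */ }
```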
Split data storage into main application DB (config only) and
per-webhook event databases (one SQLite file per webhook).
Architecture changes:
- New WebhookDBManager component manages per-webhook DB lifecycle
  (create, open, cache, delete) with lazy connection pooling via
  sync.Map (see the sketch after this list)
- Main DB (DBURL) stores only config: Users, Webhooks, Entrypoints,
Targets, APIKeys
- Per-webhook DBs (DATA_DIR) store Events, Deliveries, DeliveryResults
in files named events-{webhook_uuid}.db
- New DATA_DIR env var (default: ./data in dev, /data/events in prod)
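
Sketch of the lazy open/cache path (WebhookDBManager, sync.Map, and the
events-{webhook_uuid}.db naming come from this change; method names and the
GORM calls are illustrative):

```go
package database

import (
	"fmt"
	"path/filepath"
	"sync"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

// WebhookDBManager caches one SQLite handle per webhook, opened lazily.
type WebhookDBManager struct {
	dataDir string
	dbs     sync.Map // webhook UUID -> *gorm.DB
}

func (m *WebhookDBManager) path(webhookID string) string {
	return filepath.Join(m.dataDir, fmt.Sprintf("events-%s.db", webhookID))
}

// Get returns the cached connection for a webhook, opening (and creating)
// the per-webhook database file on first use.
func (m *WebhookDBManager) Get(webhookID string) (*gorm.DB, error) {
	if db, ok := m.dbs.Load(webhookID); ok {
		return db.(*gorm.DB), nil
	}
	db, err := gorm.Open(sqlite.Open(m.path(webhookID)), &gorm.Config{})
	if err != nil {
		return nil, err
	}
	actual, _ := m.dbs.LoadOrStore(webhookID, db)
	return actual.(*gorm.DB), nil
}
```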
Behavioral changes:
- Webhook creation creates per-webhook DB file
- Webhook deletion hard-deletes per-webhook DB file (config soft-deleted)
- Event ingestion writes to per-webhook DB, not main DB
- Delivery engine polls all per-webhook DBs for pending deliveries
- Database target type marks delivery as immediately successful (events
  are already in the dedicated per-webhook DB; see the sketch below)
- Event log UI reads from per-webhook DBs with targets from main DB
- Existing webhooks without DB files get them created lazily
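
Sketch of the database-target short-circuit (placeholder types and names,
not the actual code):

```go
package delivery

type Target struct{ Type string }
type DeliveryResult struct{ Success bool }

func deliverToTarget(t Target) DeliveryResult {
	if t.Type == "database" {
		// The event already lives in the dedicated per-webhook DB,
		// so there is nothing to copy: report success immediately.
		return DeliveryResult{Success: true}
	}
	// HTTP and log targets perform the actual delivery work here.
	return DeliveryResult{}
}
```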
Removed:
- ArchivedEvent model (was a half-measure, replaced by per-webhook DBs)
- Event/Delivery/DeliveryResult removed from main DB migrations
Added:
- Comprehensive tests for WebhookDBManager (create, delete, lazy
creation, delivery workflow, multiple webhooks, close all)
- Dockerfile creates /data/events directory
README updates:
- Per-webhook event databases documented as implemented (was Phase 2)
- DATA_DIR added to configuration table
- Docker instructions updated with data volume mount
- Data model diagram updated
- TODO updated (database separation moved to completed)
Closes #15
The "database" target type now writes events to a separate
archived_events table instead of just marking the delivery as done.
This table persists independently of internal event retention/pruning,
allowing the data to be consumed by external systems or preserved
indefinitely.
New ArchivedEvent model copies the full event payload (method, headers,
body, content_type) along with webhook/entrypoint/event/target IDs.
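
Rough shape of the model (fields follow this message; types and GORM tags
are assumptions):

```go
package models

import "time"

// ArchivedEvent copies the full event payload for the "database" target type.
type ArchivedEvent struct {
	ID           uint `gorm:"primaryKey"`
	WebhookID    string
	EntrypointID string
	EventID      string
	TargetID     string
	Method       string
	Headers      string // serialized header map
	Body         string
	ContentType  string
	CreatedAt    time.Time
}
```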
Store the *database.Database wrapper instead of calling .DB() eagerly
at construction time. The GORM *gorm.DB is only available after the
database's OnStart hook runs, but the engine constructor runs during
fx resolution (before OnStart). Accessing .DB() lazily via the wrapper
avoids the nil pointer panic.
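
Minimal sketch of the fix (the *database.Database wrapper and .DB() are from
this change; the surrounding engine code is illustrative):

```go
package delivery

import "gorm.io/gorm"

// Database stands in for the project's *database.Database wrapper, whose
// gorm handle only exists after its fx OnStart hook has run.
type Database struct{ gormDB *gorm.DB }

func (d *Database) DB() *gorm.DB { return d.gormDB }

type Engine struct {
	db *Database // store the wrapper, not the *gorm.DB itself
}

// NewEngine runs during fx resolution, before OnStart, so it must not call
// db.DB() here -- the handle would still be nil at this point.
func NewEngine(db *Database) *Engine {
	return &Engine{db: db}
}

func (e *Engine) recordResult() {
	gdb := e.db.DB() // resolved lazily, after OnStart has populated the handle
	_ = gdb
}
```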