Refactor Dockerfile to use a separate lint stage with a pinned
golangci-lint v2.11.3 Docker image instead of installing
golangci-lint via curl in the builder stage. This follows the
pattern used by sneak/pixa.
Changes:
- Dockerfile: separate lint stage using golangci/golangci-lint:v2.11.3
(Debian-based, pinned by sha256), wired in via a COPY --from=lint
dependency so the lint stage runs as part of every build
- Bump Go from 1.24 to 1.26.1 (golang:1.26.1-bookworm, pinned)
- Bump golangci-lint from v1.64.8 to v2.11.3
- Migrate .golangci.yml from v1 to v2 format (same linters, format only)
- All Docker images pinned by sha256 digest
- Fix all lint issues from the v2 linter upgrade:
  - Add package comments to all packages
  - Add doc comments to all exported types, functions, and methods
  - Fix unchecked errors (errcheck)
  - Fix unused parameters (revive)
  - Fix gosec warnings (MaxBytesReader for form parsing)
  - Fix staticcheck suggestions (fmt.Fprintf instead of WriteString)
- Rename DeliveryTask to Task to avoid stutter (delivery.Task)
- Rename shadowed builtin 'max' parameter
- Update README.md version requirements
## Summary
Adds a new `slack` target type that sends webhook events as formatted messages to any Slack-compatible incoming webhook URL (Slack, Mattermost, and other compatible services).
Closes #44
## What it does
When a webhook event is received, the Slack target:
1. Formats a human-readable message with event metadata (HTTP method, content type, timestamp, body size)
2. Pretty-prints the payload in a code block — JSON payloads get indented formatting, non-JSON payloads are shown as raw text
3. Truncates large payloads at 3500 characters to keep Slack messages reasonable
4. POSTs the message as a `{"text": "..."}` JSON payload to the configured webhook URL
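A minimal sketch of the formatting step described above. Parameter names, the message layout, and the constant name are illustrative; the exported `FormatSlackMessage` in `internal/delivery/engine.go` may take different arguments:

```go
// Sketch only: argument names and the message layout are illustrative.
package delivery

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// slackPayloadLimit mirrors the 3500-character truncation described above.
const slackPayloadLimit = 3500

// FormatSlackMessage builds the human-readable text posted to the webhook URL.
func FormatSlackMessage(method, contentType string, receivedAt time.Time, body []byte) string {
	var b bytes.Buffer
	b.WriteString("Webhook event received\n")
	fmt.Fprintf(&b, "Method: %s | Content-Type: %s | Received: %s | Size: %d bytes\n",
		method, contentType, receivedAt.Format(time.RFC3339), len(body))

	// Pretty-print JSON payloads; show everything else as raw text.
	payload := string(body)
	if json.Valid(body) {
		var indented bytes.Buffer
		if err := json.Indent(&indented, body, "", "  "); err == nil {
			payload = indented.String()
		}
	}
	if len(payload) > slackPayloadLimit {
		payload = payload[:slackPayloadLimit] + "\n... (truncated)"
	}

	fence := strings.Repeat("`", 3) // Slack/Mattermost code-block marker
	fmt.Fprintf(&b, "%s\n%s\n%s", fence, payload, fence)
	return b.String()
}
```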
## Changes
- **`internal/database/model_target.go`** — Add `TargetTypeSlack` constant
- **`internal/delivery/engine.go`** — Add `SlackTargetConfig` struct, `deliverSlack` method, `FormatSlackMessage` function (exported), `parseSlackConfig` helper. Route slack targets in `processDelivery` switch.
- **`internal/handlers/source_management.go`** — Handle `slack` type in `HandleTargetCreate`, building `webhook_url` config from the URL form field
- **`templates/source_detail.html`** — Add "Slack" option to target type dropdown with URL field and helper text
- **`README.md`** — Document the new target type, update roadmap
## Tests
- `TestParseSlackConfig_Valid` / `_Empty` / `_MissingWebhookURL` — Config parsing
- `TestFormatSlackMessage_JSONBody` / `_NonJSONBody` / `_EmptyBody` / `_LargeJSONTruncated` — Message formatting
- `TestDeliverSlack_Success` / `_Failure` / `_InvalidConfig` — End-to-end delivery
- `TestProcessDelivery_RoutesToSlack` — Routing from processDelivery switch
All existing tests continue to pass. `docker build .` (which runs `make check`) passes clean.
Co-authored-by: user <user@Mac.lan>
Reviewed-on: #47
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
- README.md: remove 'in development mode' from admin user creation
description (admin user creation is unconditional)
- internal/delivery/engine.go: remove 'and retry' from HTTPTargetConfig
comment (retry was merged into http target type)
- internal/delivery/engine_test.go: remove '/retry' from
newHTTPTargetConfig comment for consistency
Replace unbounded goroutine-per-delivery fan-out with a fixed-size
worker pool (10 workers). Channels serve as bounded queues (10,000
buffer). Workers are the only goroutines doing HTTP delivery.
When the retry channel overflows, timers are dropped instead of re-armed.
The delivery stays in 'retrying' status in the DB and a periodic sweep
(every 60s) recovers orphaned retries. The database is the durable
fallback — same path used on startup recovery.
Addresses owner feedback on circuit breaker recovery goroutine flood.
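A rough sketch of the pool shape under these constraints. The names (`Engine`, `DeliveryTask`, `deliver`, `scheduleRetry`) are assumptions, not the engine's actual API:

```go
// Sketch of the fixed-size pool; all names here are assumed, not the real API.
package delivery

import "time"

const (
	numWorkers = 10    // fixed worker count
	queueSize  = 10000 // bounded buffer per channel
)

// DeliveryTask carries everything a worker needs (fields omitted here).
type DeliveryTask struct{}

// Engine owns the bounded queues and the worker pool.
type Engine struct {
	deliveries chan DeliveryTask // new deliveries from the webhook handler
	retries    chan DeliveryTask // retry timers fire into this channel
}

func NewEngine() *Engine {
	return &Engine{
		deliveries: make(chan DeliveryTask, queueSize),
		retries:    make(chan DeliveryTask, queueSize),
	}
}

// Start launches the workers; they are the only goroutines doing HTTP delivery.
func (e *Engine) Start() {
	for i := 0; i < numWorkers; i++ {
		go e.worker()
	}
}

func (e *Engine) worker() {
	for {
		select {
		case t := <-e.deliveries:
			e.deliver(t)
		case t := <-e.retries:
			e.deliver(t)
		}
	}
}

// scheduleRetry arms a timer; if the retry channel is full when it fires,
// the timer is dropped, the row stays 'retrying' in the DB, and the 60s
// sweep re-queues it.
func (e *Engine) scheduleRetry(t DeliveryTask, after time.Duration) {
	time.AfterFunc(after, func() {
		select {
		case e.retries <- t:
		default:
			// queue full: drop instead of re-arming; the DB is the durable fallback
		}
	})
}

func (e *Engine) deliver(DeliveryTask) { /* HTTP POST and result recording omitted */ }
```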
- Fan out all targets for an event in parallel goroutines (fire-and-forget)
- Add per-target circuit breaker for retry targets (closed/open/half-open)
- Circuit breaker trips after 5 consecutive failures, 30s cooldown
- Open circuit skips delivery and reschedules after cooldown
- Half-open allows one probe delivery to test recovery
- HTTP/database/log targets unaffected (no circuit breaker)
- Recovery path also fans out in parallel
- Update README with parallel delivery and circuit breaker docs
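A sketch of what such a per-target breaker could look like. The state names follow the description above, but the struct, locking, and method names are assumptions rather than the actual implementation:

```go
// Sketch of a per-target circuit breaker; names and locking are assumed.
package delivery

import (
	"sync"
	"time"
)

type breakerState int

const (
	breakerClosed   breakerState = iota // deliveries flow normally
	breakerOpen                         // skip delivery, reschedule after cooldown
	breakerHalfOpen                     // a single probe delivery is in flight
)

const (
	failureThreshold = 5                // consecutive failures before tripping
	cooldown         = 30 * time.Second // how long the circuit stays open
)

type circuitBreaker struct {
	mu       sync.Mutex
	state    breakerState
	failures int
	openedAt time.Time
}

// allow reports whether a delivery attempt may proceed right now.
func (cb *circuitBreaker) allow() bool {
	cb.mu.Lock()
	defer cb.mu.Unlock()
	switch cb.state {
	case breakerClosed:
		return true
	case breakerOpen:
		if time.Since(cb.openedAt) >= cooldown {
			cb.state = breakerHalfOpen // cooldown elapsed: grant one probe
			return true
		}
		return false
	default: // half-open: the probe is already in flight
		return false
	}
}

// record updates the breaker after a delivery attempt.
func (cb *circuitBreaker) record(success bool) {
	cb.mu.Lock()
	defer cb.mu.Unlock()
	if success {
		cb.state, cb.failures = breakerClosed, 0
		return
	}
	cb.failures++
	if cb.state == breakerHalfOpen || cb.failures >= failureThreshold {
		cb.state, cb.openedAt = breakerOpen, time.Now()
	}
}
```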
The webhook handler now builds DeliveryTask structs carrying all target
config and event data inline (for bodies ≤16KB) and sends them through
the delivery channel. In the happy path, the engine delivers without
reading from any database — it only writes to record delivery results.
For large bodies (>16KB), Body is nil and the engine fetches it from the
per-webhook database on demand. Retry timers also carry the full
DeliveryTask, so retries avoid unnecessary DB reads.
The database is used for crash recovery only: on startup the engine scans
for interrupted pending/retrying deliveries and re-queues them.
Implements owner feedback from issue #15:
> the message in the <=16KB case should have everything it needs to do
> its delivery. it shouldn't touch the db until it has a success or
> failure to record.
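A condensed sketch of that contract, with assumed field and type names; the real task struct carries more target and event metadata:

```go
// Assumed shapes only: TargetConfig, field names, and the store interface
// stand in for the real engine types.
package delivery

// inlineBodyLimit: bodies at or under 16KB are copied into the task itself.
const inlineBodyLimit = 16 * 1024

// TargetConfig would carry the full target configuration (omitted).
type TargetConfig struct{}

// DeliveryTask is what the webhook handler sends down the delivery channel.
type DeliveryTask struct {
	WebhookID string
	EventID   string
	Target    TargetConfig // copied in by the handler, no DB read needed
	Body      *string      // nil for large bodies
}

// eventStore stands in for the per-webhook event database.
type eventStore interface {
	EventBody(webhookID, eventID string) (string, error)
}

// eventBody returns the payload, touching the database only when Body is nil.
func eventBody(store eventStore, t DeliveryTask) (string, error) {
	if t.Body != nil {
		return *t.Body, nil // happy path: no reads before recording the result
	}
	// Large body: fetch on demand from the per-webhook DB.
	return store.EventBody(t.WebhookID, t.EventID)
}
```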
Replace the polling-based delivery engine with a fully event-driven
architecture using Go channels and goroutines:
- Webhook handler notifies engine via buffered channel after creating
delivery records, with inline event data for payloads < 16KB
- Large payloads (>= 16KB) use pointer semantics (Body *string = nil)
and are fetched from DB on demand, keeping channel memory bounded
- Failed retry-target deliveries schedule Go timers with exponential
backoff; timers fire into a separate retry channel when ready
- On startup, engine scans DB once to recover interrupted deliveries
(pending processed immediately, retrying get timers for remaining
backoff)
- DB stores delivery status for crash recovery only, not for
inter-component communication during normal operation
- delivery.Notifier interface decouples handlers from engine; fx wires
*Engine as Notifier
No more periodic polling. No more wasted cycles when idle.
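A minimal sketch of the decoupling with uber-go/fx, assuming a single-method `Notifier` and a trimmed-down `Engine`; the real interface and wiring may differ:

```go
// Assumed shape of the decoupling; the real Notifier method set may differ.
package delivery

import "go.uber.org/fx"

// DeliveryTask and Engine are trimmed stand-ins for the real types.
type DeliveryTask struct{}

type Engine struct {
	deliveries chan DeliveryTask
}

func NewEngine() *Engine {
	return &Engine{deliveries: make(chan DeliveryTask, 1024)} // buffer size illustrative
}

// Notifier is the narrow interface handlers depend on instead of *Engine.
type Notifier interface {
	Notify(t DeliveryTask)
}

// Notify hands the task to the engine via the buffered channel.
func (e *Engine) Notify(t DeliveryTask) {
	e.deliveries <- t
}

// Module sketches the fx wiring: provide *Engine, exposed to consumers as Notifier.
var Module = fx.Options(
	fx.Provide(
		fx.Annotate(NewEngine, fx.As(new(Notifier))),
	),
)
```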
Split data storage into main application DB (config only) and
per-webhook event databases (one SQLite file per webhook).
Architecture changes:
- New WebhookDBManager component manages per-webhook DB lifecycle
(create, open, cache, delete) with lazy connection pooling via sync.Map
- Main DB (DBURL) stores only config: Users, Webhooks, Entrypoints,
Targets, APIKeys
- Per-webhook DBs (DATA_DIR) store Events, Deliveries, DeliveryResults
in files named events-{webhook_uuid}.db
- New DATA_DIR env var (default: ./data in dev, /data/events in prod)
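An illustrative get-or-open path for such a manager, assuming GORM with the standard sqlite driver and these method names; the real WebhookDBManager may use a different driver and API:

```go
// Illustrative only: method names, driver choice, and error handling are
// assumptions about the real WebhookDBManager.
package database

import (
	"fmt"
	"path/filepath"
	"sync"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

// WebhookDBManager caches one *gorm.DB per webhook, opening files lazily.
type WebhookDBManager struct {
	dataDir string
	dbs     sync.Map // webhook UUID -> *gorm.DB
}

func NewWebhookDBManager(dataDir string) *WebhookDBManager {
	return &WebhookDBManager{dataDir: dataDir}
}

// Get returns the cached connection for a webhook, opening the
// events-{uuid}.db file under DATA_DIR on first use.
func (m *WebhookDBManager) Get(webhookID string) (*gorm.DB, error) {
	if db, ok := m.dbs.Load(webhookID); ok {
		return db.(*gorm.DB), nil
	}
	path := filepath.Join(m.dataDir, fmt.Sprintf("events-%s.db", webhookID))
	db, err := gorm.Open(sqlite.Open(path), &gorm.Config{})
	if err != nil {
		return nil, fmt.Errorf("open per-webhook db: %w", err)
	}
	// LoadOrStore so concurrent callers converge on one connection;
	// the losing handle is closed.
	actual, loaded := m.dbs.LoadOrStore(webhookID, db)
	if loaded {
		if sqlDB, dbErr := db.DB(); dbErr == nil {
			_ = sqlDB.Close()
		}
	}
	return actual.(*gorm.DB), nil
}
```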
Behavioral changes:
- Webhook creation creates per-webhook DB file
- Webhook deletion hard-deletes per-webhook DB file (config soft-deleted)
- Event ingestion writes to per-webhook DB, not main DB
- Delivery engine polls all per-webhook DBs for pending deliveries
- Database target type marks delivery as immediately successful (events
are already in the dedicated per-webhook DB)
- Event log UI reads from per-webhook DBs with targets from main DB
- Existing webhooks without DB files get them created lazily
Removed:
- ArchivedEvent model (was a half-measure, replaced by per-webhook DBs)
- Event/Delivery/DeliveryResult removed from main DB migrations
Added:
- Comprehensive tests for WebhookDBManager (create, delete, lazy
creation, delivery workflow, multiple webhooks, close all)
- Dockerfile creates /data/events directory
README updates:
- Per-webhook event databases documented as implemented (was Phase 2)
- DATA_DIR added to configuration table
- Docker instructions updated with data volume mount
- Data model diagram updated
- TODO updated (database separation moved to completed)
Closes #15
The "database" target type now writes events to a separate
archived_events table instead of just marking the delivery as done.
This table persists independently of internal event retention/pruning,
allowing the data to be consumed by external systems or preserved
indefinitely.
New ArchivedEvent model copies the full event payload (method, headers,
body, content_type) along with webhook/entrypoint/event/target IDs.
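An approximation of what the model could look like as a GORM struct; field names and tags here are guesses based on the description, not the actual schema:

```go
// Guessed schema: field names and tags are illustrative, not the migration.
package database

import "time"

// ArchivedEvent copies the full event payload so it survives event pruning.
type ArchivedEvent struct {
	ID           string `gorm:"primaryKey"`
	WebhookID    string `gorm:"index"`
	EntrypointID string `gorm:"index"`
	EventID      string `gorm:"index"`
	TargetID     string `gorm:"index"`

	Method      string
	Headers     string
	Body        string
	ContentType string

	CreatedAt time.Time
}
```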
Store the *database.Database wrapper instead of calling .DB() eagerly
at construction time. The GORM *gorm.DB is only available after the
database's OnStart hook runs, but the engine constructor runs during
fx resolution (before OnStart). Accessing .DB() lazily via the wrapper
avoids the nil pointer panic.
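A small sketch of the fix, with stand-in types for the *database.Database wrapper and the engine; the point is that the constructor keeps the wrapper and .DB() is only called inside methods that run after OnStart:

```go
// Stand-in types: Database plays the role of *database.Database, whose
// underlying *gorm.DB only exists after the OnStart hook has run.
package delivery

import "gorm.io/gorm"

type Database struct {
	gdb *gorm.DB // nil during fx construction, set by OnStart
}

func (d *Database) DB() *gorm.DB { return d.gdb }

// Engine stores the wrapper, not the *gorm.DB, so the constructor can run
// during fx resolution without dereferencing a connection that isn't open yet.
type Engine struct {
	db *Database
}

func NewEngine(db *Database) *Engine {
	return &Engine{db: db} // note: no db.DB() call here
}

// recordResult resolves the *gorm.DB lazily, after OnStart has opened it.
func (e *Engine) recordResult() error {
	gdb := e.db.DB()
	_ = gdb // perform writes with gdb (omitted)
	return nil
}
```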