netwatch-server is an MIT-licensed Go HTTP backend by @sneak that receives telemetry reports from the NetWatch SPA and persists them as zstd-compressed JSONL files on disk.

Getting Started

# Build and run locally
make run

# Run tests, lint, and format check
make check

# Docker
docker build -t netwatch-server .
docker run -p 8080:8080 netwatch-server

Rationale

The NetWatch frontend collects latency measurements from the browser but has no way to persist or aggregate them. This backend provides a minimal POST /api/v1/reports endpoint that buffers incoming reports in memory and flushes them to compressed files on disk for later analysis.

Design

The server is structured as an fx-wired Go application under cmd/netwatch-server/. Internal packages in internal/ follow standard Go project layout:

  • config: Loads configuration from environment variables and config files via Viper.
  • handlers: HTTP request handlers for the API (health check, report ingestion).
  • reportbuf: In-memory buffer that accumulates JSONL report lines and flushes to zstd-compressed files when the buffer reaches 10 MiB or every 60 seconds.
  • server: Chi-based HTTP server with middleware wiring and route registration.
  • healthcheck, middleware, logger, globals: Supporting infrastructure.

Configuration

Variable   Default          Description
PORT       8080             HTTP listen port
DATA_DIR   ./data/reports   Directory for compressed reports
DEBUG      false            Enable debug logging

Report storage

Reports are written as reports-<timestamp>.jsonl.zst files in DATA_DIR. Each file contains one JSON object per line, compressed with zstd. Files are created with O_EXCL to prevent overwrites.

TODO

  • Add integration test that POSTs a report and verifies the compressed output
  • Add report decompression/query endpoint
  • Add metrics (Prometheus) for buffer size, flush count, report count
  • Add retention policy to prune old report files

License

MIT. See LICENSE.

Author

@sneak