Compare commits

1 commit on feature/da... (e3e1f1c2e2)

README.md (50 changed lines)
@@ -170,9 +170,8 @@ vaultik [--config <path>] store info
 * Config is located at `/etc/vaultik/config.yml` by default
 * Optional snapshot names argument to create specific snapshots (default: all)
 * `--cron`: Silent unless error (for crontab)
-* `--daemon`: Run continuously with filesystem monitoring and periodic scans (see [daemon mode](#daemon-mode))
+* `--daemon`: Run continuously with inotify monitoring and periodic scans
 * `--prune`: Delete old snapshots and orphaned blobs after backup
-* `--skip-errors`: Skip file read errors (log them loudly but continue)
 
 **snapshot list**: List all snapshots with their timestamps and sizes
 * `--json`: Output in JSON format
@@ -209,53 +208,6 @@ vaultik [--config <path>] store info
 
 ---
 
-## daemon mode
-
-When `--daemon` is passed to `snapshot create`, vaultik runs as a
-long-running process that continuously monitors configured directories for
-changes and creates backups automatically.
-
-```sh
-vaultik --config /etc/vaultik.yaml snapshot create --daemon
-```
-
-### how it works
-
-1. **Initial backup**: On startup, a full backup of all configured snapshots
-   runs immediately.
-2. **Filesystem watching**: All configured snapshot paths are monitored for
-   file changes using OS-native filesystem notifications (inotify on Linux,
-   FSEvents on macOS, ReadDirectoryChangesW on Windows) via the
-   [fsnotify](https://github.com/fsnotify/fsnotify) library.
-3. **Periodic backups**: At each `backup_interval` tick, if filesystem
-   changes have been detected and `min_time_between_run` has elapsed since
-   the last backup, a backup runs for only the affected snapshots.
-4. **Full scans**: At each `full_scan_interval` tick, a full backup of all
-   snapshots runs regardless of detected changes. This catches any changes
-   that filesystem notifications may have missed.
-5. **Graceful shutdown**: On SIGTERM or SIGINT, the daemon completes any
-   in-progress backup before exiting.
-
-### configuration
-
-These config fields control daemon behavior:
-
-```yaml
-backup_interval: 1h          # How often to check for changes and run backups
-full_scan_interval: 24h      # How often to do a complete scan of all paths
-min_time_between_run: 15m    # Minimum gap between consecutive backup runs
-```
-
-### notes
-
-* New directories created under watched paths are automatically picked up.
-* The daemon uses the same `CreateSnapshot` logic as one-shot mode — each
  backup run is a standard incremental snapshot.
-* The `--prune`, `--cron`, and `--skip-errors` flags work in daemon mode
  and apply to each individual backup run.
-
----
-
 ## architecture
 
 ### s3 bucket layout
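The removed README section above configures the daemon with duration strings (`1h`, `24h`, `15m`). The config loader itself is not part of this compare; the sketch below shows one way such fields can be decoded with gopkg.in/yaml.v3, which go.mod already requires. The `Duration` wrapper and `daemonSettings` struct are illustrative assumptions, not the repository's actual types.

```go
package config

import (
	"fmt"
	"time"

	"gopkg.in/yaml.v3"
)

// Duration lets YAML values like "1h" or "15m" be parsed with
// time.ParseDuration. Illustrative only; the real config package in this
// repository may decode durations differently.
type Duration time.Duration

// UnmarshalYAML implements yaml.Unmarshaler for Duration.
func (d *Duration) UnmarshalYAML(value *yaml.Node) error {
	var s string
	if err := value.Decode(&s); err != nil {
		return err
	}
	parsed, err := time.ParseDuration(s)
	if err != nil {
		return fmt.Errorf("invalid duration %q: %w", s, err)
	}
	*d = Duration(parsed)
	return nil
}

// daemonSettings mirrors the three fields shown in the README example.
type daemonSettings struct {
	BackupInterval    Duration `yaml:"backup_interval"`
	FullScanInterval  Duration `yaml:"full_scan_interval"`
	MinTimeBetweenRun Duration `yaml:"min_time_between_run"`
}
```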
TODO.md (28 changed lines)

@@ -106,21 +106,23 @@ User must have rclone configured separately (via `rclone config`).
 
 ---
 
-## Daemon Mode (Complete)
+## Post-1.0 (Daemon Mode)
 
-1. [x] Implement cross-platform filesystem watcher (via fsnotify)
-   - Watches source directories for changes
-   - Tracks dirty paths in memory
-   - Automatically watches new directories
+1. Implement inotify file watcher for Linux
+   - Watch source directories for changes
+   - Track dirty paths in memory
 
-1. [x] Implement backup scheduler in daemon mode
-   - Respects backup_interval config
-   - Triggers backup when dirty paths exist and interval elapsed
-   - Implements full_scan_interval for periodic full scans
-   - Respects min_time_between_run to prevent excessive runs
+1. Implement FSEvents watcher for macOS
+   - Watch source directories for changes
+   - Track dirty paths in memory
 
-1. [x] Add proper signal handling for daemon
+1. Implement backup scheduler in daemon mode
+   - Respect backup_interval config
+   - Trigger backup when dirty paths exist and interval elapsed
+   - Implement full_scan_interval for periodic full scans
+
+1. Add proper signal handling for daemon
    - Graceful shutdown on SIGTERM/SIGINT
-   - Completes in-progress backup before exit
+   - Complete in-progress backup before exit
 
-1. [x] Write tests for daemon mode
+1. Write tests for daemon mode
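The earlier TODO planned separate inotify (Linux) and FSEvents (macOS) watchers; the completed checklist items use fsnotify instead, which wraps inotify, FSEvents, and ReadDirectoryChangesW behind a single API. A minimal, self-contained sketch of that API follows; the real watcher later in this compare additionally walks directories recursively and records events in a change tracker rather than logging them.

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Create a watcher backed by the platform's native notification API.
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch a single directory (fsnotify does not recurse by itself).
	if err := watcher.Add("/tmp"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			log.Println("event:", event.Op, event.Name)
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
```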
go.mod (3 changed lines)

@@ -13,7 +13,6 @@ require (
 	github.com/aws/aws-sdk-go-v2/service/s3 v1.90.0
 	github.com/aws/smithy-go v1.23.2
 	github.com/dustin/go-humanize v1.0.1
-	github.com/fsnotify/fsnotify v1.9.0
 	github.com/gobwas/glob v0.2.3
 	github.com/google/uuid v1.6.0
 	github.com/johannesboyne/gofakes3 v0.0.0-20250603205740-ed9094be7668
@@ -25,7 +24,6 @@ require (
 	github.com/spf13/cobra v1.10.1
 	github.com/stretchr/testify v1.11.1
 	go.uber.org/fx v1.24.0
-	golang.org/x/sync v0.18.0
 	golang.org/x/term v0.37.0
 	gopkg.in/yaml.v3 v3.0.1
 	modernc.org/sqlite v1.38.0
@@ -268,6 +266,7 @@ require (
 	golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
 	golang.org/x/net v0.47.0 // indirect
 	golang.org/x/oauth2 v0.33.0 // indirect
+	golang.org/x/sync v0.18.0 // indirect
 	golang.org/x/sys v0.38.0 // indirect
 	golang.org/x/text v0.31.0 // indirect
 	golang.org/x/time v0.14.0 // indirect
go.sum (4 changed lines)

@@ -286,8 +286,8 @@ github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2
 github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
 github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
 github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
-github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
-github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
+github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
+github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
 github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
 github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
 github.com/gabriel-vasile/mimetype v1.4.11 h1:AQvxbp830wPhHTqc1u7nzoLT+ZFxGY7emj5DR5DYFik=
@@ -1,434 +0,0 @@
package vaultik

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"path/filepath"
	"strings"
	"sync"
	"syscall"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/log"
	"github.com/fsnotify/fsnotify"
)

// daemonMinBackupInterval is the absolute minimum time allowed between backup runs,
// regardless of config, to prevent runaway backup loops.
const daemonMinBackupInterval = 1 * time.Minute

// daemonShutdownTimeout is the maximum time to wait for an in-progress backup
// to complete during graceful shutdown before force-exiting.
const daemonShutdownTimeout = 5 * time.Minute

// RunDaemon runs vaultik in daemon mode: it watches configured directories for
// changes using filesystem notifications, runs periodic backups at the configured
// interval, and performs full scans at the full_scan_interval. It handles
// SIGTERM/SIGINT for graceful shutdown, completing any in-progress backup before
// exiting.
func (v *Vaultik) RunDaemon(opts *SnapshotCreateOptions) error {
	backupInterval := v.Config.BackupInterval
	if backupInterval < daemonMinBackupInterval {
		backupInterval = daemonMinBackupInterval
	}

	minTimeBetween := v.Config.MinTimeBetweenRun
	if minTimeBetween < daemonMinBackupInterval {
		minTimeBetween = daemonMinBackupInterval
	}

	fullScanInterval := v.Config.FullScanInterval
	if fullScanInterval <= 0 {
		fullScanInterval = 24 * time.Hour
	}

	log.Info("Starting daemon mode",
		"backup_interval", backupInterval,
		"min_time_between_run", minTimeBetween,
		"full_scan_interval", fullScanInterval,
	)
	v.printfStdout("Daemon mode started\n")
	v.printfStdout("  Backup interval: %s\n", backupInterval)
	v.printfStdout("  Min time between: %s\n", minTimeBetween)
	v.printfStdout("  Full scan interval: %s\n", fullScanInterval)

	// Create a daemon-scoped context that we cancel on signal.
	ctx, cancel := context.WithCancel(v.ctx)
	defer cancel()

	// Set up signal handling for graceful shutdown.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)

	// Tracker for filesystem change events.
	tracker := newChangeTracker()

	// Start the filesystem watcher.
	watcher, err := v.startWatcher(ctx, tracker)
	if err != nil {
		return fmt.Errorf("starting filesystem watcher: %w", err)
	}
	defer func() { _ = watcher.Close() }()

	// Timers
	backupTicker := time.NewTicker(backupInterval)
	defer backupTicker.Stop()

	fullScanTicker := time.NewTicker(fullScanInterval)
	defer fullScanTicker.Stop()

	var lastBackupTime time.Time
	backupRunning := make(chan struct{}, 1) // semaphore: 1 = backup in progress

	// Run an initial full backup immediately on startup.
	log.Info("Running initial backup on daemon startup")
	v.printfStdout("Running initial backup...\n")
	if err := v.runDaemonBackup(ctx, opts, tracker, false); err != nil {
		if ctx.Err() != nil {
			return nil // context cancelled, shutting down
		}
		log.Error("Initial backup failed", "error", err)
		v.printfStderr("Initial backup failed: %v\n", err)
		// Continue running — next scheduled backup may succeed.
	} else {
		lastBackupTime = time.Now()
		tracker.reset()
	}

	v.printfStdout("Watching for changes...\n")

	for {
		select {
		case <-ctx.Done():
			log.Info("Daemon context cancelled, shutting down")
			return nil

		case sig := <-sigCh:
			log.Info("Received signal, initiating graceful shutdown", "signal", sig)
			v.printfStdout("\nReceived %s, shutting down...\n", sig)
			cancel()

			// Wait for any in-progress backup to finish.
			select {
			case backupRunning <- struct{}{}:
				// No backup running, we can exit immediately.
				<-backupRunning
			default:
				// Backup is running, wait for it to complete.
				v.printfStdout("Waiting for in-progress backup to complete...\n")
				shutdownTimer := time.NewTimer(daemonShutdownTimeout)
				select {
				case backupRunning <- struct{}{}:
					<-backupRunning
					shutdownTimer.Stop()
				case <-shutdownTimer.C:
					log.Warn("Shutdown timeout exceeded, forcing exit")
					v.printfStderr("Shutdown timeout exceeded, forcing exit\n")
				}
			}
			return nil

		case <-backupTicker.C:
			// Periodic backup tick. Only run if there are changes and enough
			// time has elapsed since the last run.
			if !tracker.hasChanges() {
				log.Debug("Backup tick: no changes detected, skipping")
				continue
			}
			if time.Since(lastBackupTime) < minTimeBetween {
				log.Debug("Backup tick: too soon since last backup",
					"last_backup", lastBackupTime,
					"min_interval", minTimeBetween,
				)
				continue
			}

			// Try to acquire the backup semaphore (non-blocking).
			select {
			case backupRunning <- struct{}{}:
			default:
				log.Debug("Backup tick: backup already in progress, skipping")
				continue
			}

			log.Info("Running scheduled backup", "changes", tracker.changeCount())
			v.printfStdout("Running scheduled backup (%d changes detected)...\n", tracker.changeCount())
			if err := v.runDaemonBackup(ctx, opts, tracker, false); err != nil {
				if ctx.Err() != nil {
					<-backupRunning
					return nil
				}
				log.Error("Scheduled backup failed", "error", err)
				v.printfStderr("Scheduled backup failed: %v\n", err)
			} else {
				lastBackupTime = time.Now()
				tracker.reset()
			}
			<-backupRunning

		case <-fullScanTicker.C:
			// Full scan — ignore whether changes were detected; do a complete scan.
			if time.Since(lastBackupTime) < minTimeBetween {
				log.Debug("Full scan tick: too soon since last backup, deferring")
				continue
			}

			select {
			case backupRunning <- struct{}{}:
			default:
				log.Debug("Full scan tick: backup already in progress, skipping")
				continue
			}

			log.Info("Running full periodic scan")
			v.printfStdout("Running full periodic scan...\n")
			if err := v.runDaemonBackup(ctx, opts, tracker, true); err != nil {
				if ctx.Err() != nil {
					<-backupRunning
					return nil
				}
				log.Error("Full scan backup failed", "error", err)
				v.printfStderr("Full scan backup failed: %v\n", err)
			} else {
				lastBackupTime = time.Now()
				tracker.reset()
			}
			<-backupRunning
		}
	}
}
// runDaemonBackup executes a single backup run within the daemon loop.
// If fullScan is true, all snapshots are processed regardless of tracked changes.
// Otherwise, only snapshots whose paths overlap with tracked changes are processed.
func (v *Vaultik) runDaemonBackup(ctx context.Context, opts *SnapshotCreateOptions, tracker *changeTracker, fullScan bool) error {
	startTime := time.Now()

	// Build a one-shot create options for this run.
	runOpts := &SnapshotCreateOptions{
		Cron:       opts.Cron,
		Prune:      opts.Prune,
		SkipErrors: opts.SkipErrors,
	}

	if !fullScan {
		// Filter to only snapshots whose paths had changes.
		changedPaths := tracker.changedPaths()
		affected := v.snapshotsAffectedByChanges(changedPaths)
		if len(affected) == 0 {
			log.Debug("No snapshots affected by changes")
			return nil
		}
		runOpts.Snapshots = affected
		log.Info("Running incremental backup for affected snapshots", "snapshots", affected)
	}
	// fullScan: leave runOpts.Snapshots empty → CreateSnapshot processes all.

	// Use a child context so cancellation propagates but we can still finish
	// if the parent hasn't been cancelled.
	childCtx, childCancel := context.WithCancel(ctx)
	defer childCancel()

	// Temporarily swap the Vaultik context.
	origCtx := v.ctx
	v.ctx = childCtx
	defer func() { v.ctx = origCtx }()

	if err := v.CreateSnapshot(runOpts); err != nil {
		return fmt.Errorf("backup run failed: %w", err)
	}

	log.Info("Daemon backup complete", "duration", time.Since(startTime))
	v.printfStdout("Backup complete in %s\n", formatDuration(time.Since(startTime)))
	return nil
}

// snapshotsAffectedByChanges returns the names of configured snapshots whose
// paths overlap with any of the changed paths.
func (v *Vaultik) snapshotsAffectedByChanges(changedPaths []string) []string {
	var affected []string
nextSnapshot:
	for _, snapName := range v.Config.SnapshotNames() {
		snapCfg := v.Config.Snapshots[snapName]
		for _, snapPath := range snapCfg.Paths {
			absSnapPath, err := filepath.Abs(snapPath)
			if err != nil {
				absSnapPath = snapPath
			}
			for _, changed := range changedPaths {
				if isSubpath(changed, absSnapPath) {
					affected = append(affected, snapName)
					// Move on to the next configured snapshot.
					continue nextSnapshot
				}
			}
		}
	}
	return affected
}

// isSubpath returns true if child is under parent (or equal to it).
func isSubpath(child, parent string) bool {
	// Normalize both paths.
	child = filepath.Clean(child)
	parent = filepath.Clean(parent)
	if child == parent {
		return true
	}
	// Ensure parent ends with a separator for prefix matching,
	// unless parent is the root directory (which already ends with /).
	prefix := parent
	if !strings.HasSuffix(prefix, string(filepath.Separator)) {
		prefix += string(filepath.Separator)
	}
	return strings.HasPrefix(child, prefix)
}
// startWatcher creates an fsnotify watcher and adds all configured snapshot paths.
// It spawns a goroutine that reads events and feeds the change tracker.
func (v *Vaultik) startWatcher(ctx context.Context, tracker *changeTracker) (*fsnotify.Watcher, error) {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return nil, fmt.Errorf("creating watcher: %w", err)
	}

	// Collect unique absolute paths to watch.
	watchPaths := make(map[string]struct{})
	for _, snapName := range v.Config.SnapshotNames() {
		snapCfg := v.Config.Snapshots[snapName]
		for _, p := range snapCfg.Paths {
			absPath, err := filepath.Abs(p)
			if err != nil {
				log.Warn("Failed to resolve absolute path for watch", "path", p, "error", err)
				continue
			}
			watchPaths[absPath] = struct{}{}
		}
	}

	// Add paths to watcher. Walk the top-level to add subdirectories
	// since fsnotify doesn't recurse automatically.
	for p := range watchPaths {
		if err := v.addWatchRecursive(watcher, p); err != nil {
			log.Warn("Failed to watch path", "path", p, "error", err)
			// Non-fatal: the path might not exist yet.
		}
	}

	// Spawn the event reader goroutine.
	go v.watcherLoop(ctx, watcher, tracker)

	return watcher, nil
}

// addWatchRecursive walks a directory tree and adds each directory to the watcher.
func (v *Vaultik) addWatchRecursive(watcher *fsnotify.Watcher, root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			// Can't read — skip this subtree.
			if info != nil && info.IsDir() {
				return filepath.SkipDir
			}
			return nil
		}
		if info.IsDir() {
			// Skip common directories that don't need watching.
			base := filepath.Base(path)
			if base == ".git" || base == "node_modules" || base == "__pycache__" {
				return filepath.SkipDir
			}
			if err := watcher.Add(path); err != nil {
				log.Debug("Failed to watch directory", "path", path, "error", err)
				// Non-fatal: continue walking.
			}
		}
		return nil
	})
}

// watcherLoop reads filesystem events from the watcher and records them
// in the change tracker. It runs until the context is cancelled.
func (v *Vaultik) watcherLoop(ctx context.Context, watcher *fsnotify.Watcher, tracker *changeTracker) {
	for {
		select {
		case <-ctx.Done():
			return
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// Only track write/create/remove/rename events.
			if event.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove|fsnotify.Rename) != 0 {
				tracker.recordChange(event.Name)
				log.Debug("Filesystem change detected", "path", event.Name, "op", event.Op)
			}
			// If a new directory was created, watch it too.
			if event.Op&fsnotify.Create != 0 {
				if info, err := os.Stat(event.Name); err == nil && info.IsDir() {
					if err := v.addWatchRecursive(watcher, event.Name); err != nil {
						log.Debug("Failed to watch new directory", "path", event.Name, "error", err)
					}
				}
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Warn("Filesystem watcher error", "error", err)
		}
	}
}
// changeTracker records filesystem paths that have been modified since the
// last backup. It is safe for concurrent use.
type changeTracker struct {
	mu      sync.Mutex
	changes map[string]time.Time // path → last change time
}

// newChangeTracker creates a new empty change tracker.
func newChangeTracker() *changeTracker {
	return &changeTracker{
		changes: make(map[string]time.Time),
	}
}

// recordChange records that a path has been modified.
func (ct *changeTracker) recordChange(path string) {
	ct.mu.Lock()
	ct.changes[path] = time.Now()
	ct.mu.Unlock()
}

// hasChanges returns true if any changes have been recorded.
func (ct *changeTracker) hasChanges() bool {
	ct.mu.Lock()
	defer ct.mu.Unlock()
	return len(ct.changes) > 0
}

// changeCount returns the number of unique changed paths.
func (ct *changeTracker) changeCount() int {
	ct.mu.Lock()
	defer ct.mu.Unlock()
	return len(ct.changes)
}

// changedPaths returns all changed paths.
func (ct *changeTracker) changedPaths() []string {
	ct.mu.Lock()
	defer ct.mu.Unlock()
	paths := make([]string, 0, len(ct.changes))
	for p := range ct.changes {
		paths = append(paths, p)
	}
	return paths
}

// reset clears all recorded changes.
func (ct *changeTracker) reset() {
	ct.mu.Lock()
	ct.changes = make(map[string]time.Time)
	ct.mu.Unlock()
}
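The CLI wiring for the flags documented in README.md is not included in this compare; `CreateSnapshot` (shown further down) simply calls `RunDaemon` when `opts.Daemon` is set. The sketch below shows how a spf13/cobra subcommand could populate `SnapshotCreateOptions`. The constructor name, package name, and import path are assumptions for illustration, not the repository's actual CLI code.

```go
package cli

import (
	"github.com/spf13/cobra"

	// Assumed import path; the code above declares "package vaultik" under
	// internal/, but its exact directory is not visible in this compare.
	"git.eeqj.de/sneak/vaultik/internal/vaultik"
)

// newSnapshotCreateCmd sketches how the flags documented in README.md could
// be mapped onto SnapshotCreateOptions. Names and structure are illustrative.
func newSnapshotCreateCmd(app *vaultik.Vaultik) *cobra.Command {
	opts := &vaultik.SnapshotCreateOptions{}
	cmd := &cobra.Command{
		Use:   "create [names...]",
		Short: "Create snapshots of the configured paths",
		RunE: func(_ *cobra.Command, args []string) error {
			// Optional snapshot names argument (default: all).
			opts.Snapshots = args
			// CreateSnapshot dispatches to RunDaemon when --daemon is set.
			return app.CreateSnapshot(opts)
		},
	}
	cmd.Flags().BoolVar(&opts.Daemon, "daemon", false, "run continuously with filesystem monitoring")
	cmd.Flags().BoolVar(&opts.Cron, "cron", false, "silent unless error (for crontab)")
	cmd.Flags().BoolVar(&opts.Prune, "prune", false, "delete old snapshots and orphaned blobs after backup")
	cmd.Flags().BoolVar(&opts.SkipErrors, "skip-errors", false, "skip file read errors but continue")
	return cmd
}
```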
@@ -1,196 +0,0 @@
package vaultik

import (
	"bytes"
	"context"
	"os"
	"path/filepath"
	"testing"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestNewChangeTracker(t *testing.T) {
	ct := newChangeTracker()
	require.NotNil(t, ct)
	assert.False(t, ct.hasChanges())
	assert.Equal(t, 0, ct.changeCount())
	assert.Empty(t, ct.changedPaths())
}

func TestChangeTrackerRecordChange(t *testing.T) {
	ct := newChangeTracker()

	ct.recordChange("/home/user/file1.txt")
	assert.True(t, ct.hasChanges())
	assert.Equal(t, 1, ct.changeCount())

	ct.recordChange("/home/user/file2.txt")
	assert.Equal(t, 2, ct.changeCount())

	// Duplicate path should update time but not increase count.
	ct.recordChange("/home/user/file1.txt")
	assert.Equal(t, 2, ct.changeCount())

	paths := ct.changedPaths()
	assert.Len(t, paths, 2)
	assert.Contains(t, paths, "/home/user/file1.txt")
	assert.Contains(t, paths, "/home/user/file2.txt")
}

func TestChangeTrackerReset(t *testing.T) {
	ct := newChangeTracker()

	ct.recordChange("/home/user/file1.txt")
	ct.recordChange("/home/user/file2.txt")
	assert.Equal(t, 2, ct.changeCount())

	ct.reset()
	assert.False(t, ct.hasChanges())
	assert.Equal(t, 0, ct.changeCount())
	assert.Empty(t, ct.changedPaths())
}

func TestChangeTrackerConcurrency(t *testing.T) {
	ct := newChangeTracker()
	done := make(chan struct{})

	// Write from multiple goroutines simultaneously.
	for i := 0; i < 10; i++ {
		go func(n int) {
			for j := 0; j < 100; j++ {
				ct.recordChange("/path/" + string(rune('a'+n)))
			}
			done <- struct{}{}
		}(i)
	}

	// Also read concurrently.
	go func() {
		for i := 0; i < 100; i++ {
			_ = ct.hasChanges()
			_ = ct.changeCount()
			_ = ct.changedPaths()
		}
		done <- struct{}{}
	}()

	// Wait for all goroutines.
	for i := 0; i < 11; i++ {
		<-done
	}

	assert.True(t, ct.hasChanges())
	assert.LessOrEqual(t, ct.changeCount(), 10) // 10 unique paths
}

func TestChangeTrackerRecordTimestamp(t *testing.T) {
	ct := newChangeTracker()

	before := time.Now()
	ct.recordChange("/some/path")
	after := time.Now()

	ct.mu.Lock()
	ts := ct.changes["/some/path"]
	ct.mu.Unlock()

	assert.False(t, ts.Before(before))
	assert.False(t, ts.After(after))
}
func TestIsSubpath(t *testing.T) {
	tests := []struct {
		child    string
		parent   string
		expected bool
	}{
		{"/home/user/file.txt", "/home/user", true},
		{"/home/user", "/home/user", true},
		{"/home/user/deep/nested/file.txt", "/home/user", true},
		{"/home/other/file.txt", "/home/user", false},
		{"/home/username/file.txt", "/home/user", false}, // not a subpath, just prefix match
		{"/etc/config", "/home/user", false},
		{"/", "/", true},
		{"/a", "/", true},
		{"/a/b", "/a", true},
		{"/ab", "/a", false},
	}

	for _, tt := range tests {
		t.Run(tt.child+"_under_"+tt.parent, func(t *testing.T) {
			result := isSubpath(tt.child, tt.parent)
			assert.Equal(t, tt.expected, result)
		})
	}
}

func TestSnapshotsAffectedByChanges(t *testing.T) {
	// We can't easily test this without a full Vaultik instance with config,
	// but we can verify the helper function isSubpath which it depends on.
	// The full integration is tested via the daemon integration test.

	// Verify basic subpath logic used by snapshotsAffectedByChanges.
	assert.True(t, isSubpath("/home/user/docs/report.txt", "/home/user"))
	assert.False(t, isSubpath("/var/log/syslog", "/home/user"))
}

func TestDaemonConstants(t *testing.T) {
	// Verify daemon constants are reasonable values.
	assert.GreaterOrEqual(t, daemonMinBackupInterval, 1*time.Minute)
	assert.GreaterOrEqual(t, daemonShutdownTimeout, 1*time.Minute)
}

func TestRunDaemon_CancelledContext(t *testing.T) {
	// Create a temporary directory to use as a snapshot path.
	tmpDir := t.TempDir()

	// Write a file so the watched path is non-empty.
	err := os.WriteFile(filepath.Join(tmpDir, "testfile.txt"), []byte("hello"), 0o644)
	require.NoError(t, err)

	// Build a minimal Vaultik with daemon-friendly config.
	// RunDaemon will fail on the initial backup (no storage configured),
	// but it should continue running. We cancel the context to verify
	// graceful shutdown.
	ctx, cancel := context.WithCancel(context.Background())
	stdout := &bytes.Buffer{}
	stderr := &bytes.Buffer{}

	v := &Vaultik{
		Config: &config.Config{
			BackupInterval:    1 * time.Hour,
			FullScanInterval:  24 * time.Hour,
			MinTimeBetweenRun: 1 * time.Minute,
			Snapshots: map[string]config.SnapshotConfig{
				"test": {
					Paths: []string{tmpDir},
				},
			},
		},
		ctx:    ctx,
		cancel: cancel,
		Stdout: stdout,
		Stderr: stderr,
	}

	// Cancel the context shortly after RunDaemon starts so the daemon
	// loop exits via its ctx.Done() path.
	go func() {
		// Wait for the initial backup to fail (it will, since there's no
		// storage backend), then cancel.
		time.Sleep(200 * time.Millisecond)
		cancel()
	}()

	err = v.RunDaemon(&SnapshotCreateOptions{})
	// RunDaemon should return nil on context cancellation (graceful shutdown).
	assert.NoError(t, err)

	// Verify daemon printed startup messages.
	output := stdout.String()
	assert.Contains(t, output, "Daemon mode started")
}
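To run just the daemon unit tests above, something like the following should work; the package path is an assumption, since the file's location in the tree is not shown in this compare.

```sh
go test ./internal/vaultik/ -run 'TestChangeTracker|TestIsSubpath|TestRunDaemon' -v
```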
@@ -80,9 +80,8 @@ func parseSnapshotTimestamp(snapshotID string) (time.Time, error) {
 }
 
 // parseSnapshotName extracts the snapshot name from a snapshot ID.
-// Format: hostname_snapshotname_timestamp — the middle part(s) between hostname
-// and the RFC3339 timestamp are the snapshot name (may contain underscores).
-// Returns the snapshot name, or empty string if the ID is malformed.
+// Format: hostname_snapshotname_timestamp (3 parts) or hostname_timestamp (2 parts, no name).
+// Returns the snapshot name, or empty string if no name component is present.
 func parseSnapshotName(snapshotID string) string {
 	parts := strings.Split(snapshotID, "_")
 	if len(parts) < 3 {
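The hunk above is truncated inside `parseSnapshotName`. Based on the removed doc comment and the test table in the next hunk, the rest of the function presumably joins the middle underscore-separated parts back together. A sketch of that reconstruction, not the actual source:

```go
package vaultik

import "strings"

// parseSnapshotName: reconstruction of the truncated function above, based on
// its doc comment and on TestParseSnapshotName; the real body may differ.
func parseSnapshotName(snapshotID string) string {
	parts := strings.Split(snapshotID, "_")
	if len(parts) < 3 {
		// hostname_timestamp (legacy, no name) or malformed ID.
		return ""
	}
	// Everything between the hostname and the trailing RFC3339 timestamp is
	// the snapshot name; it may itself contain underscores.
	return strings.Join(parts[1:len(parts)-1], "_")
}
```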
@@ -20,11 +20,26 @@ func TestParseSnapshotName(t *testing.T) {
 			snapshotID: "server1_system_2026-02-15T09:30:00Z",
 			want:       "system",
 		},
+		{
+			name:       "no snapshot name (legacy format)",
+			snapshotID: "myhost_2026-01-12T14:41:15Z",
+			want:       "",
+		},
 		{
 			name:       "name with underscores",
 			snapshotID: "myhost_my_special_backup_2026-03-01T00:00:00Z",
 			want:       "my_special_backup",
 		},
+		{
+			name:       "single part (edge case)",
+			snapshotID: "nounderscore",
+			want:       "",
+		},
+		{
+			name:       "empty string",
+			snapshotID: "",
+			want:       "",
+		},
 	}
 
 	for _, tt := range tests {
@@ -74,3 +89,31 @@ func TestParseSnapshotTimestamp(t *testing.T) {
 		})
 	}
 }
+
+func TestSnapshotPurgeOptions(t *testing.T) {
+	opts := &SnapshotPurgeOptions{
+		KeepLatest: true,
+		Name:       "home",
+		Force:      true,
+	}
+	if !opts.KeepLatest {
+		t.Error("Expected KeepLatest to be true")
+	}
+	if opts.Name != "home" {
+		t.Errorf("Expected Name to be 'home', got %q", opts.Name)
+	}
+	if !opts.Force {
+		t.Error("Expected Force to be true")
+	}
+
+	opts2 := &SnapshotPurgeOptions{
+		OlderThan: "30d",
+		Name:      "system",
+	}
+	if opts2.OlderThan != "30d" {
+		t.Errorf("Expected OlderThan to be '30d', got %q", opts2.OlderThan)
+	}
+	if opts2.Name != "system" {
+		t.Errorf("Expected Name to be 'system', got %q", opts2.Name)
+	}
+}
@@ -228,6 +228,53 @@ func TestPurgeOlderThan_WithNameFilter(t *testing.T) {
 	assert.Contains(t, remaining, "testhost_home_2026-01-01T00:00:00Z")
 }
 
+func TestPurgeKeepLatest_LegacyNoNameSnapshots(t *testing.T) {
+	// Legacy snapshots without a name component (hostname_timestamp).
+	// Should be grouped together under empty-name.
+	snapshotIDs := []string{
+		"testhost_2026-01-01T00:00:00Z",
+		"testhost_2026-01-01T01:00:00Z",
+		"testhost_2026-01-01T02:00:00Z",
+	}
+
+	v := setupPurgeTest(t, snapshotIDs)
+
+	err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
+		KeepLatest: true,
+		Force:      true,
+	})
+	require.NoError(t, err)
+
+	remaining := listRemainingSnapshots(t, v)
+	assert.Len(t, remaining, 1)
+	assert.Contains(t, remaining, "testhost_2026-01-01T02:00:00Z")
+}
+
+func TestPurgeKeepLatest_MixedNamedAndLegacy(t *testing.T) {
+	// Mix of named snapshots and legacy ones (no name).
+	snapshotIDs := []string{
+		"testhost_2026-01-01T00:00:00Z",
+		"testhost_home_2026-01-01T01:00:00Z",
+		"testhost_2026-01-01T02:00:00Z",
+		"testhost_home_2026-01-01T03:00:00Z",
+	}
+
+	v := setupPurgeTest(t, snapshotIDs)
+
+	err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
+		KeepLatest: true,
+		Force:      true,
+	})
+	require.NoError(t, err)
+
+	remaining := listRemainingSnapshots(t, v)
+
+	// Should keep latest of each group: latest legacy + latest home
+	assert.Len(t, remaining, 2)
+	assert.Contains(t, remaining, "testhost_2026-01-01T02:00:00Z")
+	assert.Contains(t, remaining, "testhost_home_2026-01-01T03:00:00Z")
+}
+
 func TestPurgeKeepLatest_ThreeNames(t *testing.T) {
 	// Three different snapshot names with multiple snapshots each.
 	snapshotIDs := []string{
@@ -8,7 +8,6 @@ import (
 	"regexp"
 	"sort"
 	"strings"
-	"sync"
 	"text/tabwriter"
 	"time"
 
@@ -17,7 +16,6 @@ import (
 	"git.eeqj.de/sneak/vaultik/internal/snapshot"
 	"git.eeqj.de/sneak/vaultik/internal/types"
 	"github.com/dustin/go-humanize"
-	"golang.org/x/sync/errgroup"
 )
 
 // SnapshotCreateOptions contains options for the snapshot create command
@@ -58,7 +56,9 @@ func (v *Vaultik) CreateSnapshot(opts *SnapshotCreateOptions) error {
 	}
 
 	if opts.Daemon {
-		return v.RunDaemon(opts)
+		log.Info("Running in daemon mode")
+		// TODO: Implement daemon mode with inotify
+		return fmt.Errorf("daemon mode not yet implemented")
 	}
 
 	// Determine which snapshots to process
@@ -95,10 +95,7 @@ func (v *Vaultik) CreateSnapshot(opts *SnapshotCreateOptions) error {
 		log.Info("Pruning enabled - deleting old snapshots and unreferenced blobs")
 		v.printlnStdout("\nPruning old snapshots (keeping latest)...")
 
-		if err := v.PurgeSnapshotsWithOptions(&SnapshotPurgeOptions{
-			KeepLatest: true,
-			Force:      true,
-		}); err != nil {
+		if err := v.PurgeSnapshots(true, "", true); err != nil {
 			return fmt.Errorf("prune: purging old snapshots: %w", err)
 		}
 
@@ -441,9 +438,6 @@ func (v *Vaultik) reconcileLocalWithRemote(remoteSnapshots map[string]bool) (map
 func (v *Vaultik) buildSnapshotInfoList(remoteSnapshots map[string]bool, localSnapshotMap map[string]*database.Snapshot) ([]SnapshotInfo, error) {
 	snapshots := make([]SnapshotInfo, 0, len(remoteSnapshots))
 
-	// remoteOnly collects snapshot IDs that need a manifest download.
-	var remoteOnly []string
-
 	for snapshotID := range remoteSnapshots {
 		if localSnap, exists := localSnapshotMap[snapshotID]; exists && localSnap.CompletedAt != nil {
 			totalSize, err := v.Repositories.Snapshots.GetSnapshotTotalCompressedSize(v.ctx, snapshotID)
@@ -464,73 +458,16 @@ func (v *Vaultik) buildSnapshotInfoList(remoteSnapshots map[string]bool, localSn
 				continue
 			}
 
-			// Pre-add with zero size; will be filled by concurrent downloads.
+			totalSize, err := v.getManifestSize(snapshotID)
+			if err != nil {
+				return nil, fmt.Errorf("failed to get manifest size for %s: %w", snapshotID, err)
+			}
+
 			snapshots = append(snapshots, SnapshotInfo{
 				ID:             types.SnapshotID(snapshotID),
 				Timestamp:      timestamp,
-				CompressedSize: 0,
+				CompressedSize: totalSize,
 			})
-			remoteOnly = append(remoteOnly, snapshotID)
-		}
-	}
-
-	// Download manifests concurrently for remote-only snapshots.
-	if len(remoteOnly) > 0 {
-		// maxConcurrentManifestDownloads bounds parallel manifest fetches to
-		// avoid overwhelming the S3 endpoint while still being much faster
-		// than serial downloads.
-		const maxConcurrentManifestDownloads = 10
-
-		type manifestResult struct {
-			snapshotID string
-			size       int64
-		}
-
-		var (
-			mu      sync.Mutex
-			results []manifestResult
-		)
-
-		g, gctx := errgroup.WithContext(v.ctx)
-		g.SetLimit(maxConcurrentManifestDownloads)
-
-		for _, sid := range remoteOnly {
-			g.Go(func() error {
-				manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", sid)
-				reader, err := v.Storage.Get(gctx, manifestPath)
-				if err != nil {
-					return fmt.Errorf("downloading manifest for %s: %w", sid, err)
-				}
-				defer func() { _ = reader.Close() }()
-
-				manifest, err := snapshot.DecodeManifest(reader)
-				if err != nil {
-					return fmt.Errorf("decoding manifest for %s: %w", sid, err)
-				}
-
-				mu.Lock()
-				results = append(results, manifestResult{
-					snapshotID: sid,
-					size:       manifest.TotalCompressedSize,
-				})
-				mu.Unlock()
-				return nil
-			})
-		}
-
-		if err := g.Wait(); err != nil {
-			return nil, fmt.Errorf("fetching manifest sizes: %w", err)
-		}
-
-		// Build a lookup from results and patch the pre-added entries.
-		sizeMap := make(map[string]int64, len(results))
-		for _, r := range results {
-			sizeMap[r.snapshotID] = r.size
-		}
-		for i := range snapshots {
-			if sz, ok := sizeMap[string(snapshots[i].ID)]; ok {
-				snapshots[i].CompressedSize = sz
-			}
 		}
 	}
 
@@ -591,7 +528,18 @@ type SnapshotPurgeOptions struct {
 	Name       string // Filter purge to a specific snapshot name
 }
 
-// PurgeSnapshotsWithOptions removes old snapshots based on criteria.
+// PurgeSnapshots removes old snapshots based on criteria.
+// When keepLatest is true, retention is applied per snapshot name — the latest
+// snapshot for each distinct name is kept.
+func (v *Vaultik) PurgeSnapshots(keepLatest bool, olderThan string, force bool) error {
+	return v.PurgeSnapshotsWithOptions(&SnapshotPurgeOptions{
+		KeepLatest: keepLatest,
+		OlderThan:  olderThan,
+		Force:      force,
+	})
+}
+
+// PurgeSnapshotsWithOptions removes old snapshots based on criteria with full options.
 // When KeepLatest is true, retention is applied per snapshot name — the latest
 // snapshot for each distinct name is kept. If Name is non-empty, only snapshots
 // matching that name are considered for purge.
@@ -869,6 +817,23 @@ func (v *Vaultik) outputVerifyJSON(result *VerifyResult) error {
 
 // Helper methods that were previously on SnapshotApp
 
+func (v *Vaultik) getManifestSize(snapshotID string) (int64, error) {
+	manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)
+
+	reader, err := v.Storage.Get(v.ctx, manifestPath)
+	if err != nil {
+		return 0, fmt.Errorf("downloading manifest: %w", err)
+	}
+	defer func() { _ = reader.Close() }()
+
+	manifest, err := snapshot.DecodeManifest(reader)
+	if err != nil {
+		return 0, fmt.Errorf("decoding manifest: %w", err)
+	}
+
+	return manifest.TotalCompressedSize, nil
+}
+
 func (v *Vaultik) downloadManifest(snapshotID string) (*snapshot.Manifest, error) {
 	manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)
 
@@ -1052,7 +1017,6 @@ func (v *Vaultik) listAllRemoteSnapshotIDs() ([]string, error) {
 	log.Info("Listing all snapshots")
 	objectCh := v.Storage.ListStream(v.ctx, "metadata/")
 
-	seen := make(map[string]bool)
 	var snapshotIDs []string
 	for object := range objectCh {
 		if object.Err != nil {
@@ -1067,8 +1031,14 @@ func (v *Vaultik) listAllRemoteSnapshotIDs() ([]string, error) {
 		}
 		if strings.HasSuffix(object.Key, "/") || strings.Contains(object.Key, "/manifest.json.zst") {
 			sid := parts[1]
-			if !seen[sid] {
-				seen[sid] = true
+			found := false
+			for _, id := range snapshotIDs {
+				if id == sid {
+					found = true
+					break
+				}
+			}
+			if !found {
 				snapshotIDs = append(snapshotIDs, sid)
 			}
 		}