12 Commits

Author SHA1 Message Date
user
87acc05a77 fix: add RunDaemon test, remove dead daemonWatcherBatchDelay constant
All checks were successful
check / check (pull_request) Successful in 3m7s
- Add TestRunDaemon_CancelledContext: exercises RunDaemon with a
  daemon-friendly config, cancels via context, verifies clean return
  and startup output
- Remove unused daemonWatcherBatchDelay constant (batch-settle logic
  was never implemented; the watcher loop records changes immediately)
- Update TestDaemonConstants to remove reference to deleted constant
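For reference, the general shape of such a context-cancellation test (a standalone sketch with a stand-in run loop, not the actual daemon_test.go code):

```go
package daemon

import (
	"context"
	"errors"
	"testing"
	"time"
)

// runLoop stands in for a blocking daemon entry point such as RunDaemon.
func runLoop(ctx context.Context) error {
	<-ctx.Done() // block until shutdown is requested
	return ctx.Err()
}

func TestRunLoopCancelledContext(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	done := make(chan error, 1)
	go func() { done <- runLoop(ctx) }()

	cancel() // request shutdown

	select {
	case err := <-done:
		if err != nil && !errors.Is(err, context.Canceled) {
			t.Fatalf("unexpected error: %v", err)
		}
	case <-time.After(5 * time.Second):
		t.Fatal("run loop did not return after cancellation")
	}
}
```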
2026-03-24 13:40:08 -07:00
user
07a31a54d4 feat: implement daemon mode with filesystem watching
All checks were successful
check / check (pull_request) Successful in 4m57s
Replace the daemon mode stub with a full implementation that:

- Watches configured snapshot paths for filesystem changes using
  fsnotify (inotify on Linux, FSEvents on macOS, etc.)
- Runs an initial full backup on startup
- Triggers incremental backups at backup_interval when changes are
  detected, only for snapshots whose paths were affected
- Performs full periodic scans at full_scan_interval regardless of
  detected changes
- Respects min_time_between_run to prevent excessive backup runs
- Handles SIGTERM/SIGINT for graceful shutdown, completing any
  in-progress backup before exiting
- Automatically watches newly created subdirectories
- Uses a backup semaphore to prevent concurrent backup runs
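The overall loop shape, sketched (simplified; the tracker type, function names, and backup callback are illustrative stand-ins for the changeTracker and scheduling logic described above, not the actual daemon.go code):

```go
package main

import (
	"context"
	"log"
	"sync"
	"time"

	"github.com/fsnotify/fsnotify"
)

// tracker records which watched paths changed since the last backup run.
type tracker struct {
	mu    sync.Mutex
	dirty map[string]bool
}

func (t *tracker) mark(path string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.dirty[path] = true
}

// take returns and clears the set of changed paths.
func (t *tracker) take() []string {
	t.mu.Lock()
	defer t.mu.Unlock()
	paths := make([]string, 0, len(t.dirty))
	for p := range t.dirty {
		paths = append(paths, p)
	}
	t.dirty = map[string]bool{}
	return paths
}

func watchAndSchedule(ctx context.Context, roots []string, backupInterval time.Duration, backup func([]string)) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer func() { _ = w.Close() }()

	for _, root := range roots {
		if err := w.Add(root); err != nil {
			return err
		}
	}

	tr := &tracker{dirty: map[string]bool{}}
	tick := time.NewTicker(backupInterval)
	defer tick.Stop()

	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case ev := <-w.Events:
			tr.mark(ev.Name) // record the change; the real daemon also watches new subdirectories
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		case <-tick.C:
			if changed := tr.take(); len(changed) > 0 {
				backup(changed) // back up only snapshots whose paths were affected
			}
		}
	}
}
```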

New files:
- internal/vaultik/daemon.go: RunDaemon(), changeTracker, watcher setup
- internal/vaultik/daemon_test.go: Tests for changeTracker, isSubpath,
  concurrency safety, and daemon constants

closes #3
2026-03-24 09:48:06 -07:00
65da291ddf feat: per-name purge filtering for snapshot purge (#51)
All checks were successful
check / check (push) Successful in 4m50s
## Summary

`PurgeSnapshots` now applies `--keep-latest` retention per snapshot name instead of globally across all names.

### Problem

Previously, `--keep-latest` would keep only the single most recent snapshot across ALL snapshot names. For example, with snapshots:
- `system_2024-01-15`
- `home_2024-01-14`
- `system_2024-01-13`

`--keep-latest` would keep only `system_2024-01-15` and delete the latest `home` snapshot too.

### Solution

1. **Per-name retention**: `--keep-latest` now groups snapshots by name and keeps the latest of each group. In the example above, both `system_2024-01-15` and `home_2024-01-14` would be kept.

2. **`--name` flag**: New flag to filter purge operations to a specific snapshot name. `--name home --keep-latest` only purges `home` snapshots, leaving all `system` snapshots untouched.
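A rough sketch of the per-name retention rule described above (illustrative helper, assuming snapshot IDs are already sorted newest-first; not the actual `PurgeSnapshotsWithOptions` code):

```go
// keepLatestPerName returns the snapshot IDs to delete, keeping the newest
// snapshot of each name. snapshotIDs must be sorted newest-first.
func keepLatestPerName(snapshotIDs []string, nameOf func(string) string) []string {
	kept := make(map[string]bool)
	var toDelete []string
	for _, id := range snapshotIDs {
		name := nameOf(id)
		if kept[name] {
			toDelete = append(toDelete, id) // an older snapshot of this name
			continue
		}
		kept[name] = true // first (newest) snapshot seen for this name
	}
	return toDelete
}
```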

### Changes

- `internal/vaultik/helpers.go`: Add `parseSnapshotName()` to extract the snapshot name from a snapshot ID (`hostname_name_timestamp` format)
- `internal/vaultik/snapshot.go`: Add `SnapshotPurgeOptions` struct with `Name` field, add `PurgeSnapshotsWithOptions()` method, modify `--keep-latest` logic to group by name
- `internal/cli/purge.go` and `internal/cli/snapshot.go`: Add `--name` flag to both purge CLI surfaces
- `README.md`: Update CLI documentation
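For illustration, extracting the name from a `hostname_name_timestamp` ID can look roughly like this (a sketch; handling of legacy `hostname_timestamp` IDs here is an assumption, not the real `parseSnapshotName` behavior):

```go
// parseName extracts the snapshot name from an ID shaped like
// hostname_name_timestamp. Legacy IDs of the form hostname_timestamp
// have no name component and yield "".
func parseName(id string) string {
	parts := strings.Split(id, "_") // standard library "strings"
	if len(parts) < 3 {
		return "" // legacy ID without a name component
	}
	// Everything between the hostname and the trailing timestamp is the name.
	return strings.Join(parts[1:len(parts)-1], "_")
}
```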

### Tests

- `helpers_test.go`: Unit tests for `parseSnapshotName()` and `parseSnapshotTimestamp()`
- `purge_per_name_test.go`: Integration tests covering:
  - Per-name retention with multiple names
  - Single-name retention
  - `--name` filter with `--keep-latest`
  - `--name` filter with `--older-than`
  - No-match name filter (all snapshots retained)
  - Legacy snapshots without name component
  - Mixed named and legacy snapshots
  - Three different snapshot names

### Backward Compatibility

The existing `PurgeSnapshots(keepLatest, olderThan, force)` signature is preserved as a wrapper around the new `PurgeSnapshotsWithOptions()`. The `--prune` flag in `snapshot create` continues to work unchanged.

`docker build .` passes (lint, fmt-check, all tests).

closes [#9](#9)

Co-authored-by: user <user@Mac.lan guest wan>
Co-authored-by: clawbot <clawbot@noreply.git.eeqj.de>
Reviewed-on: #51
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-22 00:50:24 +01:00
dcf3ec399a feat: concurrent manifest downloads in ListSnapshots (#50)
All checks were successful
check / check (push) Successful in 2m27s
## Summary

Replace serial `getManifestSize()` calls in `ListSnapshots` with bounded concurrent downloads using `errgroup`. For each remote snapshot not in the local DB, manifest downloads now run in parallel (up to 10 concurrent goroutines) instead of one at a time.

## Changes

- Use `errgroup` with `SetLimit(10)` for bounded concurrency
- Collect remote-only snapshot IDs first, pre-add entries with zero size
- Download manifests concurrently, patch sizes from results
- Remove now-unused `getManifestSize` helper (logic inlined into goroutines)
- Promote `golang.org/x/sync` from indirect to direct dependency
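The bounded-concurrency pattern, sketched (the fetch callback and surrounding types are illustrative stand-ins, not the actual `ListSnapshots` code):

```go
package main

import (
	"context"
	"sync"

	"golang.org/x/sync/errgroup"
)

// fetchManifestSizes downloads manifest sizes for the given snapshot IDs with
// at most 10 concurrent downloads, collecting results under a mutex.
func fetchManifestSizes(ctx context.Context, ids []string, fetch func(context.Context, string) (int64, error)) (map[string]int64, error) {
	sizes := make(map[string]int64, len(ids))
	var mu sync.Mutex

	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(10) // bounded concurrency

	for _, id := range ids {
		id := id // capture loop variable for the goroutine
		g.Go(func() error {
			n, err := fetch(ctx, id)
			if err != nil {
				return err
			}
			mu.Lock()
			sizes[id] = n
			mu.Unlock()
			return nil
		})
	}
	return sizes, g.Wait()
}
```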

## Testing

- `make check` passes (fmt-check, lint, tests)
- `docker build .` passes

closes #8

Co-authored-by: user <user@Mac.lan guest wan>
Reviewed-on: #50
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-22 00:41:39 +01:00
495dede1bc fix: replace O(n²) duplicate detection with map-based O(1) lookups (#45)
Some checks failed
check / check (push) Has been cancelled
Replace linear scan deduplication of snapshot IDs in `RemoveAllSnapshots()` and `PruneBlobs()` with `map[string]bool` for O(1) lookups.

Previously, each new snapshot ID was checked against the entire collected slice via a linear scan, resulting in O(n²) overall complexity. Now a `seen` map provides constant-time membership checks while preserving insertion order in the slice.
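The pattern, in generic form (a sketch, not the exact vaultik code):

```go
// dedupe returns ids with duplicates removed, preserving first-seen order.
// Membership checks are O(1) via the seen map instead of rescanning the slice.
func dedupe(ids []string) []string {
	seen := make(map[string]bool, len(ids))
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if seen[id] {
			continue
		}
		seen[id] = true
		out = append(out, id)
	}
	return out
}
```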

**Changes:**
- `internal/vaultik/snapshot.go` (`RemoveAllSnapshots`): replaced linear `for` loop dedup with `seen` map
- `internal/vaultik/prune.go` (`PruneBlobs`): replaced linear `for` loop dedup with `seen` map

closes #12

Co-authored-by: clawbot <clawbot@noreply.git.eeqj.de>
Reviewed-on: #45
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-22 00:40:56 +01:00
1c72a37bc8 Remove all ctime usage and storage (#55)
All checks were successful
check / check (push) Successful in 5s
Remove all ctime from the codebase per sneak's decision on [PR #48](#48).

## Rationale

- ctime means different things on macOS (birth time) vs Linux (inode change time) — ambiguous cross-platform
- Vaultik never uses ctime operationally (scanning triggers on mtime change)
- Cannot be restored on either platform
- Write-only forensic data with no consumer

## Changes

- **Schema** (`internal/database/schema.sql`): Removed `ctime` column from `files` table
- **Model** (`internal/database/models.go`): Removed `CTime` field from `File` struct
- **Database layer** (`internal/database/files.go`): Removed ctime from all INSERT/SELECT queries, ON CONFLICT updates, and scan targets in both `scanFile` and `scanFileRows` helpers; updated `CreateBatch` accordingly
- **Scanner** (`internal/snapshot/scanner.go`): Removed `CTime: info.ModTime()` assignment in `checkFileInMemory()`
- **Tests**: Removed all `CTime` field assignments from 8 test files
- **Documentation**: Removed ctime references from `ARCHITECTURE.md` and `docs/DATAMODEL.md`

`docker build .` passes clean (lint, fmt-check, all tests).

closes #54

Co-authored-by: user <user@Mac.lan guest wan>
Reviewed-on: #55
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-20 03:12:46 +01:00
60b6746db9 schema: add ON DELETE CASCADE to snapshot_files.file_id and snapshot_blobs.blob_id FKs (#46)
All checks were successful
check / check (push) Successful in 2m47s
Add `ON DELETE CASCADE` to the two foreign keys that were missing it:

- `snapshot_files.file_id` → `files(id)`
- `snapshot_blobs.blob_id` → `blobs(id)`

This ensures that when a file or blob row is deleted, the corresponding snapshot junction rows are automatically cleaned up, consistent with the other CASCADE FKs already in the schema.

closes #19

Co-authored-by: user <user@Mac.lan guest wan>
Reviewed-on: #46
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-19 14:03:39 +01:00
f28c8a73b7 fix: add ON DELETE CASCADE to uploads FK on snapshot_id (#44)
All checks were successful
check / check (push) Successful in 2m24s
The `uploads` table's foreign key on `snapshot_id` did not cascade deletes, unlike `snapshot_files` and `snapshot_blobs`. This caused FK violations when deleting snapshots with associated upload records (if FK enforcement is enabled) unless uploads were manually deleted first.

Adds `ON DELETE CASCADE` to the `snapshot_id` FK in `schema.sql` for consistency with the other snapshot-referencing tables.

`docker build .` passes (fmt-check, lint, all tests, build).

closes #18

Co-authored-by: clawbot <clawbot@noreply.git.eeqj.de>
Reviewed-on: #44
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-19 13:59:27 +01:00
1c0f5b8eb2 Rename blob_fetch_stub.go to blob_fetch.go (#53)
All checks were successful
check / check (push) Successful in 4m28s
Renames `internal/vaultik/blob_fetch_stub.go` to `internal/vaultik/blob_fetch.go`.

The file contains production code (`hashVerifyReader`, `FetchAndDecryptBlob`), not stubs. The `_stub` suffix was a misnomer from the original implementation in [PR #39](#39).

Pure rename — no code changes. All tests, linting, and formatting pass.

closes #52

Co-authored-by: user <user@Mac.lan guest wan>
Reviewed-on: #53
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-19 09:33:35 +01:00
689109a2b8 fix: remove destructive sync from ListSnapshots (#49)
Some checks failed
check / check (push) Has been cancelled
## Summary

`ListSnapshots()` silently deleted local snapshot records not found in remote storage. A list/read operation should not have destructive side effects.

## Changes

1. **Removed destructive sync from `ListSnapshots()`** — the inline loop that deleted local snapshots not present in remote storage has been removed entirely. `ListSnapshots()` now only reads and displays data.

2. **Improved `syncWithRemote()` cascade cleanup** — updated `syncWithRemote()` to use `deleteSnapshotFromLocalDB()` instead of directly calling `Repositories.Snapshots.Delete()`. This ensures proper cascade deletion of related records (`snapshot_files`, `snapshot_blobs`, `snapshot_uploads`) before deleting the snapshot record itself, matching the thorough cleanup that the removed `ListSnapshots` code was doing.

The explicit sync behavior remains available via `syncWithRemote()`, which is called by `PurgeSnapshots()`.
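A rough sketch of the cascade order described above (junction table names are from the repository schema; the helper name and transaction wrapper are assumptions, not the actual `deleteSnapshotFromLocalDB` implementation):

```go
package main

import (
	"context"
	"database/sql"
)

// deleteSnapshotLocally removes the junction/auxiliary rows first, then the
// snapshot row itself, all within one transaction.
func deleteSnapshotLocally(ctx context.Context, db *sql.DB, snapshotID string) error {
	stmts := []string{
		"DELETE FROM snapshot_files WHERE snapshot_id = ?",
		"DELETE FROM snapshot_blobs WHERE snapshot_id = ?",
		"DELETE FROM uploads WHERE snapshot_id = ?",
		"DELETE FROM snapshots WHERE id = ?",
	}
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer func() { _ = tx.Rollback() }()
	for _, s := range stmts {
		if _, err := tx.ExecContext(ctx, s, snapshotID); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```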

## Testing

- `docker build .` passes (lint, fmt-check, all tests, compilation)

closes #15

Co-authored-by: clawbot <clawbot@eeqj.de>
Reviewed-on: #49
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-19 09:32:52 +01:00
ac2f21a89d Refactor: break up oversized methods into smaller descriptive helpers (#41)
All checks were successful
check / check (push) Successful in 4m17s
Closes #40

Per sneak's feedback on PR #37: methods were too long. This PR breaks all methods over 100-150 lines into smaller, descriptively named helper methods.

## Refactored methods (8 total)

| Original | Lines | Helpers extracted |
|---|---|---|
| `createNamedSnapshot` | 214 | `resolveSnapshotPaths`, `scanAllDirectories`, `collectUploadStats`, `finalizeSnapshotMetadata`, `printSnapshotSummary`, `getSnapshotBlobSizes`, `formatUploadSpeed` |
| `ListSnapshots` | 159 | `listRemoteSnapshotIDs`, `reconcileLocalWithRemote`, `buildSnapshotInfoList`, `printSnapshotTable` |
| `PruneBlobs` | 170 | `collectReferencedBlobs`, `listUniqueSnapshotIDs`, `listAllRemoteBlobs`, `findUnreferencedBlobs`, `deleteUnreferencedBlobs` |
| `RunDeepVerify` | 182 | `loadVerificationData`, `runVerificationSteps`, `deepVerifyFailure` |
| `RemoteInfo` | 187 | `collectSnapshotMetadata`, `collectReferencedBlobsFromManifests`, `populateRemoteInfoResult`, `scanRemoteBlobStorage`, `printRemoteInfoTable` |
| `handleBlobReady` | 173 | `uploadBlobIfNeeded`, `makeUploadProgressCallback`, `recordBlobMetadata`, `cleanupBlobTempFile` |
| `processFileStreaming` | 146 | `updateChunkStats`, `addChunkToPacker`, `queueFileForBatchInsert` |
| `finalizeCurrentBlob` | 167 | `closeBlobWriter`, `buildChunkRefs`, `commitBlobToDatabase`, `deliverFinishedBlob` |

## Verification

- `go build ./...` passes
- `make test` passes (all tests)
- `golangci-lint run` reports 0 issues
- No behavioral changes, pure restructuring

Co-authored-by: user <user@Mac.lan guest wan>
Reviewed-on: #41
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-19 00:23:45 +01:00
8c59f55096 fix: verify blob hash after download and decryption (closes #5) (#39)
All checks were successful
check / check (push) Successful in 2m27s
## Summary

Add double-SHA-256 hash verification of decrypted plaintext in `FetchAndDecryptBlob`. This ensures blob integrity during restore operations by comparing the computed hash against the expected blob hash before returning data to the caller.

The blob hash is `SHA256(SHA256(plaintext))` as produced by `blobgen.Writer.Sum256()`. Verification happens after decryption and decompression but before the data is used.
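The check itself reduces to the following comparison (a buffered sketch for clarity; the real code verifies the hash while streaming, as described above):

```go
// verifyBlobHash checks SHA256(SHA256(plaintext)) against the expected
// hex-encoded blob hash, mirroring how blobgen.Writer.Sum256() derives it.
// Uses crypto/sha256, encoding/hex, and fmt from the standard library.
func verifyBlobHash(plaintext []byte, expectedHex string) error {
	first := sha256.Sum256(plaintext)
	second := sha256.Sum256(first[:])
	if actual := hex.EncodeToString(second[:]); actual != expectedHex {
		return fmt.Errorf("blob hash mismatch: expected %s, got %s", expectedHex, actual)
	}
	return nil
}
```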

## Test

Added `blob_fetch_hash_test.go` with tests for:
- Correct hash passes verification
- Mismatched hash returns descriptive error

## make test output

```
golangci-lint run
0 issues.

ok  git.eeqj.de/sneak/vaultik/internal/blob       4.563s
ok  git.eeqj.de/sneak/vaultik/internal/blobgen    3.981s
ok  git.eeqj.de/sneak/vaultik/internal/chunker    4.127s
ok  git.eeqj.de/sneak/vaultik/internal/cli        1.499s
ok  git.eeqj.de/sneak/vaultik/internal/config     1.905s
ok  git.eeqj.de/sneak/vaultik/internal/crypto     0.519s
ok  git.eeqj.de/sneak/vaultik/internal/database   4.590s
ok  git.eeqj.de/sneak/vaultik/internal/globals    0.650s
ok  git.eeqj.de/sneak/vaultik/internal/models     0.779s
ok  git.eeqj.de/sneak/vaultik/internal/pidlock    2.945s
ok  git.eeqj.de/sneak/vaultik/internal/s3         3.286s
ok  git.eeqj.de/sneak/vaultik/internal/snapshot   3.979s
ok  git.eeqj.de/sneak/vaultik/internal/vaultik    4.418s
```

All tests pass, 0 lint issues.

Co-authored-by: user <user@Mac.lan guest wan>
Co-authored-by: clawbot <clawbot@noreply.git.eeqj.de>
Reviewed-on: #39
Co-authored-by: clawbot <sneak+clawbot@sneak.cloud>
Co-committed-by: clawbot <sneak+clawbot@sneak.cloud>
2026-03-19 00:21:11 +01:00
31 changed files with 1441 additions and 263 deletions

View File

@@ -54,7 +54,7 @@ The database tracks five primary entities and their relationships:
 #### File (`database.File`)
 Represents a file or directory in the backup system. Stores metadata needed for restoration:
-- Path, timestamps (mtime, ctime)
+- Path, mtime
 - Size, mode, ownership (uid, gid)
 - Symlink target (if applicable)

View File

@@ -150,7 +150,7 @@ passphrase is needed or stored locally.
vaultik [--config <path>] snapshot create [snapshot-names...] [--cron] [--daemon] [--prune]
vaultik [--config <path>] snapshot list [--json]
vaultik [--config <path>] snapshot verify <snapshot-id> [--deep]
-vaultik [--config <path>] snapshot purge [--keep-latest | --older-than <duration>] [--force]
+vaultik [--config <path>] snapshot purge [--keep-latest | --older-than <duration>] [--name <name>] [--force]
vaultik [--config <path>] snapshot remove <snapshot-id> [--dry-run] [--force]
vaultik [--config <path>] snapshot prune
vaultik [--config <path>] restore <snapshot-id> <target-dir> [paths...]
@@ -170,8 +170,9 @@ vaultik [--config <path>] store info
* Config is located at `/etc/vaultik/config.yml` by default
* Optional snapshot names argument to create specific snapshots (default: all)
* `--cron`: Silent unless error (for crontab)
-* `--daemon`: Run continuously with inotify monitoring and periodic scans
+* `--daemon`: Run continuously with filesystem monitoring and periodic scans (see [daemon mode](#daemon-mode))
* `--prune`: Delete old snapshots and orphaned blobs after backup
+* `--skip-errors`: Skip file read errors (log them loudly but continue)
**snapshot list**: List all snapshots with their timestamps and sizes
* `--json`: Output in JSON format
@@ -180,8 +181,9 @@ vaultik [--config <path>] store info
* `--deep`: Download and verify blob contents (not just existence)
**snapshot purge**: Remove old snapshots based on criteria
-* `--keep-latest`: Keep only the most recent snapshot
+* `--keep-latest`: Keep the most recent snapshot per snapshot name
* `--older-than`: Remove snapshots older than duration (e.g., 30d, 6mo, 1y)
+* `--name`: Filter purge to a specific snapshot name
* `--force`: Skip confirmation prompt
**snapshot remove**: Remove a specific snapshot
@@ -207,6 +209,53 @@ vaultik [--config <path>] store info
---
## daemon mode
When `--daemon` is passed to `snapshot create`, vaultik runs as a
long-running process that continuously monitors configured directories for
changes and creates backups automatically.
```sh
vaultik --config /etc/vaultik.yaml snapshot create --daemon
```
### how it works
1. **Initial backup**: On startup, a full backup of all configured snapshots
runs immediately.
2. **Filesystem watching**: All configured snapshot paths are monitored for
file changes using OS-native filesystem notifications (inotify on Linux,
FSEvents on macOS, ReadDirectoryChangesW on Windows) via the
[fsnotify](https://github.com/fsnotify/fsnotify) library.
3. **Periodic backups**: At each `backup_interval` tick, if filesystem
changes have been detected and `min_time_between_run` has elapsed since
the last backup, a backup runs for only the affected snapshots.
4. **Full scans**: At each `full_scan_interval` tick, a full backup of all
snapshots runs regardless of detected changes. This catches any changes
that filesystem notifications may have missed.
5. **Graceful shutdown**: On SIGTERM or SIGINT, the daemon completes any
in-progress backup before exiting.
### configuration
These config fields control daemon behavior:
```yaml
backup_interval: 1h # How often to check for changes and run backups
full_scan_interval: 24h # How often to do a complete scan of all paths
min_time_between_run: 15m # Minimum gap between consecutive backup runs
```
### notes
* New directories created under watched paths are automatically picked up.
* The daemon uses the same `CreateSnapshot` logic as one-shot mode — each
backup run is a standard incremental snapshot.
* The `--prune`, `--cron`, and `--skip-errors` flags work in daemon mode
and apply to each individual backup run.
---
## architecture
### s3 bucket layout

28
TODO.md
View File

@@ -106,23 +106,21 @@ User must have rclone configured separately (via `rclone config`).
---
-## Post-1.0 (Daemon Mode)
-1. Implement inotify file watcher for Linux
-   - Watch source directories for changes
-   - Track dirty paths in memory
-1. Implement FSEvents watcher for macOS
-   - Watch source directories for changes
-   - Track dirty paths in memory
-1. Implement backup scheduler in daemon mode
-   - Respect backup_interval config
-   - Trigger backup when dirty paths exist and interval elapsed
-   - Implement full_scan_interval for periodic full scans
-1. Add proper signal handling for daemon
-   - Graceful shutdown on SIGTERM/SIGINT
-   - Complete in-progress backup before exit
-1. Write tests for daemon mode
+## Daemon Mode (Complete)
+1. [x] Implement cross-platform filesystem watcher (via fsnotify)
+   - Watches source directories for changes
+   - Tracks dirty paths in memory
+   - Automatically watches new directories
+1. [x] Implement backup scheduler in daemon mode
+   - Respects backup_interval config
+   - Triggers backup when dirty paths exist and interval elapsed
+   - Implements full_scan_interval for periodic full scans
+   - Respects min_time_between_run to prevent excessive runs
+1. [x] Add proper signal handling for daemon
+   - Graceful shutdown on SIGTERM/SIGINT
+   - Completes in-progress backup before exit
+1. [x] Write tests for daemon mode

View File

@@ -17,7 +17,6 @@ Stores metadata about files in the filesystem being backed up.
- `id` (TEXT PRIMARY KEY) - UUID for the file record
- `path` (TEXT NOT NULL UNIQUE) - Absolute file path
- `mtime` (INTEGER NOT NULL) - Modification time as Unix timestamp
-- `ctime` (INTEGER NOT NULL) - Change time as Unix timestamp
- `size` (INTEGER NOT NULL) - File size in bytes
- `mode` (INTEGER NOT NULL) - Unix file permissions and type
- `uid` (INTEGER NOT NULL) - User ID of file owner

3
go.mod
View File

@@ -13,6 +13,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/s3 v1.90.0
github.com/aws/smithy-go v1.23.2
github.com/dustin/go-humanize v1.0.1
+github.com/fsnotify/fsnotify v1.9.0
github.com/gobwas/glob v0.2.3
github.com/google/uuid v1.6.0
github.com/johannesboyne/gofakes3 v0.0.0-20250603205740-ed9094be7668
@@ -24,6 +25,7 @@ require (
github.com/spf13/cobra v1.10.1
github.com/stretchr/testify v1.11.1
go.uber.org/fx v1.24.0
+golang.org/x/sync v0.18.0
golang.org/x/term v0.37.0
gopkg.in/yaml.v3 v3.0.1
modernc.org/sqlite v1.38.0
@@ -266,7 +268,6 @@ require (
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
-golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/time v0.14.0 // indirect

4
go.sum
View File

@@ -286,8 +286,8 @@ github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
-github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
-github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
+github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
+github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/gabriel-vasile/mimetype v1.4.11 h1:AQvxbp830wPhHTqc1u7nzoLT+ZFxGY7emj5DR5DYFik=

View File

@@ -11,16 +11,9 @@ import (
"go.uber.org/fx" "go.uber.org/fx"
) )
// PurgeOptions contains options for the purge command
type PurgeOptions struct {
KeepLatest bool
OlderThan string
Force bool
}
// NewPurgeCommand creates the purge command // NewPurgeCommand creates the purge command
func NewPurgeCommand() *cobra.Command { func NewPurgeCommand() *cobra.Command {
opts := &PurgeOptions{} opts := &vaultik.SnapshotPurgeOptions{}
cmd := &cobra.Command{ cmd := &cobra.Command{
Use: "purge", Use: "purge",
@@ -28,8 +21,15 @@ func NewPurgeCommand() *cobra.Command {
Long: `Removes snapshots based on age or count criteria. Long: `Removes snapshots based on age or count criteria.
This command allows you to: This command allows you to:
- Keep only the latest snapshot (--keep-latest) - Keep only the latest snapshot per name (--keep-latest)
- Remove snapshots older than a specific duration (--older-than) - Remove snapshots older than a specific duration (--older-than)
- Filter to a specific snapshot name (--name)
When --keep-latest is used, retention is applied per snapshot name. For example,
if you have snapshots named "home" and "system", --keep-latest keeps the most
recent of each.
Use --name to restrict the purge to a single snapshot name.
Config is located at /etc/vaultik/config.yml by default, but can be overridden by Config is located at /etc/vaultik/config.yml by default, but can be overridden by
specifying a path using --config or by setting VAULTIK_CONFIG to a path.`, specifying a path using --config or by setting VAULTIK_CONFIG to a path.`,
@@ -66,7 +66,7 @@ specifying a path using --config or by setting VAULTIK_CONFIG to a path.`,
// Start the purge operation in a goroutine // Start the purge operation in a goroutine
go func() { go func() {
// Run the purge operation // Run the purge operation
if err := v.PurgeSnapshots(opts.KeepLatest, opts.OlderThan, opts.Force); err != nil { if err := v.PurgeSnapshotsWithOptions(opts); err != nil {
if err != context.Canceled { if err != context.Canceled {
log.Error("Purge operation failed", "error", err) log.Error("Purge operation failed", "error", err)
os.Exit(1) os.Exit(1)
@@ -92,9 +92,10 @@ specifying a path using --config or by setting VAULTIK_CONFIG to a path.`,
}, },
} }
cmd.Flags().BoolVar(&opts.KeepLatest, "keep-latest", false, "Keep only the latest snapshot") cmd.Flags().BoolVar(&opts.KeepLatest, "keep-latest", false, "Keep only the latest snapshot per name")
cmd.Flags().StringVar(&opts.OlderThan, "older-than", "", "Remove snapshots older than duration (e.g. 30d, 6m, 1y)") cmd.Flags().StringVar(&opts.OlderThan, "older-than", "", "Remove snapshots older than duration (e.g. 30d, 6m, 1y)")
cmd.Flags().BoolVar(&opts.Force, "force", false, "Skip confirmation prompts") cmd.Flags().BoolVar(&opts.Force, "force", false, "Skip confirmation prompts")
cmd.Flags().StringVar(&opts.Name, "name", "", "Filter purge to a specific snapshot name")
return cmd return cmd
} }

View File

@@ -167,21 +167,25 @@ func newSnapshotListCommand() *cobra.Command {
// newSnapshotPurgeCommand creates the 'snapshot purge' subcommand // newSnapshotPurgeCommand creates the 'snapshot purge' subcommand
func newSnapshotPurgeCommand() *cobra.Command { func newSnapshotPurgeCommand() *cobra.Command {
var keepLatest bool opts := &vaultik.SnapshotPurgeOptions{}
var olderThan string
var force bool
cmd := &cobra.Command{ cmd := &cobra.Command{
Use: "purge", Use: "purge",
Short: "Purge old snapshots", Short: "Purge old snapshots",
Long: "Removes snapshots based on age or count criteria", Long: `Removes snapshots based on age or count criteria.
When --keep-latest is used, retention is applied per snapshot name. For example,
if you have snapshots named "home" and "system", --keep-latest keeps the most
recent of each.
Use --name to restrict the purge to a single snapshot name.`,
Args: cobra.NoArgs, Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error { RunE: func(cmd *cobra.Command, args []string) error {
// Validate flags // Validate flags
if !keepLatest && olderThan == "" { if !opts.KeepLatest && opts.OlderThan == "" {
return fmt.Errorf("must specify either --keep-latest or --older-than") return fmt.Errorf("must specify either --keep-latest or --older-than")
} }
if keepLatest && olderThan != "" { if opts.KeepLatest && opts.OlderThan != "" {
return fmt.Errorf("cannot specify both --keep-latest and --older-than") return fmt.Errorf("cannot specify both --keep-latest and --older-than")
} }
@@ -205,7 +209,7 @@ func newSnapshotPurgeCommand() *cobra.Command {
lc.Append(fx.Hook{ lc.Append(fx.Hook{
OnStart: func(ctx context.Context) error { OnStart: func(ctx context.Context) error {
go func() { go func() {
if err := v.PurgeSnapshots(keepLatest, olderThan, force); err != nil { if err := v.PurgeSnapshotsWithOptions(opts); err != nil {
if err != context.Canceled { if err != context.Canceled {
log.Error("Failed to purge snapshots", "error", err) log.Error("Failed to purge snapshots", "error", err)
os.Exit(1) os.Exit(1)
@@ -228,9 +232,10 @@ func newSnapshotPurgeCommand() *cobra.Command {
}, },
} }
cmd.Flags().BoolVar(&keepLatest, "keep-latest", false, "Keep only the latest snapshot") cmd.Flags().BoolVar(&opts.KeepLatest, "keep-latest", false, "Keep only the latest snapshot per name")
cmd.Flags().StringVar(&olderThan, "older-than", "", "Remove snapshots older than duration (e.g., 30d, 6m, 1y)") cmd.Flags().StringVar(&opts.OlderThan, "older-than", "", "Remove snapshots older than duration (e.g., 30d, 6m, 1y)")
cmd.Flags().BoolVar(&force, "force", false, "Skip confirmation prompt") cmd.Flags().BoolVar(&opts.Force, "force", false, "Skip confirmation prompt")
cmd.Flags().StringVar(&opts.Name, "name", "", "Filter purge to a specific snapshot name")
return cmd return cmd
} }

View File

@@ -29,7 +29,6 @@ func TestCascadeDeleteDebug(t *testing.T) {
file := &File{ file := &File{
Path: "/cascade-test.txt", Path: "/cascade-test.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,

View File

@@ -22,7 +22,6 @@ func TestChunkFileRepository(t *testing.T) {
file1 := &File{ file1 := &File{
Path: "/file1.txt", Path: "/file1.txt",
MTime: testTime, MTime: testTime,
CTime: testTime,
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -37,7 +36,6 @@ func TestChunkFileRepository(t *testing.T) {
file2 := &File{ file2 := &File{
Path: "/file2.txt", Path: "/file2.txt",
MTime: testTime, MTime: testTime,
CTime: testTime,
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -138,9 +136,9 @@ func TestChunkFileRepositoryComplexDeduplication(t *testing.T) {
// Create test files // Create test files
testTime := time.Now().Truncate(time.Second) testTime := time.Now().Truncate(time.Second)
file1 := &File{Path: "/file1.txt", MTime: testTime, CTime: testTime, Size: 3072, Mode: 0644, UID: 1000, GID: 1000} file1 := &File{Path: "/file1.txt", MTime: testTime, Size: 3072, Mode: 0644, UID: 1000, GID: 1000}
file2 := &File{Path: "/file2.txt", MTime: testTime, CTime: testTime, Size: 3072, Mode: 0644, UID: 1000, GID: 1000} file2 := &File{Path: "/file2.txt", MTime: testTime, Size: 3072, Mode: 0644, UID: 1000, GID: 1000}
file3 := &File{Path: "/file3.txt", MTime: testTime, CTime: testTime, Size: 2048, Mode: 0644, UID: 1000, GID: 1000} file3 := &File{Path: "/file3.txt", MTime: testTime, Size: 2048, Mode: 0644, UID: 1000, GID: 1000}
if err := fileRepo.Create(ctx, nil, file1); err != nil { if err := fileRepo.Create(ctx, nil, file1); err != nil {
t.Fatalf("failed to create file1: %v", err) t.Fatalf("failed to create file1: %v", err)

View File

@@ -22,7 +22,6 @@ func TestFileChunkRepository(t *testing.T) {
file := &File{ file := &File{
Path: "/test/file.txt", Path: "/test/file.txt",
MTime: testTime, MTime: testTime,
CTime: testTime,
Size: 3072, Size: 3072,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -135,7 +134,6 @@ func TestFileChunkRepositoryMultipleFiles(t *testing.T) {
file := &File{ file := &File{
Path: types.FilePath(path), Path: types.FilePath(path),
MTime: testTime, MTime: testTime,
CTime: testTime,
Size: 2048, Size: 2048,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,

View File

@@ -25,12 +25,11 @@ func (r *FileRepository) Create(ctx context.Context, tx *sql.Tx, file *File) err
} }
query := ` query := `
INSERT INTO files (id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target) INSERT INTO files (id, path, source_path, mtime, size, mode, uid, gid, link_target)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(path) DO UPDATE SET ON CONFLICT(path) DO UPDATE SET
source_path = excluded.source_path, source_path = excluded.source_path,
mtime = excluded.mtime, mtime = excluded.mtime,
ctime = excluded.ctime,
size = excluded.size, size = excluded.size,
mode = excluded.mode, mode = excluded.mode,
uid = excluded.uid, uid = excluded.uid,
@@ -42,10 +41,10 @@ func (r *FileRepository) Create(ctx context.Context, tx *sql.Tx, file *File) err
var idStr string var idStr string
var err error var err error
if tx != nil { if tx != nil {
LogSQL("Execute", query, file.ID.String(), file.Path.String(), file.SourcePath.String(), file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget.String()) LogSQL("Execute", query, file.ID.String(), file.Path.String(), file.SourcePath.String(), file.MTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget.String())
err = tx.QueryRowContext(ctx, query, file.ID.String(), file.Path.String(), file.SourcePath.String(), file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget.String()).Scan(&idStr) err = tx.QueryRowContext(ctx, query, file.ID.String(), file.Path.String(), file.SourcePath.String(), file.MTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget.String()).Scan(&idStr)
} else { } else {
err = r.db.QueryRowWithLog(ctx, query, file.ID.String(), file.Path.String(), file.SourcePath.String(), file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget.String()).Scan(&idStr) err = r.db.QueryRowWithLog(ctx, query, file.ID.String(), file.Path.String(), file.SourcePath.String(), file.MTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget.String()).Scan(&idStr)
} }
if err != nil { if err != nil {
@@ -63,7 +62,7 @@ func (r *FileRepository) Create(ctx context.Context, tx *sql.Tx, file *File) err
func (r *FileRepository) GetByPath(ctx context.Context, path string) (*File, error) { func (r *FileRepository) GetByPath(ctx context.Context, path string) (*File, error) {
query := ` query := `
SELECT id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target SELECT id, path, source_path, mtime, size, mode, uid, gid, link_target
FROM files FROM files
WHERE path = ? WHERE path = ?
` `
@@ -82,7 +81,7 @@ func (r *FileRepository) GetByPath(ctx context.Context, path string) (*File, err
// GetByID retrieves a file by its UUID // GetByID retrieves a file by its UUID
func (r *FileRepository) GetByID(ctx context.Context, id types.FileID) (*File, error) { func (r *FileRepository) GetByID(ctx context.Context, id types.FileID) (*File, error) {
query := ` query := `
SELECT id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target SELECT id, path, source_path, mtime, size, mode, uid, gid, link_target
FROM files FROM files
WHERE id = ? WHERE id = ?
` `
@@ -100,7 +99,7 @@ func (r *FileRepository) GetByID(ctx context.Context, id types.FileID) (*File, e
func (r *FileRepository) GetByPathTx(ctx context.Context, tx *sql.Tx, path string) (*File, error) { func (r *FileRepository) GetByPathTx(ctx context.Context, tx *sql.Tx, path string) (*File, error) {
query := ` query := `
SELECT id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target SELECT id, path, source_path, mtime, size, mode, uid, gid, link_target
FROM files FROM files
WHERE path = ? WHERE path = ?
` `
@@ -123,7 +122,7 @@ func (r *FileRepository) GetByPathTx(ctx context.Context, tx *sql.Tx, path strin
func (r *FileRepository) scanFile(row *sql.Row) (*File, error) { func (r *FileRepository) scanFile(row *sql.Row) (*File, error) {
var file File var file File
var idStr, pathStr, sourcePathStr string var idStr, pathStr, sourcePathStr string
var mtimeUnix, ctimeUnix int64 var mtimeUnix int64
var linkTarget sql.NullString var linkTarget sql.NullString
err := row.Scan( err := row.Scan(
@@ -131,7 +130,6 @@ func (r *FileRepository) scanFile(row *sql.Row) (*File, error) {
&pathStr, &pathStr,
&sourcePathStr, &sourcePathStr,
&mtimeUnix, &mtimeUnix,
&ctimeUnix,
&file.Size, &file.Size,
&file.Mode, &file.Mode,
&file.UID, &file.UID,
@@ -149,7 +147,6 @@ func (r *FileRepository) scanFile(row *sql.Row) (*File, error) {
file.Path = types.FilePath(pathStr) file.Path = types.FilePath(pathStr)
file.SourcePath = types.SourcePath(sourcePathStr) file.SourcePath = types.SourcePath(sourcePathStr)
file.MTime = time.Unix(mtimeUnix, 0).UTC() file.MTime = time.Unix(mtimeUnix, 0).UTC()
file.CTime = time.Unix(ctimeUnix, 0).UTC()
if linkTarget.Valid { if linkTarget.Valid {
file.LinkTarget = types.FilePath(linkTarget.String) file.LinkTarget = types.FilePath(linkTarget.String)
} }
@@ -161,7 +158,7 @@ func (r *FileRepository) scanFile(row *sql.Row) (*File, error) {
func (r *FileRepository) scanFileRows(rows *sql.Rows) (*File, error) { func (r *FileRepository) scanFileRows(rows *sql.Rows) (*File, error) {
var file File var file File
var idStr, pathStr, sourcePathStr string var idStr, pathStr, sourcePathStr string
var mtimeUnix, ctimeUnix int64 var mtimeUnix int64
var linkTarget sql.NullString var linkTarget sql.NullString
err := rows.Scan( err := rows.Scan(
@@ -169,7 +166,6 @@ func (r *FileRepository) scanFileRows(rows *sql.Rows) (*File, error) {
&pathStr, &pathStr,
&sourcePathStr, &sourcePathStr,
&mtimeUnix, &mtimeUnix,
&ctimeUnix,
&file.Size, &file.Size,
&file.Mode, &file.Mode,
&file.UID, &file.UID,
@@ -187,7 +183,6 @@ func (r *FileRepository) scanFileRows(rows *sql.Rows) (*File, error) {
file.Path = types.FilePath(pathStr) file.Path = types.FilePath(pathStr)
file.SourcePath = types.SourcePath(sourcePathStr) file.SourcePath = types.SourcePath(sourcePathStr)
file.MTime = time.Unix(mtimeUnix, 0).UTC() file.MTime = time.Unix(mtimeUnix, 0).UTC()
file.CTime = time.Unix(ctimeUnix, 0).UTC()
if linkTarget.Valid { if linkTarget.Valid {
file.LinkTarget = types.FilePath(linkTarget.String) file.LinkTarget = types.FilePath(linkTarget.String)
} }
@@ -197,7 +192,7 @@ func (r *FileRepository) scanFileRows(rows *sql.Rows) (*File, error) {
func (r *FileRepository) ListModifiedSince(ctx context.Context, since time.Time) ([]*File, error) { func (r *FileRepository) ListModifiedSince(ctx context.Context, since time.Time) ([]*File, error) {
query := ` query := `
SELECT id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target SELECT id, path, source_path, mtime, size, mode, uid, gid, link_target
FROM files FROM files
WHERE mtime >= ? WHERE mtime >= ?
ORDER BY path ORDER BY path
@@ -258,7 +253,7 @@ func (r *FileRepository) DeleteByID(ctx context.Context, tx *sql.Tx, id types.Fi
func (r *FileRepository) ListByPrefix(ctx context.Context, prefix string) ([]*File, error) { func (r *FileRepository) ListByPrefix(ctx context.Context, prefix string) ([]*File, error) {
query := ` query := `
SELECT id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target SELECT id, path, source_path, mtime, size, mode, uid, gid, link_target
FROM files FROM files
WHERE path LIKE ? || '%' WHERE path LIKE ? || '%'
ORDER BY path ORDER BY path
@@ -285,7 +280,7 @@ func (r *FileRepository) ListByPrefix(ctx context.Context, prefix string) ([]*Fi
// ListAll returns all files in the database // ListAll returns all files in the database
func (r *FileRepository) ListAll(ctx context.Context) ([]*File, error) { func (r *FileRepository) ListAll(ctx context.Context) ([]*File, error) {
query := ` query := `
SELECT id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target SELECT id, path, source_path, mtime, size, mode, uid, gid, link_target
FROM files FROM files
ORDER BY path ORDER BY path
` `
@@ -315,7 +310,7 @@ func (r *FileRepository) CreateBatch(ctx context.Context, tx *sql.Tx, files []*F
return nil return nil
} }
// Each File has 10 values, so batch at 100 to be safe with SQLite's variable limit // Each File has 9 values, so batch at 100 to be safe with SQLite's variable limit
const batchSize = 100 const batchSize = 100
for i := 0; i < len(files); i += batchSize { for i := 0; i < len(files); i += batchSize {
@@ -325,19 +320,18 @@ func (r *FileRepository) CreateBatch(ctx context.Context, tx *sql.Tx, files []*F
} }
batch := files[i:end] batch := files[i:end]
query := `INSERT INTO files (id, path, source_path, mtime, ctime, size, mode, uid, gid, link_target) VALUES ` query := `INSERT INTO files (id, path, source_path, mtime, size, mode, uid, gid, link_target) VALUES `
args := make([]interface{}, 0, len(batch)*10) args := make([]interface{}, 0, len(batch)*9)
for j, f := range batch { for j, f := range batch {
if j > 0 { if j > 0 {
query += ", " query += ", "
} }
query += "(?, ?, ?, ?, ?, ?, ?, ?, ?, ?)" query += "(?, ?, ?, ?, ?, ?, ?, ?, ?)"
args = append(args, f.ID.String(), f.Path.String(), f.SourcePath.String(), f.MTime.Unix(), f.CTime.Unix(), f.Size, f.Mode, f.UID, f.GID, f.LinkTarget.String()) args = append(args, f.ID.String(), f.Path.String(), f.SourcePath.String(), f.MTime.Unix(), f.Size, f.Mode, f.UID, f.GID, f.LinkTarget.String())
} }
query += ` ON CONFLICT(path) DO UPDATE SET query += ` ON CONFLICT(path) DO UPDATE SET
source_path = excluded.source_path, source_path = excluded.source_path,
mtime = excluded.mtime, mtime = excluded.mtime,
ctime = excluded.ctime,
size = excluded.size, size = excluded.size,
mode = excluded.mode, mode = excluded.mode,
uid = excluded.uid, uid = excluded.uid,

View File

@@ -39,7 +39,6 @@ func TestFileRepository(t *testing.T) {
file := &File{ file := &File{
Path: "/test/file.txt", Path: "/test/file.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -124,7 +123,6 @@ func TestFileRepositorySymlink(t *testing.T) {
symlink := &File{ symlink := &File{
Path: "/test/link", Path: "/test/link",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 0, Size: 0,
Mode: uint32(0777 | os.ModeSymlink), Mode: uint32(0777 | os.ModeSymlink),
UID: 1000, UID: 1000,
@@ -161,7 +159,6 @@ func TestFileRepositoryTransaction(t *testing.T) {
file := &File{ file := &File{
Path: "/test/tx_file.txt", Path: "/test/tx_file.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,

View File

@@ -17,7 +17,6 @@ type File struct {
Path       types.FilePath   // Absolute path of the file
SourcePath types.SourcePath // The source directory this file came from (for restore path stripping)
MTime      time.Time
-CTime      time.Time
Size       int64
Mode       uint32
UID        uint32

View File

@@ -23,7 +23,6 @@ func TestRepositoriesTransaction(t *testing.T) {
file := &File{ file := &File{
Path: "/test/tx_file.txt", Path: "/test/tx_file.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -146,7 +145,6 @@ func TestRepositoriesTransactionRollback(t *testing.T) {
file := &File{ file := &File{
Path: "/test/rollback_file.txt", Path: "/test/rollback_file.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -202,7 +200,6 @@ func TestRepositoriesReadTransaction(t *testing.T) {
file := &File{ file := &File{
Path: "/test/read_file.txt", Path: "/test/read_file.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -226,7 +223,6 @@ func TestRepositoriesReadTransaction(t *testing.T) {
_ = repos.Files.Create(ctx, tx, &File{ _ = repos.Files.Create(ctx, tx, &File{
Path: "/test/should_fail.txt", Path: "/test/should_fail.txt",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 0, Size: 0,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,

View File

@@ -23,7 +23,6 @@ func TestFileRepositoryUUIDGeneration(t *testing.T) {
{ {
Path: "/file1.txt", Path: "/file1.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -32,7 +31,6 @@ func TestFileRepositoryUUIDGeneration(t *testing.T) {
{ {
Path: "/file2.txt", Path: "/file2.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 2048, Size: 2048,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -72,7 +70,6 @@ func TestFileRepositoryGetByID(t *testing.T) {
file := &File{ file := &File{
Path: "/test.txt", Path: "/test.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -120,7 +117,6 @@ func TestOrphanedFileCleanup(t *testing.T) {
file1 := &File{ file1 := &File{
Path: "/orphaned.txt", Path: "/orphaned.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -129,7 +125,6 @@ func TestOrphanedFileCleanup(t *testing.T) {
file2 := &File{ file2 := &File{
Path: "/referenced.txt", Path: "/referenced.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 2048, Size: 2048,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -218,7 +213,6 @@ func TestOrphanedChunkCleanup(t *testing.T) {
file := &File{ file := &File{
Path: "/test.txt", Path: "/test.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -348,7 +342,6 @@ func TestFileChunkRepositoryWithUUIDs(t *testing.T) {
file := &File{ file := &File{
Path: "/test.txt", Path: "/test.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 3072, Size: 3072,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -419,7 +412,6 @@ func TestChunkFileRepositoryWithUUIDs(t *testing.T) {
file1 := &File{ file1 := &File{
Path: "/file1.txt", Path: "/file1.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -428,7 +420,6 @@ func TestChunkFileRepositoryWithUUIDs(t *testing.T) {
file2 := &File{ file2 := &File{
Path: "/file2.txt", Path: "/file2.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -586,7 +577,6 @@ func TestComplexOrphanedDataScenario(t *testing.T) {
files[i] = &File{ files[i] = &File{
Path: types.FilePath(fmt.Sprintf("/file%d.txt", i)), Path: types.FilePath(fmt.Sprintf("/file%d.txt", i)),
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -678,7 +668,6 @@ func TestCascadeDelete(t *testing.T) {
file := &File{ file := &File{
Path: "/cascade-test.txt", Path: "/cascade-test.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -750,7 +739,6 @@ func TestTransactionIsolation(t *testing.T) {
file := &File{ file := &File{
Path: "/tx-test.txt", Path: "/tx-test.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -812,7 +800,6 @@ func TestConcurrentOrphanedCleanup(t *testing.T) {
file := &File{ file := &File{
Path: types.FilePath(fmt.Sprintf("/concurrent-%d.txt", i)), Path: types.FilePath(fmt.Sprintf("/concurrent-%d.txt", i)),
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,

View File

@@ -18,7 +18,6 @@ func TestOrphanedFileCleanupDebug(t *testing.T) {
file1 := &File{ file1 := &File{
Path: "/orphaned.txt", Path: "/orphaned.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -27,7 +26,6 @@ func TestOrphanedFileCleanupDebug(t *testing.T) {
file2 := &File{ file2 := &File{
Path: "/referenced.txt", Path: "/referenced.txt",
MTime: time.Now().Truncate(time.Second), MTime: time.Now().Truncate(time.Second),
CTime: time.Now().Truncate(time.Second),
Size: 2048, Size: 2048,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,

View File

@@ -29,7 +29,6 @@ func TestFileRepositoryEdgeCases(t *testing.T) {
file: &File{ file: &File{
Path: "", Path: "",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -42,7 +41,6 @@ func TestFileRepositoryEdgeCases(t *testing.T) {
file: &File{ file: &File{
Path: types.FilePath("/" + strings.Repeat("a", 4096)), Path: types.FilePath("/" + strings.Repeat("a", 4096)),
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -55,7 +53,6 @@ func TestFileRepositoryEdgeCases(t *testing.T) {
file: &File{ file: &File{
Path: "/test/file with spaces and 特殊文字.txt", Path: "/test/file with spaces and 特殊文字.txt",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -68,7 +65,6 @@ func TestFileRepositoryEdgeCases(t *testing.T) {
file: &File{ file: &File{
Path: "/empty.txt", Path: "/empty.txt",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 0, Size: 0,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -81,7 +77,6 @@ func TestFileRepositoryEdgeCases(t *testing.T) {
file: &File{ file: &File{
Path: "/link", Path: "/link",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 0, Size: 0,
Mode: 0777 | 0120000, // symlink mode Mode: 0777 | 0120000, // symlink mode
UID: 1000, UID: 1000,
@@ -123,7 +118,6 @@ func TestDuplicateHandling(t *testing.T) {
file1 := &File{ file1 := &File{
Path: "/duplicate.txt", Path: "/duplicate.txt",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -132,7 +126,6 @@ func TestDuplicateHandling(t *testing.T) {
file2 := &File{ file2 := &File{
Path: "/duplicate.txt", // Same path Path: "/duplicate.txt", // Same path
MTime: time.Now().Add(time.Hour), MTime: time.Now().Add(time.Hour),
CTime: time.Now().Add(time.Hour),
Size: 2048, Size: 2048,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -192,7 +185,6 @@ func TestDuplicateHandling(t *testing.T) {
file := &File{ file := &File{
Path: "/test-dup-fc.txt", Path: "/test-dup-fc.txt",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -244,7 +236,6 @@ func TestNullHandling(t *testing.T) {
file := &File{ file := &File{
Path: "/regular.txt", Path: "/regular.txt",
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -349,7 +340,6 @@ func TestLargeDatasets(t *testing.T) {
file := &File{ file := &File{
Path: types.FilePath(fmt.Sprintf("/large/file%05d.txt", i)), Path: types.FilePath(fmt.Sprintf("/large/file%05d.txt", i)),
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: int64(i * 1024), Size: int64(i * 1024),
Mode: 0644, Mode: 0644,
UID: uint32(1000 + (i % 10)), UID: uint32(1000 + (i % 10)),
@@ -474,7 +464,6 @@ func TestQueryInjection(t *testing.T) {
file := &File{ file := &File{
Path: types.FilePath(injection), Path: types.FilePath(injection),
MTime: time.Now(), MTime: time.Now(),
CTime: time.Now(),
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,
@@ -513,7 +502,6 @@ func TestTimezoneHandling(t *testing.T) {
file := &File{ file := &File{
Path: "/timezone-test.txt", Path: "/timezone-test.txt",
MTime: nyTime, MTime: nyTime,
CTime: nyTime,
Size: 1024, Size: 1024,
Mode: 0644, Mode: 0644,
UID: 1000, UID: 1000,

View File

@@ -8,7 +8,6 @@ CREATE TABLE IF NOT EXISTS files (
path TEXT NOT NULL UNIQUE,
source_path TEXT NOT NULL DEFAULT '', -- The source directory this file came from (for restore path stripping)
mtime INTEGER NOT NULL,
-ctime INTEGER NOT NULL,
size INTEGER NOT NULL,
mode INTEGER NOT NULL,
uid INTEGER NOT NULL,
@@ -103,7 +102,7 @@ CREATE TABLE IF NOT EXISTS snapshot_files (
file_id TEXT NOT NULL,
PRIMARY KEY (snapshot_id, file_id),
FOREIGN KEY (snapshot_id) REFERENCES snapshots(id) ON DELETE CASCADE,
-FOREIGN KEY (file_id) REFERENCES files(id)
+FOREIGN KEY (file_id) REFERENCES files(id) ON DELETE CASCADE
);
-- Index for efficient file lookups (used in orphan detection)
@@ -116,7 +115,7 @@ CREATE TABLE IF NOT EXISTS snapshot_blobs (
blob_hash TEXT NOT NULL,
PRIMARY KEY (snapshot_id, blob_id),
FOREIGN KEY (snapshot_id) REFERENCES snapshots(id) ON DELETE CASCADE,
-FOREIGN KEY (blob_id) REFERENCES blobs(id)
+FOREIGN KEY (blob_id) REFERENCES blobs(id) ON DELETE CASCADE
);
-- Index for efficient blob lookups (used in orphan detection)
@@ -130,7 +129,7 @@ CREATE TABLE IF NOT EXISTS uploads (
size INTEGER NOT NULL,
duration_ms INTEGER NOT NULL,
FOREIGN KEY (blob_hash) REFERENCES blobs(blob_hash),
-FOREIGN KEY (snapshot_id) REFERENCES snapshots(id)
+FOREIGN KEY (snapshot_id) REFERENCES snapshots(id) ON DELETE CASCADE
);
-- Index for efficient snapshot lookups

View File

@@ -345,7 +345,6 @@ func (b *BackupEngine) Backup(ctx context.Context, fsys fs.FS, root string) (str
 Size: info.Size(),
 Mode: uint32(info.Mode()),
 MTime: info.ModTime(),
-CTime: info.ModTime(), // Use mtime as ctime for test
 UID: 1000, // Default UID for test
 GID: 1000, // Default GID for test
 }

View File

@@ -785,7 +785,6 @@ func (s *Scanner) checkFileInMemory(path string, info os.FileInfo, knownFiles ma
 Path: types.FilePath(path),
 SourcePath: types.SourcePath(s.currentSourcePath), // Store source directory for restore path stripping
 MTime: info.ModTime(),
-CTime: info.ModTime(), // afero doesn't provide ctime
 Size: info.Size(),
 Mode: uint32(info.Mode()),
 UID: uid,

View File

@@ -0,0 +1,93 @@
package vaultik
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"filippo.io/age"
"git.eeqj.de/sneak/vaultik/internal/blobgen"
)
// hashVerifyReader wraps a blobgen.Reader and verifies the double-SHA-256 hash
// of decrypted plaintext when Close is called. It reuses the hash that
// blobgen.Reader already computes internally via its TeeReader, avoiding
// redundant SHA-256 computation.
type hashVerifyReader struct {
reader *blobgen.Reader // underlying decrypted blob reader (has internal hasher)
fetcher io.ReadCloser // raw fetched stream (closed on Close)
blobHash string // expected double-SHA-256 hex
done bool // EOF reached
}
func (h *hashVerifyReader) Read(p []byte) (int, error) {
n, err := h.reader.Read(p)
if err == io.EOF {
h.done = true
}
return n, err
}
// Close verifies the hash (if the stream was fully read) and closes underlying readers.
func (h *hashVerifyReader) Close() error {
readerErr := h.reader.Close()
fetcherErr := h.fetcher.Close()
if h.done {
firstHash := h.reader.Sum256()
secondHasher := sha256.New()
secondHasher.Write(firstHash)
actualHashHex := hex.EncodeToString(secondHasher.Sum(nil))
if actualHashHex != h.blobHash {
return fmt.Errorf("blob hash mismatch: expected %s, got %s", h.blobHash[:16], actualHashHex[:16])
}
}
if readerErr != nil {
return readerErr
}
return fetcherErr
}
// FetchAndDecryptBlob downloads a blob, decrypts and decompresses it, and
// returns a streaming reader that computes the double-SHA-256 hash on the fly.
// The hash is verified when the returned reader is closed (after fully reading).
// This avoids buffering the entire blob in memory.
func (v *Vaultik) FetchAndDecryptBlob(ctx context.Context, blobHash string, expectedSize int64, identity age.Identity) (io.ReadCloser, error) {
rc, _, err := v.FetchBlob(ctx, blobHash, expectedSize)
if err != nil {
return nil, err
}
reader, err := blobgen.NewReader(rc, identity)
if err != nil {
_ = rc.Close()
return nil, fmt.Errorf("creating blob reader: %w", err)
}
return &hashVerifyReader{
reader: reader,
fetcher: rc,
blobHash: blobHash,
}, nil
}
// FetchBlob downloads a blob and returns a reader for the encrypted data.
func (v *Vaultik) FetchBlob(ctx context.Context, blobHash string, expectedSize int64) (io.ReadCloser, int64, error) {
blobPath := fmt.Sprintf("blobs/%s/%s/%s", blobHash[:2], blobHash[2:4], blobHash)
rc, err := v.Storage.Get(ctx, blobPath)
if err != nil {
return nil, 0, fmt.Errorf("downloading blob %s: %w", blobHash[:16], err)
}
info, err := v.Storage.Stat(ctx, blobPath)
if err != nil {
_ = rc.Close()
return nil, 0, fmt.Errorf("stat blob %s: %w", blobHash[:16], err)
}
return rc, info.Size, nil
}
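The verify-on-close contract above has a caller-side implication worth spelling out: the stream must be read to EOF before Close for the double-SHA-256 check to run, and closing after a partial read skips verification by design. A minimal sketch of the intended call pattern, assuming the vaultik package context and the imports already used by this file (readVerifiedBlob is a hypothetical helper, not part of the change; the updated downloadBlob in restore.go below does essentially the same thing):

// readVerifiedBlob illustrates the intended use of FetchAndDecryptBlob:
// read the stream to EOF, then Close to trigger hash verification.
func readVerifiedBlob(ctx context.Context, v *Vaultik, blobHash string, size int64, id age.Identity) ([]byte, error) {
    rc, err := v.FetchAndDecryptBlob(ctx, blobHash, size, id)
    if err != nil {
        return nil, err
    }
    data, err := io.ReadAll(rc) // must reach EOF so Close can check the hash
    if err != nil {
        _ = rc.Close()
        return nil, err
    }
    if err := rc.Close(); err != nil { // a hash mismatch surfaces here
        return nil, err
    }
    return data, nil
}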

View File

@@ -0,0 +1,100 @@
package vaultik_test
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"io"
"strings"
"testing"
"filippo.io/age"
"git.eeqj.de/sneak/vaultik/internal/blobgen"
"git.eeqj.de/sneak/vaultik/internal/vaultik"
)
// TestFetchAndDecryptBlobVerifiesHash verifies that FetchAndDecryptBlob checks
// the double-SHA-256 hash of the decrypted plaintext against the expected blob hash.
func TestFetchAndDecryptBlobVerifiesHash(t *testing.T) {
identity, err := age.GenerateX25519Identity()
if err != nil {
t.Fatalf("generating identity: %v", err)
}
// Create test data and encrypt it using blobgen.Writer
plaintext := []byte("hello world test data for blob hash verification")
var encBuf bytes.Buffer
writer, err := blobgen.NewWriter(&encBuf, 1, []string{identity.Recipient().String()})
if err != nil {
t.Fatalf("creating blobgen writer: %v", err)
}
if _, err := writer.Write(plaintext); err != nil {
t.Fatalf("writing plaintext: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("closing writer: %v", err)
}
encryptedData := encBuf.Bytes()
// Compute correct double-SHA-256 hash of the plaintext (matches blobgen.Writer.Sum256)
firstHash := sha256.Sum256(plaintext)
secondHash := sha256.Sum256(firstHash[:])
correctHash := hex.EncodeToString(secondHash[:])
// Verify our hash matches what blobgen.Writer produces
writerHash := hex.EncodeToString(writer.Sum256())
if correctHash != writerHash {
t.Fatalf("hash computation mismatch: manual=%s, writer=%s", correctHash, writerHash)
}
// Set up mock storage with the blob at the correct path
mockStorage := NewMockStorer()
blobPath := "blobs/" + correctHash[:2] + "/" + correctHash[2:4] + "/" + correctHash
mockStorage.mu.Lock()
mockStorage.data[blobPath] = encryptedData
mockStorage.mu.Unlock()
tv := vaultik.NewForTesting(mockStorage)
ctx := context.Background()
t.Run("correct hash succeeds", func(t *testing.T) {
rc, err := tv.FetchAndDecryptBlob(ctx, correctHash, int64(len(encryptedData)), identity)
if err != nil {
t.Fatalf("expected success, got error: %v", err)
}
data, err := io.ReadAll(rc)
if err != nil {
t.Fatalf("reading stream: %v", err)
}
if err := rc.Close(); err != nil {
t.Fatalf("close (hash verification) failed: %v", err)
}
if !bytes.Equal(data, plaintext) {
t.Fatalf("decrypted data mismatch: got %q, want %q", data, plaintext)
}
})
t.Run("wrong hash fails", func(t *testing.T) {
// Use a fake hash that doesn't match the actual plaintext
fakeHash := strings.Repeat("ab", 32) // 64 hex chars
fakePath := "blobs/" + fakeHash[:2] + "/" + fakeHash[2:4] + "/" + fakeHash
mockStorage.mu.Lock()
mockStorage.data[fakePath] = encryptedData
mockStorage.mu.Unlock()
rc, err := tv.FetchAndDecryptBlob(ctx, fakeHash, int64(len(encryptedData)), identity)
if err != nil {
t.Fatalf("unexpected error opening stream: %v", err)
}
// Read all data — hash is verified on Close
_, _ = io.ReadAll(rc)
err = rc.Close()
if err == nil {
t.Fatal("expected error for mismatched hash, got nil")
}
if !strings.Contains(err.Error(), "hash mismatch") {
t.Fatalf("expected hash mismatch error, got: %v", err)
}
})
}

View File

@@ -1,55 +0,0 @@
package vaultik
import (
"context"
"fmt"
"io"
"filippo.io/age"
"git.eeqj.de/sneak/vaultik/internal/blobgen"
)
// FetchAndDecryptBlobResult holds the result of fetching and decrypting a blob.
type FetchAndDecryptBlobResult struct {
Data []byte
}
// FetchAndDecryptBlob downloads a blob, decrypts it, and returns the plaintext data.
func (v *Vaultik) FetchAndDecryptBlob(ctx context.Context, blobHash string, expectedSize int64, identity age.Identity) (*FetchAndDecryptBlobResult, error) {
rc, _, err := v.FetchBlob(ctx, blobHash, expectedSize)
if err != nil {
return nil, err
}
defer func() { _ = rc.Close() }()
reader, err := blobgen.NewReader(rc, identity)
if err != nil {
return nil, fmt.Errorf("creating blob reader: %w", err)
}
defer func() { _ = reader.Close() }()
data, err := io.ReadAll(reader)
if err != nil {
return nil, fmt.Errorf("reading blob data: %w", err)
}
return &FetchAndDecryptBlobResult{Data: data}, nil
}
// FetchBlob downloads a blob and returns a reader for the encrypted data.
func (v *Vaultik) FetchBlob(ctx context.Context, blobHash string, expectedSize int64) (io.ReadCloser, int64, error) {
blobPath := fmt.Sprintf("blobs/%s/%s/%s", blobHash[:2], blobHash[2:4], blobHash)
rc, err := v.Storage.Get(ctx, blobPath)
if err != nil {
return nil, 0, fmt.Errorf("downloading blob %s: %w", blobHash[:16], err)
}
info, err := v.Storage.Stat(ctx, blobPath)
if err != nil {
_ = rc.Close()
return nil, 0, fmt.Errorf("stat blob %s: %w", blobHash[:16], err)
}
return rc, info.Size, nil
}

internal/vaultik/daemon.go (new file, 434 lines)
View File

@@ -0,0 +1,434 @@
package vaultik
import (
"context"
"fmt"
"os"
"os/signal"
"path/filepath"
"strings"
"sync"
"syscall"
"time"
"git.eeqj.de/sneak/vaultik/internal/log"
"github.com/fsnotify/fsnotify"
)
// daemonMinBackupInterval is the absolute minimum time allowed between backup runs,
// regardless of config, to prevent runaway backup loops.
const daemonMinBackupInterval = 1 * time.Minute
// daemonShutdownTimeout is the maximum time to wait for an in-progress backup
// to complete during graceful shutdown before force-exiting.
const daemonShutdownTimeout = 5 * time.Minute
// RunDaemon runs vaultik in daemon mode: it watches configured directories for
// changes using filesystem notifications, runs periodic backups at the configured
// interval, and performs full scans at the full_scan_interval. It handles
// SIGTERM/SIGINT for graceful shutdown, completing any in-progress backup before
// exiting.
func (v *Vaultik) RunDaemon(opts *SnapshotCreateOptions) error {
backupInterval := v.Config.BackupInterval
if backupInterval < daemonMinBackupInterval {
backupInterval = daemonMinBackupInterval
}
minTimeBetween := v.Config.MinTimeBetweenRun
if minTimeBetween < daemonMinBackupInterval {
minTimeBetween = daemonMinBackupInterval
}
fullScanInterval := v.Config.FullScanInterval
if fullScanInterval <= 0 {
fullScanInterval = 24 * time.Hour
}
log.Info("Starting daemon mode",
"backup_interval", backupInterval,
"min_time_between_run", minTimeBetween,
"full_scan_interval", fullScanInterval,
)
v.printfStdout("Daemon mode started\n")
v.printfStdout(" Backup interval: %s\n", backupInterval)
v.printfStdout(" Min time between: %s\n", minTimeBetween)
v.printfStdout(" Full scan interval: %s\n", fullScanInterval)
// Create a daemon-scoped context that we cancel on signal.
ctx, cancel := context.WithCancel(v.ctx)
defer cancel()
// Set up signal handling for graceful shutdown.
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
// Tracker for filesystem change events.
tracker := newChangeTracker()
// Start the filesystem watcher.
watcher, err := v.startWatcher(ctx, tracker)
if err != nil {
return fmt.Errorf("starting filesystem watcher: %w", err)
}
defer func() { _ = watcher.Close() }()
// Timers
backupTicker := time.NewTicker(backupInterval)
defer backupTicker.Stop()
fullScanTicker := time.NewTicker(fullScanInterval)
defer fullScanTicker.Stop()
var lastBackupTime time.Time
backupRunning := make(chan struct{}, 1) // semaphore: 1 = backup in progress
// Run an initial full backup immediately on startup.
log.Info("Running initial backup on daemon startup")
v.printfStdout("Running initial backup...\n")
if err := v.runDaemonBackup(ctx, opts, tracker, false); err != nil {
if ctx.Err() != nil {
return nil // context cancelled, shutting down
}
log.Error("Initial backup failed", "error", err)
v.printfStderr("Initial backup failed: %v\n", err)
// Continue running — next scheduled backup may succeed.
} else {
lastBackupTime = time.Now()
tracker.reset()
}
v.printfStdout("Watching for changes...\n")
for {
select {
case <-ctx.Done():
log.Info("Daemon context cancelled, shutting down")
return nil
case sig := <-sigCh:
log.Info("Received signal, initiating graceful shutdown", "signal", sig)
v.printfStdout("\nReceived %s, shutting down...\n", sig)
cancel()
// Wait for any in-progress backup to finish.
select {
case backupRunning <- struct{}{}:
// No backup running, we can exit immediately.
<-backupRunning
default:
// Backup is running, wait for it to complete.
v.printfStdout("Waiting for in-progress backup to complete...\n")
shutdownTimer := time.NewTimer(daemonShutdownTimeout)
select {
case backupRunning <- struct{}{}:
<-backupRunning
shutdownTimer.Stop()
case <-shutdownTimer.C:
log.Warn("Shutdown timeout exceeded, forcing exit")
v.printfStderr("Shutdown timeout exceeded, forcing exit\n")
}
}
return nil
case <-backupTicker.C:
// Periodic backup tick. Only run if there are changes and enough
// time has elapsed since the last run.
if !tracker.hasChanges() {
log.Debug("Backup tick: no changes detected, skipping")
continue
}
if time.Since(lastBackupTime) < minTimeBetween {
log.Debug("Backup tick: too soon since last backup",
"last_backup", lastBackupTime,
"min_interval", minTimeBetween,
)
continue
}
// Try to acquire the backup semaphore (non-blocking).
select {
case backupRunning <- struct{}{}:
default:
log.Debug("Backup tick: backup already in progress, skipping")
continue
}
log.Info("Running scheduled backup", "changes", tracker.changeCount())
v.printfStdout("Running scheduled backup (%d changes detected)...\n", tracker.changeCount())
if err := v.runDaemonBackup(ctx, opts, tracker, false); err != nil {
if ctx.Err() != nil {
<-backupRunning
return nil
}
log.Error("Scheduled backup failed", "error", err)
v.printfStderr("Scheduled backup failed: %v\n", err)
} else {
lastBackupTime = time.Now()
tracker.reset()
}
<-backupRunning
case <-fullScanTicker.C:
// Full scan — ignore whether changes were detected; do a complete scan.
if time.Since(lastBackupTime) < minTimeBetween {
log.Debug("Full scan tick: too soon since last backup, deferring")
continue
}
select {
case backupRunning <- struct{}{}:
default:
log.Debug("Full scan tick: backup already in progress, skipping")
continue
}
log.Info("Running full periodic scan")
v.printfStdout("Running full periodic scan...\n")
if err := v.runDaemonBackup(ctx, opts, tracker, true); err != nil {
if ctx.Err() != nil {
<-backupRunning
return nil
}
log.Error("Full scan backup failed", "error", err)
v.printfStderr("Full scan backup failed: %v\n", err)
} else {
lastBackupTime = time.Now()
tracker.reset()
}
<-backupRunning
}
}
}
// runDaemonBackup executes a single backup run within the daemon loop.
// If fullScan is true, all snapshots are processed regardless of tracked changes.
// Otherwise, only snapshots whose paths overlap with tracked changes are processed.
func (v *Vaultik) runDaemonBackup(ctx context.Context, opts *SnapshotCreateOptions, tracker *changeTracker, fullScan bool) error {
startTime := time.Now()
// Build a one-shot create options for this run.
runOpts := &SnapshotCreateOptions{
Cron: opts.Cron,
Prune: opts.Prune,
SkipErrors: opts.SkipErrors,
}
if !fullScan {
// Filter to only snapshots whose paths had changes.
changedPaths := tracker.changedPaths()
affected := v.snapshotsAffectedByChanges(changedPaths)
if len(affected) == 0 {
log.Debug("No snapshots affected by changes")
return nil
}
runOpts.Snapshots = affected
log.Info("Running incremental backup for affected snapshots", "snapshots", affected)
}
// fullScan: leave runOpts.Snapshots empty → CreateSnapshot processes all.
// Use a child context so cancellation propagates but we can still finish
// if the parent hasn't been cancelled.
childCtx, childCancel := context.WithCancel(ctx)
defer childCancel()
// Temporarily swap the Vaultik context.
origCtx := v.ctx
v.ctx = childCtx
defer func() { v.ctx = origCtx }()
if err := v.CreateSnapshot(runOpts); err != nil {
return fmt.Errorf("backup run failed: %w", err)
}
log.Info("Daemon backup complete", "duration", time.Since(startTime))
v.printfStdout("Backup complete in %s\n", formatDuration(time.Since(startTime)))
return nil
}
// snapshotsAffectedByChanges returns the names of configured snapshots whose
// paths overlap with any of the changed paths.
func (v *Vaultik) snapshotsAffectedByChanges(changedPaths []string) []string {
var affected []string
// nextSnapshot labels the outer loop so that a match advances straight to the next snapshot name.
nextSnapshot:
for _, snapName := range v.Config.SnapshotNames() {
snapCfg := v.Config.Snapshots[snapName]
for _, snapPath := range snapCfg.Paths {
absSnapPath, err := filepath.Abs(snapPath)
if err != nil {
absSnapPath = snapPath
}
for _, changed := range changedPaths {
if isSubpath(changed, absSnapPath) {
affected = append(affected, snapName)
continue nextSnapshot
}
}
}
}
return affected
}
// isSubpath returns true if child is under parent (or equal to it).
func isSubpath(child, parent string) bool {
// Normalize both paths.
child = filepath.Clean(child)
parent = filepath.Clean(parent)
if child == parent {
return true
}
// Ensure parent ends with a separator for prefix matching,
// unless parent is the root directory (which already ends with /).
prefix := parent
if !strings.HasSuffix(prefix, string(filepath.Separator)) {
prefix += string(filepath.Separator)
}
return strings.HasPrefix(child, prefix)
}
// startWatcher creates an fsnotify watcher and adds all configured snapshot paths.
// It spawns a goroutine that reads events and feeds the change tracker.
func (v *Vaultik) startWatcher(ctx context.Context, tracker *changeTracker) (*fsnotify.Watcher, error) {
watcher, err := fsnotify.NewWatcher()
if err != nil {
return nil, fmt.Errorf("creating watcher: %w", err)
}
// Collect unique absolute paths to watch.
watchPaths := make(map[string]struct{})
for _, snapName := range v.Config.SnapshotNames() {
snapCfg := v.Config.Snapshots[snapName]
for _, p := range snapCfg.Paths {
absPath, err := filepath.Abs(p)
if err != nil {
log.Warn("Failed to resolve absolute path for watch", "path", p, "error", err)
continue
}
watchPaths[absPath] = struct{}{}
}
}
// Add paths to watcher. Walk the top-level to add subdirectories
// since fsnotify doesn't recurse automatically.
for p := range watchPaths {
if err := v.addWatchRecursive(watcher, p); err != nil {
log.Warn("Failed to watch path", "path", p, "error", err)
// Non-fatal: the path might not exist yet.
}
}
// Spawn the event reader goroutine.
go v.watcherLoop(ctx, watcher, tracker)
return watcher, nil
}
// addWatchRecursive walks a directory tree and adds each directory to the watcher.
func (v *Vaultik) addWatchRecursive(watcher *fsnotify.Watcher, root string) error {
return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
if err != nil {
// Can't read — skip this subtree.
if info != nil && info.IsDir() {
return filepath.SkipDir
}
return nil
}
if info.IsDir() {
// Skip common directories that don't need watching.
base := filepath.Base(path)
if base == ".git" || base == "node_modules" || base == "__pycache__" {
return filepath.SkipDir
}
if err := watcher.Add(path); err != nil {
log.Debug("Failed to watch directory", "path", path, "error", err)
// Non-fatal: continue walking.
}
}
return nil
})
}
// watcherLoop reads filesystem events from the watcher and records them
// in the change tracker. It runs until the context is cancelled.
func (v *Vaultik) watcherLoop(ctx context.Context, watcher *fsnotify.Watcher, tracker *changeTracker) {
for {
select {
case <-ctx.Done():
return
case event, ok := <-watcher.Events:
if !ok {
return
}
// Only track write/create/remove/rename events.
if event.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove|fsnotify.Rename) != 0 {
tracker.recordChange(event.Name)
log.Debug("Filesystem change detected", "path", event.Name, "op", event.Op)
}
// If a new directory was created, watch it too.
if event.Op&fsnotify.Create != 0 {
if info, err := os.Stat(event.Name); err == nil && info.IsDir() {
if err := v.addWatchRecursive(watcher, event.Name); err != nil {
log.Debug("Failed to watch new directory", "path", event.Name, "error", err)
}
}
}
case err, ok := <-watcher.Errors:
if !ok {
return
}
log.Warn("Filesystem watcher error", "error", err)
}
}
}
// changeTracker records filesystem paths that have been modified since the
// last backup. It is safe for concurrent use.
type changeTracker struct {
mu sync.Mutex
changes map[string]time.Time // path → last change time
}
// newChangeTracker creates a new empty change tracker.
func newChangeTracker() *changeTracker {
return &changeTracker{
changes: make(map[string]time.Time),
}
}
// recordChange records that a path has been modified.
func (ct *changeTracker) recordChange(path string) {
ct.mu.Lock()
ct.changes[path] = time.Now()
ct.mu.Unlock()
}
// hasChanges returns true if any changes have been recorded.
func (ct *changeTracker) hasChanges() bool {
ct.mu.Lock()
defer ct.mu.Unlock()
return len(ct.changes) > 0
}
// changeCount returns the number of unique changed paths.
func (ct *changeTracker) changeCount() int {
ct.mu.Lock()
defer ct.mu.Unlock()
return len(ct.changes)
}
// changedPaths returns all changed paths.
func (ct *changeTracker) changedPaths() []string {
ct.mu.Lock()
defer ct.mu.Unlock()
paths := make([]string, 0, len(ct.changes))
for p := range ct.changes {
paths = append(paths, p)
}
return paths
}
// reset clears all recorded changes.
func (ct *changeTracker) reset() {
ct.mu.Lock()
ct.changes = make(map[string]time.Time)
ct.mu.Unlock()
}
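The scheduling logic in RunDaemon above leans on a small concurrency idiom: a one-slot buffered channel acts as a try-lock, so a tick that arrives while a backup is running is simply skipped, and shutdown acquires the same slot (bounded by a timer) to wait for an in-flight run. A standalone sketch of that pattern follows; all names and timings here are illustrative, none of it is vaultik code:

package main

import (
    "fmt"
    "time"
)

func main() {
    running := make(chan struct{}, 1) // one slot: empty = idle, full = run in progress

    tryRun := func(work func()) {
        select {
        case running <- struct{}{}: // acquired the slot
        default:
            fmt.Println("skipped: previous run still in progress")
            return
        }
        go func() {
            defer func() { <-running }() // release the slot when the run finishes
            work()
        }()
    }

    tryRun(func() { time.Sleep(300 * time.Millisecond); fmt.Println("run 1 done") })
    tryRun(func() { fmt.Println("run 2") }) // skipped: run 1 still holds the slot

    // Graceful shutdown: wait for the in-flight run, but give up after a timeout.
    shutdownTimer := time.NewTimer(2 * time.Second)
    select {
    case running <- struct{}{}: // slot is free again, so nothing is in flight
        <-running
        shutdownTimer.Stop()
        fmt.Println("shutdown: no work in flight")
    case <-shutdownTimer.C:
        fmt.Println("shutdown: timeout exceeded, forcing exit")
    }
}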

View File

@@ -0,0 +1,196 @@
package vaultik
import (
"bytes"
"context"
"os"
"path/filepath"
"testing"
"time"
"git.eeqj.de/sneak/vaultik/internal/config"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewChangeTracker(t *testing.T) {
ct := newChangeTracker()
require.NotNil(t, ct)
assert.False(t, ct.hasChanges())
assert.Equal(t, 0, ct.changeCount())
assert.Empty(t, ct.changedPaths())
}
func TestChangeTrackerRecordChange(t *testing.T) {
ct := newChangeTracker()
ct.recordChange("/home/user/file1.txt")
assert.True(t, ct.hasChanges())
assert.Equal(t, 1, ct.changeCount())
ct.recordChange("/home/user/file2.txt")
assert.Equal(t, 2, ct.changeCount())
// Duplicate path should update time but not increase count.
ct.recordChange("/home/user/file1.txt")
assert.Equal(t, 2, ct.changeCount())
paths := ct.changedPaths()
assert.Len(t, paths, 2)
assert.Contains(t, paths, "/home/user/file1.txt")
assert.Contains(t, paths, "/home/user/file2.txt")
}
func TestChangeTrackerReset(t *testing.T) {
ct := newChangeTracker()
ct.recordChange("/home/user/file1.txt")
ct.recordChange("/home/user/file2.txt")
assert.Equal(t, 2, ct.changeCount())
ct.reset()
assert.False(t, ct.hasChanges())
assert.Equal(t, 0, ct.changeCount())
assert.Empty(t, ct.changedPaths())
}
func TestChangeTrackerConcurrency(t *testing.T) {
ct := newChangeTracker()
done := make(chan struct{})
// Write from multiple goroutines simultaneously.
for i := 0; i < 10; i++ {
go func(n int) {
for j := 0; j < 100; j++ {
ct.recordChange("/path/" + string(rune('a'+n)))
}
done <- struct{}{}
}(i)
}
// Also read concurrently.
go func() {
for i := 0; i < 100; i++ {
_ = ct.hasChanges()
_ = ct.changeCount()
_ = ct.changedPaths()
}
done <- struct{}{}
}()
// Wait for all goroutines.
for i := 0; i < 11; i++ {
<-done
}
assert.True(t, ct.hasChanges())
assert.LessOrEqual(t, ct.changeCount(), 10) // 10 unique paths
}
func TestChangeTrackerRecordTimestamp(t *testing.T) {
ct := newChangeTracker()
before := time.Now()
ct.recordChange("/some/path")
after := time.Now()
ct.mu.Lock()
ts := ct.changes["/some/path"]
ct.mu.Unlock()
assert.False(t, ts.Before(before))
assert.False(t, ts.After(after))
}
func TestIsSubpath(t *testing.T) {
tests := []struct {
child string
parent string
expected bool
}{
{"/home/user/file.txt", "/home/user", true},
{"/home/user", "/home/user", true},
{"/home/user/deep/nested/file.txt", "/home/user", true},
{"/home/other/file.txt", "/home/user", false},
{"/home/username/file.txt", "/home/user", false}, // not a subpath, just prefix match
{"/etc/config", "/home/user", false},
{"/", "/", true},
{"/a", "/", true},
{"/a/b", "/a", true},
{"/ab", "/a", false},
}
for _, tt := range tests {
t.Run(tt.child+"_under_"+tt.parent, func(t *testing.T) {
result := isSubpath(tt.child, tt.parent)
assert.Equal(t, tt.expected, result)
})
}
}
func TestSnapshotsAffectedByChanges(t *testing.T) {
// We can't easily test this without a full Vaultik instance with config,
// but we can verify the helper function isSubpath which it depends on.
// The full integration is tested via the daemon integration test.
// Verify basic subpath logic used by snapshotsAffectedByChanges.
assert.True(t, isSubpath("/home/user/docs/report.txt", "/home/user"))
assert.False(t, isSubpath("/var/log/syslog", "/home/user"))
}
func TestDaemonConstants(t *testing.T) {
// Verify daemon constants are reasonable values.
assert.GreaterOrEqual(t, daemonMinBackupInterval, 1*time.Minute)
assert.GreaterOrEqual(t, daemonShutdownTimeout, 1*time.Minute)
}
func TestRunDaemon_CancelledContext(t *testing.T) {
// Create a temporary directory to use as a snapshot path.
tmpDir := t.TempDir()
// Write a file so the watched path is non-empty.
err := os.WriteFile(filepath.Join(tmpDir, "testfile.txt"), []byte("hello"), 0o644)
require.NoError(t, err)
// Build a minimal Vaultik with daemon-friendly config.
// RunDaemon will fail on the initial backup (no storage configured),
// but it should continue running. We cancel the context to verify
// graceful shutdown.
ctx, cancel := context.WithCancel(context.Background())
stdout := &bytes.Buffer{}
stderr := &bytes.Buffer{}
v := &Vaultik{
Config: &config.Config{
BackupInterval: 1 * time.Hour,
FullScanInterval: 24 * time.Hour,
MinTimeBetweenRun: 1 * time.Minute,
Snapshots: map[string]config.SnapshotConfig{
"test": {
Paths: []string{tmpDir},
},
},
},
ctx: ctx,
cancel: cancel,
Stdout: stdout,
Stderr: stderr,
}
// Cancel the context shortly after RunDaemon starts so the daemon
// loop exits via its ctx.Done() path.
go func() {
// Wait for the initial backup to fail (it will, since there's no
// storage backend), then cancel.
time.Sleep(200 * time.Millisecond)
cancel()
}()
err = v.RunDaemon(&SnapshotCreateOptions{})
// RunDaemon should return nil on context cancellation (graceful shutdown).
assert.NoError(t, err)
// Verify daemon printed startup messages.
output := stdout.String()
assert.Contains(t, output, "Daemon mode started")
}

View File

@@ -79,6 +79,22 @@ func parseSnapshotTimestamp(snapshotID string) (time.Time, error) {
 return timestamp.UTC(), nil
 }
+// parseSnapshotName extracts the snapshot name from a snapshot ID.
+// Format: hostname_snapshotname_timestamp — the middle part(s) between hostname
+// and the RFC3339 timestamp are the snapshot name (may contain underscores).
+// Returns the snapshot name, or empty string if the ID is malformed.
+func parseSnapshotName(snapshotID string) string {
+parts := strings.Split(snapshotID, "_")
+if len(parts) < 3 {
+// Format: hostname_timestamp — no snapshot name
+return ""
+}
+// Format: hostname_name_timestamp — middle parts are the name.
+// The last part is the RFC3339 timestamp, the first part is the hostname,
+// everything in between is the snapshot name (which may itself contain underscores).
+return strings.Join(parts[1:len(parts)-1], "_")
+}
 // parseDuration parses a duration string with support for days
 func parseDuration(s string) (time.Duration, error) {
 // Check for days suffix

View File

@@ -0,0 +1,76 @@
package vaultik
import (
"testing"
)
func TestParseSnapshotName(t *testing.T) {
tests := []struct {
name string
snapshotID string
want string
}{
{
name: "standard format with name",
snapshotID: "myhost_home_2026-01-12T14:41:15Z",
want: "home",
},
{
name: "standard format with different name",
snapshotID: "server1_system_2026-02-15T09:30:00Z",
want: "system",
},
{
name: "name with underscores",
snapshotID: "myhost_my_special_backup_2026-03-01T00:00:00Z",
want: "my_special_backup",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := parseSnapshotName(tt.snapshotID)
if got != tt.want {
t.Errorf("parseSnapshotName(%q) = %q, want %q", tt.snapshotID, got, tt.want)
}
})
}
}
func TestParseSnapshotTimestamp(t *testing.T) {
tests := []struct {
name string
snapshotID string
wantErr bool
}{
{
name: "valid with name",
snapshotID: "myhost_home_2026-01-12T14:41:15Z",
wantErr: false,
},
{
name: "valid without name",
snapshotID: "myhost_2026-01-12T14:41:15Z",
wantErr: false,
},
{
name: "invalid - single part",
snapshotID: "nounderscore",
wantErr: true,
},
{
name: "invalid - bad timestamp",
snapshotID: "myhost_home_notadate",
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := parseSnapshotTimestamp(tt.snapshotID)
if (err != nil) != tt.wantErr {
t.Errorf("parseSnapshotTimestamp(%q) error = %v, wantErr %v", tt.snapshotID, err, tt.wantErr)
}
})
}
}

View File

@@ -0,0 +1,256 @@
package vaultik_test
import (
"bytes"
"context"
"database/sql"
"strings"
"testing"
"time"
"git.eeqj.de/sneak/vaultik/internal/database"
"git.eeqj.de/sneak/vaultik/internal/log"
"git.eeqj.de/sneak/vaultik/internal/types"
"git.eeqj.de/sneak/vaultik/internal/vaultik"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// setupPurgeTest creates a Vaultik instance with an in-memory database and mock
// storage pre-populated with the given snapshot IDs. Each snapshot is marked as
// completed. Remote metadata stubs are created so syncWithRemote keeps them.
func setupPurgeTest(t *testing.T, snapshotIDs []string) *vaultik.Vaultik {
t.Helper()
log.Initialize(log.Config{})
ctx := context.Background()
db, err := database.New(ctx, ":memory:")
require.NoError(t, err)
t.Cleanup(func() { _ = db.Close() })
repos := database.NewRepositories(db)
mockStorage := NewMockStorer()
// Insert each snapshot into the DB and create remote metadata stubs.
// Use timestamps parsed from snapshot IDs for realistic ordering.
for _, id := range snapshotIDs {
// Parse timestamp from the snapshot ID
parts := strings.Split(id, "_")
timestampStr := parts[len(parts)-1]
startedAt, err := time.Parse(time.RFC3339, timestampStr)
require.NoError(t, err, "parsing timestamp from snapshot ID %q", id)
completedAt := startedAt.Add(5 * time.Minute)
snap := &database.Snapshot{
ID: types.SnapshotID(id),
Hostname: "testhost",
VaultikVersion: "test",
StartedAt: startedAt,
CompletedAt: &completedAt,
}
err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
return repos.Snapshots.Create(ctx, tx, snap)
})
require.NoError(t, err, "creating snapshot %s", id)
// Create remote metadata stub so syncWithRemote keeps it
metadataKey := "metadata/" + id + "/manifest.json.zst"
err = mockStorage.Put(ctx, metadataKey, strings.NewReader("stub"))
require.NoError(t, err)
}
stdout := &bytes.Buffer{}
stderr := &bytes.Buffer{}
stdin := &bytes.Buffer{}
v := &vaultik.Vaultik{
Storage: mockStorage,
Repositories: repos,
DB: db,
Stdout: stdout,
Stderr: stderr,
Stdin: stdin,
}
v.SetContext(ctx)
return v
}
// listRemainingSnapshots returns IDs of all completed snapshots in the database.
func listRemainingSnapshots(t *testing.T, v *vaultik.Vaultik) []string {
t.Helper()
ctx := context.Background()
dbSnaps, err := v.Repositories.Snapshots.ListRecent(ctx, 10000)
require.NoError(t, err)
var ids []string
for _, s := range dbSnaps {
if s.CompletedAt != nil {
ids = append(ids, s.ID.String())
}
}
return ids
}
func TestPurgeKeepLatest_PerName(t *testing.T) {
// Create snapshots for two different names: "home" and "system".
// With per-name --keep-latest, the latest of each should be kept.
snapshotIDs := []string{
"testhost_system_2026-01-01T00:00:00Z",
"testhost_home_2026-01-01T01:00:00Z",
"testhost_system_2026-01-01T02:00:00Z",
"testhost_home_2026-01-01T03:00:00Z",
"testhost_system_2026-01-01T04:00:00Z",
}
v := setupPurgeTest(t, snapshotIDs)
err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
KeepLatest: true,
Force: true,
})
require.NoError(t, err)
remaining := listRemainingSnapshots(t, v)
// Should keep the latest of each name
assert.Len(t, remaining, 2, "should keep exactly 2 snapshots (one per name)")
assert.Contains(t, remaining, "testhost_system_2026-01-01T04:00:00Z", "should keep latest system")
assert.Contains(t, remaining, "testhost_home_2026-01-01T03:00:00Z", "should keep latest home")
}
func TestPurgeKeepLatest_SingleName(t *testing.T) {
// All snapshots have the same name — keep-latest should keep exactly one.
snapshotIDs := []string{
"testhost_home_2026-01-01T00:00:00Z",
"testhost_home_2026-01-01T01:00:00Z",
"testhost_home_2026-01-01T02:00:00Z",
}
v := setupPurgeTest(t, snapshotIDs)
err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
KeepLatest: true,
Force: true,
})
require.NoError(t, err)
remaining := listRemainingSnapshots(t, v)
assert.Len(t, remaining, 1)
assert.Contains(t, remaining, "testhost_home_2026-01-01T02:00:00Z", "should keep the newest")
}
func TestPurgeKeepLatest_WithNameFilter(t *testing.T) {
// Use --name to filter purge to only "home" snapshots.
// "system" snapshots should be untouched.
snapshotIDs := []string{
"testhost_system_2026-01-01T00:00:00Z",
"testhost_home_2026-01-01T01:00:00Z",
"testhost_system_2026-01-01T02:00:00Z",
"testhost_home_2026-01-01T03:00:00Z",
"testhost_home_2026-01-01T04:00:00Z",
}
v := setupPurgeTest(t, snapshotIDs)
err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
KeepLatest: true,
Force: true,
Name: "home",
})
require.NoError(t, err)
remaining := listRemainingSnapshots(t, v)
// 2 system snapshots untouched + 1 latest home = 3
assert.Len(t, remaining, 3)
assert.Contains(t, remaining, "testhost_system_2026-01-01T00:00:00Z")
assert.Contains(t, remaining, "testhost_system_2026-01-01T02:00:00Z")
assert.Contains(t, remaining, "testhost_home_2026-01-01T04:00:00Z")
}
func TestPurgeKeepLatest_NoSnapshots(t *testing.T) {
v := setupPurgeTest(t, nil)
err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
KeepLatest: true,
Force: true,
})
require.NoError(t, err)
}
func TestPurgeKeepLatest_NameFilterNoMatch(t *testing.T) {
snapshotIDs := []string{
"testhost_system_2026-01-01T00:00:00Z",
"testhost_system_2026-01-01T01:00:00Z",
}
v := setupPurgeTest(t, snapshotIDs)
err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
KeepLatest: true,
Force: true,
Name: "nonexistent",
})
require.NoError(t, err)
// All snapshots should remain — the name filter matched nothing
remaining := listRemainingSnapshots(t, v)
assert.Len(t, remaining, 2)
}
func TestPurgeOlderThan_WithNameFilter(t *testing.T) {
// Snapshots with different names and timestamps.
// --older-than should apply only to the named subset when --name is used.
snapshotIDs := []string{
"testhost_system_2020-01-01T00:00:00Z",
"testhost_home_2020-01-01T00:00:00Z",
"testhost_system_2026-01-01T00:00:00Z",
"testhost_home_2026-01-01T00:00:00Z",
}
v := setupPurgeTest(t, snapshotIDs)
// Purge only "home" snapshots older than 365 days
err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
OlderThan: "365d",
Force: true,
Name: "home",
})
require.NoError(t, err)
remaining := listRemainingSnapshots(t, v)
// Old system stays (not filtered by name), old home deleted, recent ones stay
assert.Len(t, remaining, 3)
assert.Contains(t, remaining, "testhost_system_2020-01-01T00:00:00Z")
assert.Contains(t, remaining, "testhost_system_2026-01-01T00:00:00Z")
assert.Contains(t, remaining, "testhost_home_2026-01-01T00:00:00Z")
}
func TestPurgeKeepLatest_ThreeNames(t *testing.T) {
// Three different snapshot names with multiple snapshots each.
snapshotIDs := []string{
"testhost_home_2026-01-01T00:00:00Z",
"testhost_system_2026-01-01T01:00:00Z",
"testhost_media_2026-01-01T02:00:00Z",
"testhost_home_2026-01-01T03:00:00Z",
"testhost_system_2026-01-01T04:00:00Z",
"testhost_media_2026-01-01T05:00:00Z",
"testhost_home_2026-01-01T06:00:00Z",
}
v := setupPurgeTest(t, snapshotIDs)
err := v.PurgeSnapshotsWithOptions(&vaultik.SnapshotPurgeOptions{
KeepLatest: true,
Force: true,
})
require.NoError(t, err)
remaining := listRemainingSnapshots(t, v)
assert.Len(t, remaining, 3, "should keep one per name")
assert.Contains(t, remaining, "testhost_home_2026-01-01T06:00:00Z")
assert.Contains(t, remaining, "testhost_system_2026-01-01T04:00:00Z")
assert.Contains(t, remaining, "testhost_media_2026-01-01T05:00:00Z")
}

View File

@@ -558,11 +558,23 @@ func (v *Vaultik) restoreRegularFile(
 // downloadBlob downloads and decrypts a blob
 func (v *Vaultik) downloadBlob(ctx context.Context, blobHash string, expectedSize int64, identity age.Identity) ([]byte, error) {
-result, err := v.FetchAndDecryptBlob(ctx, blobHash, expectedSize, identity)
+rc, err := v.FetchAndDecryptBlob(ctx, blobHash, expectedSize, identity)
 if err != nil {
 return nil, err
 }
-return result.Data, nil
+data, err := io.ReadAll(rc)
+if err != nil {
+_ = rc.Close()
+return nil, fmt.Errorf("reading blob data: %w", err)
+}
+// Close triggers hash verification
+if err := rc.Close(); err != nil {
+return nil, err
+}
+return data, nil
 }
 // verifyRestoredFiles verifies that all restored files match their expected chunk hashes

View File

@@ -8,6 +8,7 @@ import (
 "regexp"
 "sort"
 "strings"
+"sync"
 "text/tabwriter"
 "time"
@@ -16,6 +17,7 @@ import (
 "git.eeqj.de/sneak/vaultik/internal/snapshot"
 "git.eeqj.de/sneak/vaultik/internal/types"
 "github.com/dustin/go-humanize"
+"golang.org/x/sync/errgroup"
 )
 // SnapshotCreateOptions contains options for the snapshot create command
@@ -56,9 +58,7 @@ func (v *Vaultik) CreateSnapshot(opts *SnapshotCreateOptions) error {
 }
 if opts.Daemon {
-log.Info("Running in daemon mode")
-// TODO: Implement daemon mode with inotify
-return fmt.Errorf("daemon mode not yet implemented")
+return v.RunDaemon(opts)
 }
 // Determine which snapshots to process
@@ -95,7 +95,10 @@ func (v *Vaultik) CreateSnapshot(opts *SnapshotCreateOptions) error {
 log.Info("Pruning enabled - deleting old snapshots and unreferenced blobs")
 v.printlnStdout("\nPruning old snapshots (keeping latest)...")
-if err := v.PurgeSnapshots(true, "", true); err != nil {
+if err := v.PurgeSnapshotsWithOptions(&SnapshotPurgeOptions{
+KeepLatest: true,
+Force: true,
+}); err != nil {
 return fmt.Errorf("prune: purging old snapshots: %w", err)
 }
@@ -419,7 +422,7 @@ func (v *Vaultik) listRemoteSnapshotIDs() (map[string]bool, error) {
 return remoteSnapshots, nil
 }
-// reconcileLocalWithRemote removes local snapshots not in remote and returns the surviving local map
+// reconcileLocalWithRemote builds a map of local snapshots keyed by ID for cross-referencing with remote
 func (v *Vaultik) reconcileLocalWithRemote(remoteSnapshots map[string]bool) (map[string]*database.Snapshot, error) {
 localSnapshots, err := v.Repositories.Snapshots.ListRecent(v.ctx, 10000)
 if err != nil {
@@ -431,19 +434,6 @@ func (v *Vaultik) reconcileLocalWithRemote(remoteSnapshots map[string]bool) (map
 localSnapshotMap[s.ID.String()] = s
 }
-for _, snap := range localSnapshots {
-snapshotIDStr := snap.ID.String()
-if !remoteSnapshots[snapshotIDStr] {
-log.Info("Removing local snapshot not found in remote", "snapshot_id", snap.ID)
-if err := v.deleteSnapshotFromLocalDB(snapshotIDStr); err != nil {
-log.Error("Failed to delete local snapshot", "snapshot_id", snap.ID, "error", err)
-} else {
-log.Info("Deleted local snapshot not found in remote", "snapshot_id", snap.ID)
-delete(localSnapshotMap, snapshotIDStr)
-}
-}
-}
 return localSnapshotMap, nil
 }
@@ -451,6 +441,9 @@ func (v *Vaultik) reconcileLocalWithRemote(remoteSnapshots map[string]bool) (map
 func (v *Vaultik) buildSnapshotInfoList(remoteSnapshots map[string]bool, localSnapshotMap map[string]*database.Snapshot) ([]SnapshotInfo, error) {
 snapshots := make([]SnapshotInfo, 0, len(remoteSnapshots))
+// remoteOnly collects snapshot IDs that need a manifest download.
+var remoteOnly []string
 for snapshotID := range remoteSnapshots {
 if localSnap, exists := localSnapshotMap[snapshotID]; exists && localSnap.CompletedAt != nil {
 totalSize, err := v.Repositories.Snapshots.GetSnapshotTotalCompressedSize(v.ctx, snapshotID)
@@ -471,16 +464,73 @@ func (v *Vaultik) buildSnapshotInfoList(remoteSnapshots map[string]bool, localSn
 continue
 }
-totalSize, err := v.getManifestSize(snapshotID)
-if err != nil {
-return nil, fmt.Errorf("failed to get manifest size for %s: %w", snapshotID, err)
-}
+// Pre-add with zero size; will be filled by concurrent downloads.
 snapshots = append(snapshots, SnapshotInfo{
 ID: types.SnapshotID(snapshotID),
 Timestamp: timestamp,
-CompressedSize: totalSize,
+CompressedSize: 0,
 })
+remoteOnly = append(remoteOnly, snapshotID)
 }
+// Download manifests concurrently for remote-only snapshots.
+if len(remoteOnly) > 0 {
+// maxConcurrentManifestDownloads bounds parallel manifest fetches to
+// avoid overwhelming the S3 endpoint while still being much faster
+// than serial downloads.
+const maxConcurrentManifestDownloads = 10
+type manifestResult struct {
+snapshotID string
+size int64
+}
+var (
+mu sync.Mutex
+results []manifestResult
+)
+g, gctx := errgroup.WithContext(v.ctx)
+g.SetLimit(maxConcurrentManifestDownloads)
+for _, sid := range remoteOnly {
+g.Go(func() error {
+manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", sid)
+reader, err := v.Storage.Get(gctx, manifestPath)
+if err != nil {
+return fmt.Errorf("downloading manifest for %s: %w", sid, err)
+}
+defer func() { _ = reader.Close() }()
+manifest, err := snapshot.DecodeManifest(reader)
+if err != nil {
+return fmt.Errorf("decoding manifest for %s: %w", sid, err)
+}
+mu.Lock()
+results = append(results, manifestResult{
+snapshotID: sid,
+size: manifest.TotalCompressedSize,
+})
+mu.Unlock()
+return nil
+})
+}
+if err := g.Wait(); err != nil {
+return nil, fmt.Errorf("fetching manifest sizes: %w", err)
+}
+// Build a lookup from results and patch the pre-added entries.
+sizeMap := make(map[string]int64, len(results))
+for _, r := range results {
+sizeMap[r.snapshotID] = r.size
+}
+for i := range snapshots {
+if sz, ok := sizeMap[string(snapshots[i].ID)]; ok {
+snapshots[i].CompressedSize = sz
+}
+}
+}
 }
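The hunk above replaces the serial getManifestSize calls with a bounded fan-out: errgroup.WithContext plus SetLimit caps the number of in-flight downloads, the first error cancels the group's context, and results are collected under a mutex. A minimal standalone sketch of the same pattern, where the items and the fake per-item work are placeholders rather than vaultik code:

package main

import (
    "context"
    "fmt"
    "sync"

    "golang.org/x/sync/errgroup"
)

func main() {
    items := []string{"a", "b", "c", "d", "e"}

    g, ctx := errgroup.WithContext(context.Background())
    g.SetLimit(2) // at most two workers in flight at once

    var (
        mu      sync.Mutex
        results = make(map[string]int)
    )

    for _, it := range items {
        it := it // per-iteration copy; implicit on Go 1.22+, needed earlier
        g.Go(func() error {
            select {
            case <-ctx.Done(): // another worker failed, so stop early
                return ctx.Err()
            default:
            }
            size := len(it) * 100 // stand-in for "download and measure"
            mu.Lock()
            results[it] = size
            mu.Unlock()
            return nil
        })
    }

    if err := g.Wait(); err != nil { // first error, if any, is returned here
        fmt.Println("error:", err)
        return
    }
    fmt.Println(results)
}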
@@ -533,8 +583,19 @@ func (v *Vaultik) printSnapshotTable(snapshots []SnapshotInfo) error {
 return w.Flush()
 }
-// PurgeSnapshots removes old snapshots based on criteria
-func (v *Vaultik) PurgeSnapshots(keepLatest bool, olderThan string, force bool) error {
+// SnapshotPurgeOptions contains options for the snapshot purge command
+type SnapshotPurgeOptions struct {
+KeepLatest bool
+OlderThan string
+Force bool
+Name string // Filter purge to a specific snapshot name
+}
+// PurgeSnapshotsWithOptions removes old snapshots based on criteria.
+// When KeepLatest is true, retention is applied per snapshot name — the latest
+// snapshot for each distinct name is kept. If Name is non-empty, only snapshots
+// matching that name are considered for purge.
+func (v *Vaultik) PurgeSnapshotsWithOptions(opts *SnapshotPurgeOptions) error {
 // Sync with remote first
 if err := v.syncWithRemote(); err != nil {
 return fmt.Errorf("syncing with remote: %w", err)
@@ -558,14 +619,51 @@ func (v *Vaultik) PurgeSnapshots(keepLatest bool, olderThan string, force bool)
 }
 }
+// If --name is specified, filter to only snapshots matching that name
+if opts.Name != "" {
+filtered := make([]SnapshotInfo, 0, len(snapshots))
+for _, snap := range snapshots {
+if parseSnapshotName(snap.ID.String()) == opts.Name {
+filtered = append(filtered, snap)
+}
+}
+snapshots = filtered
+}
 // Sort by timestamp (newest first)
 sort.Slice(snapshots, func(i, j int) bool {
 return snapshots[i].Timestamp.After(snapshots[j].Timestamp)
 })
-toDelete, err := v.collectSnapshotsToPurge(snapshots, keepLatest, olderThan)
-if err != nil {
-return err
+var toDelete []SnapshotInfo
+if opts.KeepLatest {
+// Keep the latest snapshot per snapshot name
+// Group snapshots by name, then mark all but the newest in each group
+latestByName := make(map[string]bool) // tracks whether we've seen the latest for each name
+for _, snap := range snapshots {
+name := parseSnapshotName(snap.ID.String())
+if latestByName[name] {
+// Already kept the latest for this name — delete this one
+toDelete = append(toDelete, snap)
+} else {
+// This is the latest (sorted newest-first) — keep it
+latestByName[name] = true
+}
+}
+} else if opts.OlderThan != "" {
+// Parse duration
+duration, err := parseDuration(opts.OlderThan)
+if err != nil {
+return fmt.Errorf("invalid duration: %w", err)
+}
+cutoff := time.Now().UTC().Add(-duration)
+for _, snap := range snapshots {
+if snap.Timestamp.Before(cutoff) {
+toDelete = append(toDelete, snap)
+}
+}
 }
 if len(toDelete) == 0 {
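To make the per-name retention rule above concrete: after sorting newest-first, the first snapshot seen for each name is kept and every later (older) one with the same name is marked for deletion. A tiny self-contained walkthrough with example IDs; nameOf is a simplified stand-in for the parseSnapshotName helper added in helpers.go:

package main

import (
    "fmt"
    "sort"
    "strings"
)

// nameOf is a simplified stand-in for parseSnapshotName (hostname_name_timestamp).
func nameOf(id string) string {
    parts := strings.Split(id, "_")
    if len(parts) < 3 {
        return ""
    }
    return strings.Join(parts[1:len(parts)-1], "_")
}

func main() {
    ids := []string{
        "testhost_system_2026-01-01T00:00:00Z",
        "testhost_home_2026-01-01T01:00:00Z",
        "testhost_system_2026-01-01T02:00:00Z",
        "testhost_home_2026-01-01T03:00:00Z",
    }
    // Newest first, comparing the trailing RFC3339 timestamp (which sorts lexically).
    sort.Slice(ids, func(i, j int) bool {
        ti := ids[i][strings.LastIndex(ids[i], "_")+1:]
        tj := ids[j][strings.LastIndex(ids[j], "_")+1:]
        return ti > tj
    })

    kept := map[string]bool{} // names whose newest snapshot has already been kept
    var toDelete []string
    for _, id := range ids {
        n := nameOf(id)
        if kept[n] {
            toDelete = append(toDelete, id) // an older snapshot of an already-kept name
        } else {
            kept[n] = true // first (newest) snapshot for this name: keep it
        }
    }
    fmt.Println("delete:", toDelete)
    // Output: delete: [testhost_home_2026-01-01T01:00:00Z testhost_system_2026-01-01T00:00:00Z]
}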
@@ -573,37 +671,7 @@ func (v *Vaultik) PurgeSnapshots(keepLatest bool, olderThan string, force bool)
 return nil
 }
-return v.confirmAndExecutePurge(toDelete, force)
+return v.confirmAndExecutePurge(toDelete, opts.Force)
 }
-// collectSnapshotsToPurge determines which snapshots to delete based on retention criteria
-func (v *Vaultik) collectSnapshotsToPurge(snapshots []SnapshotInfo, keepLatest bool, olderThan string) ([]SnapshotInfo, error) {
-if keepLatest {
-// Keep only the most recent snapshot
-if len(snapshots) > 1 {
-return snapshots[1:], nil
-}
-return nil, nil
-}
-if olderThan != "" {
-// Parse duration
-duration, err := parseDuration(olderThan)
-if err != nil {
-return nil, fmt.Errorf("invalid duration: %w", err)
-}
-cutoff := time.Now().UTC().Add(-duration)
-var toDelete []SnapshotInfo
-for _, snap := range snapshots {
-if snap.Timestamp.Before(cutoff) {
-toDelete = append(toDelete, snap)
-}
-}
-return toDelete, nil
-}
-return nil, nil
-}
 // confirmAndExecutePurge shows deletion candidates, confirms with user, and deletes snapshots
@@ -801,23 +869,6 @@ func (v *Vaultik) outputVerifyJSON(result *VerifyResult) error {
 // Helper methods that were previously on SnapshotApp
-func (v *Vaultik) getManifestSize(snapshotID string) (int64, error) {
-manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)
-reader, err := v.Storage.Get(v.ctx, manifestPath)
-if err != nil {
-return 0, fmt.Errorf("downloading manifest: %w", err)
-}
-defer func() { _ = reader.Close() }()
-manifest, err := snapshot.DecodeManifest(reader)
-if err != nil {
-return 0, fmt.Errorf("decoding manifest: %w", err)
-}
-return manifest.TotalCompressedSize, nil
-}
 func (v *Vaultik) downloadManifest(snapshotID string) (*snapshot.Manifest, error) {
 manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)
@@ -872,7 +923,7 @@ func (v *Vaultik) syncWithRemote() error {
 snapshotIDStr := snapshot.ID.String()
 if !remoteSnapshots[snapshotIDStr] {
 log.Info("Removing local snapshot not found in remote", "snapshot_id", snapshot.ID)
-if err := v.Repositories.Snapshots.Delete(v.ctx, snapshotIDStr); err != nil {
+if err := v.deleteSnapshotFromLocalDB(snapshotIDStr); err != nil {
 log.Error("Failed to delete local snapshot", "snapshot_id", snapshot.ID, "error", err)
 } else {
 removedCount++
@@ -1001,6 +1052,7 @@ func (v *Vaultik) listAllRemoteSnapshotIDs() ([]string, error) {
 log.Info("Listing all snapshots")
 objectCh := v.Storage.ListStream(v.ctx, "metadata/")
+seen := make(map[string]bool)
 var snapshotIDs []string
 for object := range objectCh {
 if object.Err != nil {
@@ -1015,14 +1067,8 @@
 }
 if strings.HasSuffix(object.Key, "/") || strings.Contains(object.Key, "/manifest.json.zst") {
 sid := parts[1]
-found := false
-for _, id := range snapshotIDs {
-if id == sid {
-found = true
-break
-}
-}
-if !found {
+if !seen[sid] {
+seen[sid] = true
 snapshotIDs = append(snapshotIDs, sid)
 }
 }