12 Commits

Author SHA1 Message Date
user
588e84da9c feat: concurrent manifest downloads in ListSnapshots
Replace serial getManifestSize() calls with bounded concurrent downloads
using errgroup. For each remote snapshot not in the local DB, manifest
downloads now run in parallel (up to 10 concurrent) instead of one at a
time.

Changes:
- Use errgroup with SetLimit(10) for bounded concurrency
- Collect remote-only snapshot IDs first, pre-add entries with zero size
- Download manifests concurrently, patch sizes from results
- Remove now-unused getManifestSize helper (logic inlined into goroutines)
- Promote golang.org/x/sync from indirect to direct dependency

closes #8
2026-03-17 21:51:52 -07:00
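
A minimal, self-contained sketch of the bounded-concurrency pattern this commit describes. The errgroup calls (WithContext, SetLimit, Go, Wait) are the real golang.org/x/sync API; fetchSize and the snapshot IDs are illustrative stand-ins for the manifest downloads:

package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

// fetchSize stands in for the real S3 manifest download.
func fetchSize(ctx context.Context, id string) (int64, error) {
	return int64(len(id)), nil
}

func main() {
	ids := []string{"snap-a", "snap-b", "snap-c"}

	g, gctx := errgroup.WithContext(context.Background())
	g.SetLimit(10) // at most 10 downloads in flight, as in the commit

	var mu sync.Mutex
	sizes := make(map[string]int64, len(ids))

	// With Go >= 1.22 each iteration gets a fresh id, so capturing it
	// in the closure is safe without the old `id := id` idiom.
	for _, id := range ids {
		g.Go(func() error {
			n, err := fetchSize(gctx, id)
			if err != nil {
				return err // first error cancels gctx and fails Wait
			}
			mu.Lock()
			sizes[id] = n
			mu.Unlock()
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(sizes)
}
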
c24e7e6360 Add make check target and CI workflow (#42)
Adds a `make check` target that verifies formatting (gofmt), linting (golangci-lint), and tests (go test -race) without modifying files.

Also adds `.gitea/workflows/check.yml` CI workflow that runs on pushes and PRs to main.

`make check` passes cleanly on current main.

Co-authored-by: user <user@Mac.lan guest wan>
Co-authored-by: clawbot <clawbot@noreply.git.eeqj.de>
Co-authored-by: clawbot <clawbot@sneak.berlin>
Reviewed-on: #42
Co-authored-by: clawbot <sneak+clawbot@sneak.cloud>
Co-committed-by: clawbot <sneak+clawbot@sneak.cloud>
2026-03-17 12:39:44 +01:00
7a5943958d feat: add progress bar to restore operation (#23)
Add an interactive progress bar (using schollz/progressbar) to the file restore loop, matching the existing pattern in verify. Shows bytes restored with ETA when output is a terminal.

Fixes #20

Co-authored-by: clawbot <clawbot@eeqj.de>
Co-authored-by: clawbot <clawbot@noreply.git.eeqj.de>
Reviewed-on: #23
Co-authored-by: clawbot <clawbot@noreply.example.org>
Co-committed-by: clawbot <clawbot@noreply.example.org>
2026-03-17 11:18:18 +01:00
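
A minimal sketch of the terminal-gated progress bar pattern the commit follows, using schollz/progressbar/v3 (the totals and the loop are illustrative; the real code advances the bar by each restored file's size):

package main

import (
	"os"
	"time"

	"github.com/schollz/progressbar/v3"
	"golang.org/x/term"
)

func main() {
	var bar *progressbar.ProgressBar
	// Only draw the bar when stdout is an interactive terminal.
	if term.IsTerminal(int(os.Stdout.Fd())) {
		bar = progressbar.NewOptions64(
			16*64*1024, // total bytes expected (illustrative)
			progressbar.OptionSetDescription("Restoring"),
			progressbar.OptionShowBytes(true),
			progressbar.OptionThrottle(100*time.Millisecond),
		)
	}

	for i := 0; i < 16; i++ {
		// ... restore one file here ...
		time.Sleep(50 * time.Millisecond)
		if bar != nil {
			_ = bar.Add64(64 * 1024) // advance by the file's size
		}
	}
	if bar != nil {
		_ = bar.Finish()
	}
}
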
d8a51804d2 Merge pull request 'feat: implement --prune flag on snapshot create (closes #4)' (#37) from feature/implement-prune-flag-on-snapshot-create into main
Reviewed-on: #37
2026-02-20 11:22:12 +01:00
76f4421eb3 Merge branch 'main' into feature/implement-prune-flag-on-snapshot-create 2026-02-20 11:20:52 +01:00
53ac868c5d Merge pull request 'fix: track and report file restore failures' (#22) from fix/restore-error-handling into main
Reviewed-on: #22
2026-02-20 11:19:40 +01:00
8c4ea2b870 Merge branch 'main' into fix/restore-error-handling 2026-02-20 11:19:21 +01:00
597b560398 Merge pull request 'Return errors from deleteSnapshotFromLocalDB instead of swallowing them (closes #25)' (#30) from fix/issue-25 into main
Reviewed-on: #30
2026-02-20 11:18:30 +01:00
1e2eced092 Merge branch 'main' into fix/issue-25 2026-02-20 11:18:06 +01:00
76e047bbb2 feat: implement --prune flag on snapshot create (closes #4)
The --prune flag on 'snapshot create' was accepted but silently did nothing
(it was a TODO stub). This change wires it up to actually:

1. Purge old snapshots (keeping only the latest) via PurgeSnapshots
2. Remove unreferenced blobs from storage via PruneBlobs

The pruning runs after all snapshots complete successfully, not per-snapshot.
Both operations use --force mode (no interactive confirmation) since --prune
is an explicit opt-in flag.

Moved the prune logic from createNamedSnapshot (per-snapshot) to
CreateSnapshot (after all snapshots), which is the correct location.
2026-02-20 02:11:52 -08:00
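
The ordering the message describes, as a sketch. createOne, purgeSnapshots, and pruneBlobs are hypothetical stand-ins for the real methods:

package main

import "fmt"

func createOne(name string) error {
	fmt.Println("creating snapshot:", name)
	return nil
}

func purgeSnapshots(force bool) error {
	fmt.Println("purging old snapshots, force =", force)
	return nil
}

func pruneBlobs(force bool) error {
	fmt.Println("pruning unreferenced blobs, force =", force)
	return nil
}

// createSnapshots prunes once, after every snapshot has completed,
// rather than inside the per-snapshot loop.
func createSnapshots(names []string, prune bool) error {
	for _, n := range names {
		if err := createOne(n); err != nil {
			return err // skip pruning entirely if any snapshot failed
		}
	}
	if prune {
		// --prune is an explicit opt-in, so both operations run in
		// force mode with no interactive confirmation.
		if err := purgeSnapshots(true); err != nil {
			return err
		}
		if err := pruneBlobs(true); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := createSnapshots([]string{"daily", "weekly"}, true); err != nil {
		fmt.Println("error:", err)
	}
}
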
clawbot
ddc23f8057 fix: return errors from deleteSnapshotFromLocalDB instead of swallowing them
Previously, deleteSnapshotFromLocalDB logged errors but always returned nil,
causing callers to believe deletion succeeded even when it failed. This could
lead to data inconsistency where remote metadata is deleted while local
records persist.

Now returns the first error encountered, allowing callers to handle failures
appropriately.
2026-02-19 23:55:27 -08:00
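
The shape of the bug and the fix, sketched with a hypothetical deleteStep standing in for the repository calls:

package main

import (
	"errors"
	"fmt"
)

var errDB = errors.New("db locked")

func deleteStep(name string) error {
	if name == "blobs" {
		return errDB // simulate a failure partway through
	}
	return nil
}

// Before: errors were logged but nil was always returned, so callers
// believed the deletion succeeded.
func deleteSwallowed(steps []string) error {
	for _, s := range steps {
		if err := deleteStep(s); err != nil {
			fmt.Println("log only:", err) // swallowed
		}
	}
	return nil
}

// After: the first error is wrapped and returned so callers can react.
func deletePropagated(steps []string) error {
	for _, s := range steps {
		if err := deleteStep(s); err != nil {
			return fmt.Errorf("deleting snapshot %s: %w", s, err)
		}
	}
	return nil
}

func main() {
	steps := []string{"files", "blobs", "uploads", "record"}
	fmt.Println("swallowed: ", deleteSwallowed(steps))  // nil despite failure
	fmt.Println("propagated:", deletePropagated(steps)) // real error
}
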
cafb3d45b8 fix: track and report file restore failures
Restore previously logged errors for individual files but returned
success even if files failed. Now tracks failed files in RestoreResult,
reports them in the summary output, and returns an error if any files
failed to restore.

Fixes #21
2026-02-19 23:52:22 -08:00
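
A sketch of the tracking pattern this fix introduces. The RestoreResult fields mirror the commit message; restoreOne is a stand-in for the real per-file restore:

package main

import (
	"fmt"
	"strings"
)

type RestoreResult struct {
	FilesFailed int
	FailedFiles []string
}

// restoreOne stands in for the real per-file restore.
func restoreOne(path string) error {
	if strings.HasSuffix(path, ".bad") {
		return fmt.Errorf("restore %s: checksum mismatch", path)
	}
	return nil
}

// restoreAll keeps going on per-file errors, records each failure, and
// returns an aggregate error at the end so the exit status reflects it.
func restoreAll(paths []string, result *RestoreResult) error {
	for _, p := range paths {
		if err := restoreOne(p); err != nil {
			result.FilesFailed++
			result.FailedFiles = append(result.FailedFiles, p)
			continue
		}
	}
	if result.FilesFailed > 0 {
		return fmt.Errorf("%d file(s) failed to restore", result.FilesFailed)
	}
	return nil
}

func main() {
	var res RestoreResult
	err := restoreAll([]string{"a.txt", "b.bad", "c.txt"}, &res)
	fmt.Println(err, res.FailedFiles) // 1 file(s) failed to restore [b.bad]
}
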
8 changed files with 297 additions and 81 deletions

.dockerignore Normal file

@@ -0,0 +1,8 @@
+.git
+.gitea
+*.md
+LICENSE
+vaultik
+coverage.out
+coverage.html
+.DS_Store

.gitea/workflows/check.yml Normal file

@@ -0,0 +1,14 @@
+name: check
+on:
+  push:
+    branches: [main]
+  pull_request:
+    branches: [main]
+jobs:
+  check:
+    runs-on: ubuntu-latest
+    steps:
+      # actions/checkout v4, 2024-09-16
+      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5
+      - name: Build and check
+        run: docker build .

Dockerfile Normal file

@@ -0,0 +1,61 @@
+# Lint stage
+# golangci/golangci-lint:v2.11.3-alpine, 2026-03-17
+FROM golangci/golangci-lint:v2.11.3-alpine@sha256:b1c3de5862ad0a95b4e45a993b0f00415835d687e4f12c845c7493b86c13414e AS lint
+
+RUN apk add --no-cache make build-base
+
+WORKDIR /src
+
+# Copy go mod files first for better layer caching
+COPY go.mod go.sum ./
+RUN go mod download
+
+# Copy source code
+COPY . .
+
+# Run formatting check and linter
+RUN make fmt-check
+RUN make lint
+
+# Build stage
+# golang:1.26.1-alpine, 2026-03-17
+FROM golang:1.26.1-alpine@sha256:2389ebfa5b7f43eeafbd6be0c3700cc46690ef842ad962f6c5bd6be49ed82039 AS builder
+
+# Depend on lint stage passing
+COPY --from=lint /src/go.sum /dev/null
+
+ARG VERSION=dev
+
+# Install build dependencies for CGO (mattn/go-sqlite3) and sqlite3 CLI (tests)
+RUN apk add --no-cache make build-base sqlite
+
+WORKDIR /src
+
+# Copy go mod files first for better layer caching
+COPY go.mod go.sum ./
+RUN go mod download
+
+# Copy source code
+COPY . .
+
+# Run tests
+RUN make test
+
+# Build with CGO enabled (required for mattn/go-sqlite3)
+RUN CGO_ENABLED=1 go build -ldflags "-X 'git.eeqj.de/sneak/vaultik/internal/globals.Version=${VERSION}' -X 'git.eeqj.de/sneak/vaultik/internal/globals.Commit=$(git rev-parse HEAD 2>/dev/null || echo unknown)'" -o /vaultik ./cmd/vaultik
+
+# Runtime stage
+# alpine:3.21, 2026-02-25
+FROM alpine:3.21@sha256:c3f8e73fdb79deaebaa2037150150191b9dcbfba68b4a46d70103204c53f4709
+
+RUN apk add --no-cache ca-certificates sqlite
+
+# Copy binary from builder
+COPY --from=builder /vaultik /usr/local/bin/vaultik
+
+# Create non-root user
+RUN adduser -D -H -s /sbin/nologin vaultik
+
+USER vaultik
+
+ENTRYPOINT ["/usr/local/bin/vaultik"]

Makefile

@@ -1,4 +1,4 @@
-.PHONY: test fmt lint build clean all
+.PHONY: test fmt lint fmt-check check build clean all docker hooks
 
 # Version number
 VERSION := 0.0.1
@@ -14,21 +14,12 @@ LDFLAGS := -X 'git.eeqj.de/sneak/vaultik/internal/globals.Version=$(VERSION)' \
 all: vaultik
 
 # Run tests
-test: lint fmt-check
-	@echo "Running tests..."
-	@if ! go test -v -timeout 10s ./... 2>&1; then \
-		echo ""; \
-		echo "TEST FAILURES DETECTED"; \
-		echo "Run 'go test -v ./internal/database' to see database test details"; \
-		exit 1; \
-	fi
+test:
+	go test -race -timeout 30s ./...
 
-# Check if code is formatted
+# Check if code is formatted (read-only)
 fmt-check:
-	@if [ -n "$$(go fmt ./...)" ]; then \
-		echo "Error: Code is not formatted. Run 'make fmt' to fix."; \
-		exit 1; \
-	fi
+	@test -z "$$(gofmt -l .)" || (echo "Files not formatted:" && gofmt -l . && exit 1)
 
 # Format code
 fmt:
@@ -36,7 +27,7 @@ fmt:
 
 # Run linter
 lint:
-	golangci-lint run
+	golangci-lint run ./...
 
 # Build binary
 vaultik: internal/*/*.go cmd/vaultik/*.go
@@ -47,11 +38,6 @@ clean:
 	rm -f vaultik
 	go clean
 
-# Install dependencies
-deps:
-	go mod download
-	go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
-
 # Run tests with coverage
 test-coverage:
 	go test -v -coverprofile=coverage.out ./...
@@ -67,3 +53,17 @@ local:
 
 install: vaultik
 	cp ./vaultik $(HOME)/bin/
+
+# Run all checks (formatting, linting, tests) without modifying files
+check: fmt-check lint test
+
+# Build Docker image
+docker:
+	docker build -t vaultik .
+
+# Install pre-commit hook
+hooks:
+	@printf '#!/bin/sh\nset -e\n' > .git/hooks/pre-commit
+	@printf 'go mod tidy\ngo fmt ./...\ngit diff --exit-code -- go.mod go.sum || { echo "go mod tidy changed files; please stage and retry"; exit 1; }\n' >> .git/hooks/pre-commit
+	@printf 'make check\n' >> .git/hooks/pre-commit
+	@chmod +x .git/hooks/pre-commit

go.mod

@@ -1,6 +1,6 @@
 module git.eeqj.de/sneak/vaultik
 
-go 1.24.4
+go 1.26.1
 
 require (
 	filippo.io/age v1.2.1
@@ -24,6 +24,7 @@ require (
 	github.com/spf13/cobra v1.10.1
 	github.com/stretchr/testify v1.11.1
 	go.uber.org/fx v1.24.0
+	golang.org/x/sync v0.18.0
 	golang.org/x/term v0.37.0
 	gopkg.in/yaml.v3 v3.0.1
 	modernc.org/sqlite v1.38.0
@@ -266,7 +267,6 @@ require (
 	golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
 	golang.org/x/net v0.47.0 // indirect
 	golang.org/x/oauth2 v0.33.0 // indirect
-	golang.org/x/sync v0.18.0 // indirect
 	golang.org/x/sys v0.38.0 // indirect
 	golang.org/x/text v0.31.0 // indirect
 	golang.org/x/time v0.14.0 // indirect

(restore source file; name not captured)

@@ -22,6 +22,13 @@ import (
 	"golang.org/x/term"
 )
 
+const (
+	// progressBarWidth is the character width of the progress bar display.
+	progressBarWidth = 40
+	// progressBarThrottle is the minimum interval between progress bar redraws.
+	progressBarThrottle = 100 * time.Millisecond
+)
+
 // RestoreOptions contains options for the restore operation
 type RestoreOptions struct {
 	SnapshotID string
@@ -115,6 +122,15 @@ func (v *Vaultik) Restore(opts *RestoreOptions) error {
 	}
 	defer func() { _ = blobCache.Close() }()
 
+	// Calculate total bytes for progress bar
+	var totalBytesExpected int64
+	for _, file := range files {
+		totalBytesExpected += file.Size
+	}
+
+	// Create progress bar if output is a terminal
+	bar := v.newProgressBar("Restoring", totalBytesExpected)
+
 	for i, file := range files {
 		if v.ctx.Err() != nil {
 			return v.ctx.Err()
@@ -122,11 +138,21 @@ func (v *Vaultik) Restore(opts *RestoreOptions) error {
 		if err := v.restoreFile(v.ctx, repos, file, opts.TargetDir, identity, chunkToBlobMap, blobCache, result); err != nil {
 			log.Error("Failed to restore file", "path", file.Path, "error", err)
-			// Continue with other files
+			result.FilesFailed++
+			result.FailedFiles = append(result.FailedFiles, file.Path.String())
+			// Update progress bar even on failure
+			if bar != nil {
+				_ = bar.Add64(file.Size)
+			}
 			continue
 		}
 
-		// Progress logging
+		// Update progress bar
+		if bar != nil {
+			_ = bar.Add64(file.Size)
+		}
+
+		// Progress logging (for non-terminal or structured logs)
 		if (i+1)%100 == 0 || i+1 == len(files) {
 			log.Info("Restore progress",
 				"files", fmt.Sprintf("%d/%d", i+1, len(files)),
@@ -135,6 +161,10 @@ func (v *Vaultik) Restore(opts *RestoreOptions) error {
 		}
 	}
 
+	if bar != nil {
+		_ = bar.Finish()
+	}
+
 	result.Duration = time.Since(startTime)
 	log.Info("Restore complete",
@@ -151,6 +181,13 @@ func (v *Vaultik) Restore(opts *RestoreOptions) error {
 		result.Duration.Round(time.Second),
 	)
 
+	if result.FilesFailed > 0 {
+		_, _ = fmt.Fprintf(v.Stdout, "\nWARNING: %d file(s) failed to restore:\n", result.FilesFailed)
+		for _, path := range result.FailedFiles {
+			_, _ = fmt.Fprintf(v.Stdout, "  - %s\n", path)
+		}
+	}
+
 	// Run verification if requested
 	if opts.Verify {
 		if err := v.verifyRestoredFiles(v.ctx, repos, files, opts.TargetDir, result); err != nil {
@@ -171,6 +208,10 @@ func (v *Vaultik) Restore(opts *RestoreOptions) error {
 		)
 	}
 
+	if result.FilesFailed > 0 {
+		return fmt.Errorf("%d file(s) failed to restore", result.FilesFailed)
+	}
+
 	return nil
 }
@@ -523,22 +564,7 @@ func (v *Vaultik) verifyRestoredFiles(
 	)
 
 	// Create progress bar if output is a terminal
-	var bar *progressbar.ProgressBar
-	if isTerminal() {
-		bar = progressbar.NewOptions64(
-			totalBytes,
-			progressbar.OptionSetDescription("Verifying"),
-			progressbar.OptionSetWriter(v.Stderr),
-			progressbar.OptionShowBytes(true),
-			progressbar.OptionShowCount(),
-			progressbar.OptionSetWidth(40),
-			progressbar.OptionThrottle(100*time.Millisecond),
-			progressbar.OptionOnCompletion(func() {
-				v.printfStderr("\n")
-			}),
-			progressbar.OptionSetRenderBlankState(true),
-		)
-	}
+	bar := v.newProgressBar("Verifying", totalBytes)
 
 	// Verify each file
 	for _, file := range regularFiles {
@@ -632,7 +658,37 @@ func (v *Vaultik) verifyFile(
 	return bytesVerified, nil
 }
 
-// isTerminal returns true if stdout is a terminal
-func isTerminal() bool {
-	return term.IsTerminal(int(os.Stdout.Fd()))
+// newProgressBar creates a terminal-aware progress bar with standard options.
+// It returns nil if stdout is not a terminal.
+func (v *Vaultik) newProgressBar(description string, total int64) *progressbar.ProgressBar {
+	if !v.isTerminal() {
+		return nil
+	}
+	return progressbar.NewOptions64(
+		total,
+		progressbar.OptionSetDescription(description),
+		progressbar.OptionSetWriter(v.Stderr),
+		progressbar.OptionShowBytes(true),
+		progressbar.OptionShowCount(),
+		progressbar.OptionSetWidth(progressBarWidth),
+		progressbar.OptionThrottle(progressBarThrottle),
+		progressbar.OptionOnCompletion(func() {
+			v.printfStderr("\n")
+		}),
+		progressbar.OptionSetRenderBlankState(true),
+	)
+}
+
+// isTerminal returns true if stdout is a terminal.
+// It checks whether v.Stdout implements Fd() (i.e. is an *os.File),
+// and falls back to false for non-file writers (e.g. in tests).
+func (v *Vaultik) isTerminal() bool {
+	type fder interface {
+		Fd() uintptr
+	}
+	f, ok := v.Stdout.(fder)
+	if !ok {
+		return false
+	}
+	return term.IsTerminal(int(f.Fd()))
 }

(snapshot command source file; name not captured)

@@ -8,6 +8,7 @@ import (
 	"regexp"
 	"sort"
 	"strings"
+	"sync"
 	"text/tabwriter"
 	"time"
@@ -16,6 +17,7 @@ import (
 	"git.eeqj.de/sneak/vaultik/internal/snapshot"
 	"git.eeqj.de/sneak/vaultik/internal/types"
 	"github.com/dustin/go-humanize"
+	"golang.org/x/sync/errgroup"
 )
 
 // SnapshotCreateOptions contains options for the snapshot create command
@@ -90,6 +92,24 @@ func (v *Vaultik) CreateSnapshot(opts *SnapshotCreateOptions) error {
 		v.printfStdout("\nAll %d snapshots completed in %s\n", len(snapshotNames), time.Since(overallStartTime).Round(time.Second))
 	}
 
+	// Prune old snapshots and unreferenced blobs if --prune was specified
+	if opts.Prune {
+		log.Info("Pruning enabled - deleting old snapshots and unreferenced blobs")
+
+		v.printlnStdout("\nPruning old snapshots (keeping latest)...")
+		if err := v.PurgeSnapshots(true, "", true); err != nil {
+			return fmt.Errorf("prune: purging old snapshots: %w", err)
+		}
+
+		v.printlnStdout("Pruning unreferenced blobs...")
+		if err := v.PruneBlobs(&PruneOptions{Force: true}); err != nil {
+			return fmt.Errorf("prune: removing unreferenced blobs: %w", err)
+		}
+
+		log.Info("Pruning complete")
+	}
+
 	return nil
 }
@@ -306,11 +326,6 @@ func (v *Vaultik) createNamedSnapshot(opts *SnapshotCreateOptions, hostname, sna
 	}
 	v.printfStdout("Duration: %s\n", formatDuration(snapshotDuration))
 
-	if opts.Prune {
-		log.Info("Pruning enabled - will delete old snapshots after snapshot")
-		// TODO: Implement pruning
-	}
-
 	return nil
 }
@@ -375,17 +390,19 @@ func (v *Vaultik) ListSnapshots(jsonOutput bool) error {
 		}
 	}
 
-	// Build final snapshot list
+	// Build final snapshot list.
+	// Separate local (cheap DB lookup) from remote-only (needs manifest download).
 	snapshots := make([]SnapshotInfo, 0, len(remoteSnapshots))
+	// remoteOnly collects snapshot IDs that need a manifest download.
+	var remoteOnly []string
 	for snapshotID := range remoteSnapshots {
-		// Check if we have this snapshot locally
 		if localSnap, exists := localSnapshotMap[snapshotID]; exists && localSnap.CompletedAt != nil {
 			// Get total compressed size of all blobs referenced by this snapshot
 			totalSize, err := v.Repositories.Snapshots.GetSnapshotTotalCompressedSize(v.ctx, snapshotID)
 			if err != nil {
 				log.Warn("Failed to get total compressed size", "id", snapshotID, "error", err)
-				// Fall back to stored blob size
 				totalSize = localSnap.BlobSize
 			}
@@ -395,24 +412,78 @@
 				CompressedSize: totalSize,
 			})
 		} else {
-			// Remote snapshot not in local DB - fetch manifest to get size
 			timestamp, err := parseSnapshotTimestamp(snapshotID)
 			if err != nil {
 				log.Warn("Failed to parse snapshot timestamp", "id", snapshotID, "error", err)
 				continue
 			}
 
-			// Try to download manifest to get size
-			totalSize, err := v.getManifestSize(snapshotID)
-			if err != nil {
-				return fmt.Errorf("failed to get manifest size for %s: %w", snapshotID, err)
-			}
-
+			// Pre-add with zero size; will be filled by concurrent downloads.
 			snapshots = append(snapshots, SnapshotInfo{
 				ID:             types.SnapshotID(snapshotID),
 				Timestamp:      timestamp,
-				CompressedSize: totalSize,
+				CompressedSize: 0,
 			})
+			remoteOnly = append(remoteOnly, snapshotID)
 		}
 	}
 
+	// Download manifests concurrently for remote-only snapshots.
+	if len(remoteOnly) > 0 {
+		// maxConcurrentManifestDownloads bounds parallel manifest fetches to
+		// avoid overwhelming the S3 endpoint while still being much faster
+		// than serial downloads.
+		const maxConcurrentManifestDownloads = 10
+
+		type manifestResult struct {
+			snapshotID string
+			size       int64
+		}
+		var (
+			mu      sync.Mutex
+			results []manifestResult
+		)
+
+		g, gctx := errgroup.WithContext(v.ctx)
+		g.SetLimit(maxConcurrentManifestDownloads)
+		for _, sid := range remoteOnly {
+			g.Go(func() error {
+				manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", sid)
+				reader, err := v.Storage.Get(gctx, manifestPath)
+				if err != nil {
+					return fmt.Errorf("downloading manifest for %s: %w", sid, err)
+				}
+				defer func() { _ = reader.Close() }()
+
+				manifest, err := snapshot.DecodeManifest(reader)
+				if err != nil {
+					return fmt.Errorf("decoding manifest for %s: %w", sid, err)
+				}
+
+				mu.Lock()
+				results = append(results, manifestResult{
+					snapshotID: sid,
+					size:       manifest.TotalCompressedSize,
+				})
+				mu.Unlock()
+				return nil
+			})
+		}
+		if err := g.Wait(); err != nil {
+			return fmt.Errorf("fetching manifest sizes: %w", err)
+		}
+
+		// Build a lookup from results and patch the pre-added entries.
+		sizeMap := make(map[string]int64, len(results))
+		for _, r := range results {
+			sizeMap[r.snapshotID] = r.size
+		}
+		for i := range snapshots {
+			if sz, ok := sizeMap[string(snapshots[i].ID)]; ok {
+				snapshots[i].CompressedSize = sz
+			}
+		}
+	}
@@ -718,23 +789,6 @@ func (v *Vaultik) outputVerifyJSON(result *VerifyResult) error {
 
 // Helper methods that were previously on SnapshotApp
 
-func (v *Vaultik) getManifestSize(snapshotID string) (int64, error) {
-	manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)
-
-	reader, err := v.Storage.Get(v.ctx, manifestPath)
-	if err != nil {
-		return 0, fmt.Errorf("downloading manifest: %w", err)
-	}
-	defer func() { _ = reader.Close() }()
-
-	manifest, err := snapshot.DecodeManifest(reader)
-	if err != nil {
-		return 0, fmt.Errorf("decoding manifest: %w", err)
-	}
-
-	return manifest.TotalCompressedSize, nil
-}
-
 func (v *Vaultik) downloadManifest(snapshotID string) (*snapshot.Manifest, error) {
 	manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)
@@ -1004,16 +1058,16 @@ func (v *Vaultik) deleteSnapshotFromLocalDB(snapshotID string) error {
 	// Delete related records first to avoid foreign key constraints
 	if err := v.Repositories.Snapshots.DeleteSnapshotFiles(v.ctx, snapshotID); err != nil {
-		log.Error("Failed to delete snapshot files", "snapshot_id", snapshotID, "error", err)
+		return fmt.Errorf("deleting snapshot files for %s: %w", snapshotID, err)
 	}
 	if err := v.Repositories.Snapshots.DeleteSnapshotBlobs(v.ctx, snapshotID); err != nil {
-		log.Error("Failed to delete snapshot blobs", "snapshot_id", snapshotID, "error", err)
+		return fmt.Errorf("deleting snapshot blobs for %s: %w", snapshotID, err)
	}
 	if err := v.Repositories.Snapshots.DeleteSnapshotUploads(v.ctx, snapshotID); err != nil {
-		log.Error("Failed to delete snapshot uploads", "snapshot_id", snapshotID, "error", err)
+		return fmt.Errorf("deleting snapshot uploads for %s: %w", snapshotID, err)
 	}
 	if err := v.Repositories.Snapshots.Delete(v.ctx, snapshotID); err != nil {
-		log.Error("Failed to delete snapshot record", "snapshot_id", snapshotID, "error", err)
+		return fmt.Errorf("deleting snapshot record %s: %w", snapshotID, err)
 	}
 
 	return nil
View File

@@ -0,0 +1,23 @@
package vaultik
import (
"testing"
)
// TestSnapshotCreateOptions_PruneFlag verifies the Prune field exists on
// SnapshotCreateOptions and can be set.
func TestSnapshotCreateOptions_PruneFlag(t *testing.T) {
opts := &SnapshotCreateOptions{
Prune: true,
}
if !opts.Prune {
t.Error("Expected Prune to be true")
}
opts2 := &SnapshotCreateOptions{
Prune: false,
}
if opts2.Prune {
t.Error("Expected Prune to be false")
}
}