**Compare commits:** `9c072166fa...feature/pl` (25 commits)
**ARCHITECTURE.md** (new file, +380 lines)

# Vaultik Architecture

This document describes the internal architecture of Vaultik, focusing on the data model, type instantiation, and the relationships between core modules.

## Overview

Vaultik is a backup system that uses content-defined chunking for deduplication and packs chunks into large, compressed, encrypted blobs for efficient cloud storage. The system is built around dependency injection using [uber-go/fx](https://github.com/uber-go/fx).

## Data Flow

```
Source Files
      │
      ▼
┌─────────────────┐
│     Scanner     │  Walks directories, detects changed files
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│     Chunker     │  Splits files into variable-size chunks (FastCDC)
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│     Packer      │  Accumulates chunks, compresses (zstd), encrypts (age)
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    S3 Client    │  Uploads blobs to remote storage
└─────────────────┘
```

## Data Model

### Core Entities

The database tracks five primary entities and their relationships:

```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Snapshot   │────▶│     File     │────▶│    Chunk     │
└──────────────┘     └──────────────┘     └──────────────┘
        │                                        │
        │                                        │
        ▼                                        ▼
┌──────────────┐                          ┌──────────────┐
│     Blob     │◀─────────────────────────│  BlobChunk   │
└──────────────┘                          └──────────────┘
```

### Entity Descriptions

#### File (`database.File`)
Represents a file or directory in the backup system. Stores metadata needed for restoration:
- Path, timestamps (mtime, ctime)
- Size, mode, ownership (uid, gid)
- Symlink target (if applicable)

#### Chunk (`database.Chunk`)
A content-addressed unit of data. Files are split into variable-size chunks using the FastCDC algorithm:
- `ChunkHash`: SHA256 hash of chunk content (primary key)
- `Size`: Chunk size in bytes

Chunk sizes vary between `avgChunkSize/4` and `avgChunkSize*4` (typically 16KB-256KB for a 64KB average).

#### FileChunk (`database.FileChunk`)
Maps files to their constituent chunks:
- `FileID`: Reference to the file
- `Idx`: Position of this chunk within the file (0-indexed)
- `ChunkHash`: Reference to the chunk

#### Blob (`database.Blob`)
The final storage unit uploaded to S3. Contains many compressed and encrypted chunks:
- `ID`: UUID assigned at creation
- `Hash`: SHA256 of final compressed+encrypted content
- `UncompressedSize`: Total raw chunk data before compression
- `CompressedSize`: Size after zstd compression and age encryption
- `CreatedTS`, `FinishedTS`, `UploadedTS`: Lifecycle timestamps

Blob creation process:
1. Chunks are accumulated (up to MaxBlobSize, typically 10GB)
2. Compressed with zstd
3. Encrypted with age (recipients configured in config)
4. SHA256 hash computed → becomes filename in S3
5. Uploaded to `blobs/{hash[0:2]}/{hash[2:4]}/{hash}`

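The sharded key scheme in step 5 can be sketched as a one-liner. The helper name `blobKey` is illustrative, not the actual function in the codebase:

```go
package main

import "fmt"

// blobKey derives the S3 object key for a blob from its SHA256 hex hash,
// sharding by the first two byte-pairs so no single prefix accumulates
// millions of objects.
func blobKey(hash string) string {
	return fmt.Sprintf("blobs/%s/%s/%s", hash[0:2], hash[2:4], hash)
}

func main() {
	fmt.Println(blobKey("ab12cd34ef56"))
	// blobs/ab/12/ab12cd34ef56
}
```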
#### BlobChunk (`database.BlobChunk`)
Maps chunks to their position within blobs:
- `BlobID`: Reference to the blob
- `ChunkHash`: Reference to the chunk
- `Offset`: Byte offset within the uncompressed blob
- `Length`: Chunk size

#### Snapshot (`database.Snapshot`)
Represents a point-in-time backup:
- `ID`: Format is `{hostname}-{YYYYMMDD}-{HHMMSS}Z`
- Tracks file count, chunk count, blob count, sizes, compression ratio
- `CompletedAt`: Null until the snapshot finishes successfully

#### SnapshotFile / SnapshotBlob
Join tables linking snapshots to their files and blobs.

### Relationship Summary

```
Snapshot 1──────────▶ N SnapshotFile N ◀────────── 1 File
Snapshot 1──────────▶ N SnapshotBlob N ◀────────── 1 Blob
File     1──────────▶ N FileChunk    N ◀────────── 1 Chunk
Blob     1──────────▶ N BlobChunk    N ◀────────── 1 Chunk
```

## Type Instantiation

### Application Startup

The CLI uses fx for dependency injection. Here's the instantiation order:

```go
// cli/app.go: NewApp()
fx.New(
	fx.Supply(config.ConfigPath(opts.ConfigPath)), // 1. Config path
	fx.Supply(opts.LogOptions),                    // 2. Log options
	fx.Provide(globals.New),                       // 3. Globals
	fx.Provide(log.New),                           // 4. Logger config
	config.Module,                                 // 5. Config
	database.Module,                               // 6. Database + Repositories
	log.Module,                                    // 7. Logger initialization
	s3.Module,                                     // 8. S3 client
	snapshot.Module,                               // 9. SnapshotManager + ScannerFactory
	fx.Provide(vaultik.New),                       // 10. Vaultik orchestrator
)
```

### Key Type Instantiation Points

#### 1. Config (`config.Config`)
- **Created by**: `config.Module` via `config.LoadConfig()`
- **When**: Application startup (fx DI)
- **Contains**: All configuration from the YAML file (S3 credentials, encryption keys, paths, etc.)

#### 2. Database (`database.DB`)
- **Created by**: `database.Module` via `database.New()`
- **When**: Application startup (fx DI)
- **Contains**: SQLite connection, path reference

#### 3. Repositories (`database.Repositories`)
- **Created by**: `database.Module` via `database.NewRepositories()`
- **When**: Application startup (fx DI)
- **Contains**: All repository interfaces (Files, Chunks, Blobs, Snapshots, etc.)

#### 4. Vaultik (`vaultik.Vaultik`)
- **Created by**: `vaultik.New(VaultikParams)`
- **When**: Application startup (fx DI)
- **Contains**: All dependencies for backup operations

```go
type Vaultik struct {
	Globals         *globals.Globals
	Config          *config.Config
	DB              *database.DB
	Repositories    *database.Repositories
	S3Client        *s3.Client
	ScannerFactory  snapshot.ScannerFactory
	SnapshotManager *snapshot.SnapshotManager
	Shutdowner      fx.Shutdowner
	Fs              afero.Fs

	ctx    context.Context
	cancel context.CancelFunc
}
```

#### 5. SnapshotManager (`snapshot.SnapshotManager`)
- **Created by**: `snapshot.Module` via `snapshot.NewSnapshotManager()`
- **When**: Application startup (fx DI)
- **Responsibility**: Creates/completes snapshots, exports metadata to S3

#### 6. Scanner (`snapshot.Scanner`)
- **Created by**: `ScannerFactory(ScannerParams)`
- **When**: Each `CreateSnapshot()` call
- **Contains**: Chunker, Packer, progress reporter

```go
// vaultik/snapshot.go: CreateSnapshot()
scanner := v.ScannerFactory(snapshot.ScannerParams{
	EnableProgress: !opts.Cron,
	Fs:             v.Fs,
})
```

#### 7. Chunker (`chunker.Chunker`)
- **Created by**: `chunker.NewChunker(avgChunkSize)`
- **When**: Inside `snapshot.NewScanner()`
- **Configuration**:
  - `avgChunkSize`: From config (typically 64KB)
  - `minChunkSize`: avgChunkSize / 4
  - `maxChunkSize`: avgChunkSize * 4

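The min/max derivation is trivial but worth pinning down, since it explains the 16KB-256KB range quoted earlier for a 64KB average. A sketch (the helper name is illustrative):

```go
package main

import "fmt"

// chunkBounds derives the FastCDC minimum and maximum chunk sizes from the
// configured average, per the avg/4 and avg*4 rule described above.
func chunkBounds(avg int) (min, max int) {
	return avg / 4, avg * 4
}

func main() {
	min, max := chunkBounds(64 * 1024) // 64KB average
	fmt.Printf("min=%dKB max=%dKB\n", min/1024, max/1024)
	// min=16KB max=256KB
}
```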
#### 8. Packer (`blob.Packer`)
- **Created by**: `blob.NewPacker(PackerConfig)`
- **When**: Inside `snapshot.NewScanner()`
- **Configuration**:
  - `MaxBlobSize`: Maximum blob size before finalization (typically 10GB)
  - `CompressionLevel`: zstd level (1-19)
  - `Recipients`: age public keys for encryption

```go
// snapshot/scanner.go: NewScanner()
packerCfg := blob.PackerConfig{
	MaxBlobSize:      cfg.MaxBlobSize,
	CompressionLevel: cfg.CompressionLevel,
	Recipients:       cfg.AgeRecipients,
	Repositories:     cfg.Repositories,
	Fs:               cfg.FS,
}
packer, err := blob.NewPacker(packerCfg)
```

## Module Responsibilities

### `internal/cli`
Entry point for the fx application. Combines all modules and handles signal interrupts.

Key functions:
- `NewApp(AppOptions)` → Creates fx.App with all modules
- `RunApp(ctx, app)` → Starts the app, handles graceful shutdown
- `RunWithApp(ctx, opts)` → Convenience wrapper

### `internal/vaultik`
Main orchestrator containing all dependencies and command implementations.

Key methods:
- `New(VaultikParams)` → Constructor (fx DI)
- `CreateSnapshot(opts)` → Main backup operation
- `ListSnapshots(jsonOutput)` → List available snapshots
- `VerifySnapshot(id, deep)` → Verify snapshot integrity
- `PurgeSnapshots(...)` → Remove old snapshots

### `internal/chunker`
Content-defined chunking using the FastCDC algorithm.

Key types:
- `Chunk` → Hash, Data, Offset, Size
- `Chunker` → avgChunkSize, minChunkSize, maxChunkSize

Key methods:
- `NewChunker(avgChunkSize)` → Constructor
- `ChunkReaderStreaming(reader, callback)` → Stream chunks with a callback (preferred)
- `ChunkReader(reader)` → Return all chunks at once (memory-intensive)

### `internal/blob`
Blob packing: accumulates chunks, compresses, encrypts, tracks metadata.

Key types:
- `Packer` → Thread-safe blob accumulator
- `ChunkRef` → Hash + Data for adding to the packer
- `FinishedBlob` → Completed blob ready for upload
- `BlobWithReader` → FinishedBlob + io.Reader for streaming upload

Key methods:
- `NewPacker(PackerConfig)` → Constructor
- `AddChunk(ChunkRef)` → Add a chunk to the current blob
- `FinalizeBlob()` → Compress, encrypt, hash the current blob
- `Flush()` → Finalize any in-progress blob
- `SetBlobHandler(func)` → Set callback for upload

### `internal/snapshot`

#### Scanner
Orchestrates the backup process for a directory.

Key methods:
- `NewScanner(ScannerConfig)` → Constructor (creates Chunker + Packer)
- `Scan(ctx, path, snapshotID)` → Main scan operation

Scan phases:
1. **Phase 0**: Detect deleted files from previous snapshots
2. **Phase 1**: Walk the directory, identify files needing processing
3. **Phase 2**: Process files (chunk → pack → upload)

#### SnapshotManager
Manages snapshot lifecycle and metadata export.

Key methods:
- `CreateSnapshot(ctx, hostname, version, commit)` → Create snapshot record
- `CompleteSnapshot(ctx, snapshotID)` → Mark snapshot complete
- `ExportSnapshotMetadata(ctx, dbPath, snapshotID)` → Export to S3
- `CleanupIncompleteSnapshots(ctx, hostname)` → Remove failed snapshots

### `internal/database`
SQLite database for the local index. Single-writer mode for thread safety.

Key types:
- `DB` → Database connection wrapper
- `Repositories` → Collection of all repository interfaces

Repository interfaces:
- `FilesRepository` → CRUD for File records
- `ChunksRepository` → CRUD for Chunk records
- `BlobsRepository` → CRUD for Blob records
- `SnapshotsRepository` → CRUD for Snapshot records
- Plus join-table repositories (FileChunks, BlobChunks, etc.)

## Snapshot Creation Flow

```
CreateSnapshot(opts)
│
├─► CleanupIncompleteSnapshots()          // Critical: avoid dedup errors
│
├─► SnapshotManager.CreateSnapshot()      // Create DB record
│
├─► For each source directory:
│   │
│   ├─► scanner.Scan(ctx, path, snapshotID)
│   │   │
│   │   ├─► Phase 0: detectDeletedFiles()
│   │   │
│   │   ├─► Phase 1: scanPhase()
│   │   │     Walk directory
│   │   │     Check file metadata changes
│   │   │     Build list of files to process
│   │   │
│   │   └─► Phase 2: processPhase()
│   │         For each file:
│   │           chunker.ChunkReaderStreaming()
│   │           For each chunk:
│   │             packer.AddChunk()
│   │             If blob full → FinalizeBlob()
│   │               → handleBlobReady()
│   │               → s3Client.PutObjectWithProgress()
│   │         packer.Flush()              // Final blob
│   │
│   └─► Accumulate statistics
│
├─► SnapshotManager.UpdateSnapshotStatsExtended()
│
├─► SnapshotManager.CompleteSnapshot()
│
└─► SnapshotManager.ExportSnapshotMetadata()
    │
    ├─► Copy database to temp file
    ├─► Clean to only current snapshot data
    ├─► Dump to SQL
    ├─► Compress with zstd
    ├─► Encrypt with age
    ├─► Upload db.zst.age to S3
    └─► Upload manifest.json.zst to S3
```

## Deduplication Strategy

1. **File-level**: Files unchanged since the last backup are skipped (metadata comparison: size, mtime, mode, uid, gid)

2. **Chunk-level**: Chunks are content-addressed by SHA256 hash. If a chunk hash already exists in the database, the chunk data is not re-uploaded.

3. **Blob-level**: Blobs contain only unique chunks. Duplicate chunks within a blob are skipped.

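The chunk-level check boils down to "hash it, look it up, record it". A sketch of that logic, with an in-memory map standing in for the SQLite chunks table (the real code consults the `ChunksRepository` instead):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// seenChunks stands in for the chunks table in the local index; this is a
// sketch, not the project's actual storage.
var seenChunks = map[string]bool{}

// shouldUpload reports whether a chunk's data needs uploading, recording its
// content hash so later duplicates are skipped (chunk-level deduplication).
func shouldUpload(data []byte) bool {
	sum := sha256.Sum256(data)
	hash := hex.EncodeToString(sum[:])
	if seenChunks[hash] {
		return false
	}
	seenChunks[hash] = true
	return true
}

func main() {
	fmt.Println(shouldUpload([]byte("hello"))) // true: first sighting
	fmt.Println(shouldUpload([]byte("hello"))) // false: deduplicated
}
```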
## Storage Layout in S3

```
bucket/
├── blobs/
│   └── {hash[0:2]}/
│       └── {hash[2:4]}/
│           └── {full-hash}       # Compressed+encrypted blob
│
└── metadata/
    └── {snapshot-id}/
        ├── db.zst.age            # Encrypted database dump
        └── manifest.json.zst     # Blob list (for verification)
```

## Thread Safety

- `Packer`: Thread-safe via mutex. Multiple goroutines can call `AddChunk()`.
- `Scanner`: Uses a `packerMu` mutex to coordinate blob finalization.
- `Database`: Single-writer mode (`MaxOpenConns=1`) ensures SQLite thread safety.
- `Repositories.WithTx()`: Handles the transaction lifecycle automatically.
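The mutex pattern behind the thread-safe `Packer` can be sketched as follows. This is a minimal illustration of the locking discipline only; the real `blob.Packer` also compresses, encrypts, and tracks chunk metadata:

```go
package main

import (
	"fmt"
	"sync"
)

// packer is a stripped-down stand-in for blob.Packer, showing only the
// mutex that serializes access to the in-progress blob state.
type packer struct {
	mu     sync.Mutex
	chunks int
}

// AddChunk is safe for concurrent use: the lock ensures that appending to
// the current blob and checking its size happen atomically.
func (p *packer) AddChunk() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.chunks++
}

func main() {
	p := &packer{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			p.AddChunk()
		}()
	}
	wg.Wait()
	fmt.Println(p.chunks) // 100
}
```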
**CLAUDE.md** (+10 lines)

@@ -26,3 +26,13 @@ Read the rules in AGENTS.md and follow them.

* Do not stop working on a task until you have reached the definition of
  done provided to you in the initial instruction. Don't do part or most of
  the work; do all of the work until the criteria for done are met.

* We do not need to support migrations; schema upgrades can be handled by
  deleting the local state file and doing a full backup to re-create it.

* When testing on 2.5Gbit/s ethernet to an S3 server backed by a 2000MB/sec SSD,
  estimate about 4 seconds per gigabyte of backup time.

* When running tests, never run individual tests or grep the output, and never
  run `go test` directly. Always run `make test` to execute the full suite and
  read the complete output.
**DESIGN.md** (12 lines changed)

````diff
@@ -125,7 +125,8 @@ This allows pruning operations to determine which blobs are referenced without r
 ```sql
 CREATE TABLE files (
-    path TEXT PRIMARY KEY,
+    id TEXT PRIMARY KEY,        -- UUID
+    path TEXT NOT NULL UNIQUE,
     mtime INTEGER NOT NULL,
     size INTEGER NOT NULL
 );
@@ -133,10 +134,10 @@ CREATE TABLE files (
 -- Maps files to their constituent chunks in sequence order
 -- Used for reconstructing files from chunks during restore
 CREATE TABLE file_chunks (
-    path TEXT NOT NULL,
+    file_id TEXT NOT NULL,
     idx INTEGER NOT NULL,
     chunk_hash TEXT NOT NULL,
-    PRIMARY KEY (path, idx)
+    PRIMARY KEY (file_id, idx)
 );

 CREATE TABLE chunks (
@@ -163,16 +164,17 @@ CREATE TABLE blob_chunks (
 -- Used for deduplication and tracking chunk usage across files
 CREATE TABLE chunk_files (
     chunk_hash TEXT NOT NULL,
-    file_path TEXT NOT NULL,
+    file_id TEXT NOT NULL,
     file_offset INTEGER NOT NULL,
     length INTEGER NOT NULL,
-    PRIMARY KEY (chunk_hash, file_path)
+    PRIMARY KEY (chunk_hash, file_id)
 );

 CREATE TABLE snapshots (
     id TEXT PRIMARY KEY,
     hostname TEXT NOT NULL,
     vaultik_version TEXT NOT NULL,
     vaultik_git_revision TEXT NOT NULL,
     created_ts INTEGER NOT NULL,
     file_count INTEGER NOT NULL,
     chunk_count INTEGER NOT NULL,
````
**LICENSE** (new file, +21 lines)

MIT License

Copyright (c) 2025 Jeffrey Paul sneak@sneak.berlin

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
**Makefile** (16 lines changed)

```diff
@@ -1,19 +1,27 @@
 .PHONY: test fmt lint build clean all

-# Version number
-VERSION := 0.0.1
+# Build variables
+VERSION := $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev")
-COMMIT := $(shell git rev-parse HEAD 2>/dev/null || echo "unknown")
+GIT_REVISION := $(shell git rev-parse HEAD 2>/dev/null || echo "unknown")

 # Linker flags
 LDFLAGS := -X 'git.eeqj.de/sneak/vaultik/internal/globals.Version=$(VERSION)' \
-           -X 'git.eeqj.de/sneak/vaultik/internal/globals.Commit=$(COMMIT)'
+           -X 'git.eeqj.de/sneak/vaultik/internal/globals.Commit=$(GIT_REVISION)'

 # Default target
 all: test

 # Run tests
 test: lint fmt-check
-	go test -v ./...
+	@echo "Running tests..."
+	@if ! go test -v -timeout 10s ./... 2>&1; then \
+		echo ""; \
+		echo "TEST FAILURES DETECTED"; \
+		echo "Run 'go test -v ./internal/database' to see database test details"; \
+		exit 1; \
+	fi

 # Check if code is formatted
 fmt-check:
```
**README.md** (127 lines changed)

```diff
@@ -1,11 +1,25 @@
-# vaultik
+# vaultik (ваултик)

 `vaultik` is an incremental backup daemon written in Go. It
 encrypts data using an `age` public key and uploads each encrypted blob
 directly to a remote S3-compatible object store. It requires no private
 keys, secrets, or credentials stored on the backed-up system.

----
+It includes table-stakes features such as:
+
+* modern authenticated encryption
+* deduplication
+* incremental backups
+* modern multithreaded zstd compression with configurable levels
+* content-addressed immutable storage
+* local state tracking in a standard SQLite database
+* inotify-based change detection
+* streaming processing of all data, so large amounts of RAM or temp file
+  storage are not required
+* no mutable remote metadata
+* no plaintext file paths or metadata stored in the remote
+* does not create huge numbers of small files (to keep S3 operation counts
+  down) even if the source system has many small files

 ## what
```
```diff
@@ -15,27 +29,29 @@ Each chunk is streamed into a blob packer. Blobs are compressed with `zstd`,
 encrypted with `age`, and uploaded directly to remote storage under a
 content-addressed S3 path.

-No plaintext file contents ever hit disk. No private key is needed or stored
-locally. All encrypted data is streaming-processed and immediately discarded
-once uploaded. Metadata is encrypted and pushed with the same mechanism.
+No plaintext file contents ever hit disk. No private key or secret
+passphrase is needed or stored locally. All encrypted data is
+streaming-processed and immediately discarded once uploaded. Metadata is
+encrypted and pushed with the same mechanism.

 ## why

 Existing backup software fails under one or more of these conditions:

-* Requires secrets (passwords, private keys) on the source system
+* Requires secrets (passwords, private keys) on the source system, which
+  compromises encrypted backups in the case of host system compromise
 * Depends on symmetric encryption unsuitable for zero-trust environments
 * Stages temporary archives or repositories
 * Writes plaintext metadata or plaintext file paths
 * Creates one-blob-per-file, which results in excessive S3 operation counts

-`vaultik` addresses all of these by using:
+`vaultik` addresses these by using:

 * Public-key-only encryption (via `age`) requires no secrets (other than
-  bucket access key) on the source system
-* Blob-level deduplication and batching
-* Local state cache for incremental detection
-* S3-native chunked upload interface
-* Self-contained encrypted snapshot metadata
+  remote storage api key) on the source system
+* Local state cache for incremental detection does not require reading from
+  or decrypting remote storage
+* Content-addressed immutable storage allows efficient deduplication
+* Storage only of large encrypted blobs of configurable size (1G by default)
+  reduces S3 operation counts and improves performance

 ## how
```
```diff
@@ -61,8 +77,9 @@ Existing backup software fails under one or more of these conditions:
 exclude:
   - '*.log'
   - '*.tmp'
-age_recipient: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+age_recipient: age1278m9q7dp3chsh2dcy82qk27v047zywyvtxwnj4cvt0z65jw6a7q5dqhfj
 s3:
+  # endpoint is optional if using AWS S3, but who even does that?
   endpoint: https://s3.example.com
   bucket: vaultik-data
   prefix: host1/
```
````diff
@@ -73,24 +90,30 @@ Existing backup software fails under one or more of these conditions:
 full_scan_interval: 24h     # normally we use inotify to mark dirty, but
                             # every 24h we do a full stat() scan
 min_time_between_run: 15m   # again, only for daemon mode
-index_path: /var/lib/vaultik/index.sqlite
+#index_path: /var/lib/vaultik/index.sqlite
 chunk_size: 10MB
 blob_size_limit: 10GB
 index_prefix: index/
 ```

 4. **run**

 ```sh
-vaultik backup /etc/vaultik.yaml
+vaultik --config /etc/vaultik.yaml snapshot create
 ```

 ```sh
-vaultik backup /etc/vaultik.yaml --cron # silent unless error
+vaultik --config /etc/vaultik.yaml snapshot create --cron # silent unless error
 ```

 ```sh
-vaultik backup /etc/vaultik.yaml --daemon # runs in background, uses inotify
+vaultik --config /etc/vaultik.yaml snapshot daemon # runs continuously in foreground, uses inotify to detect changes
+
+# TODO
+# * make sure daemon mode does not make a snapshot if no files have
+#   changed, even if the backup_interval has passed
+# * in daemon mode, if we are long enough since the last snapshot event, and
+#   we get an inotify event, we should schedule the next snapshot creation
+#   for 10 minutes from the time of the mark-dirty event.
 ```

 ---
````
````diff
@@ -100,26 +123,48 @@ Existing backup software fails under one or more of these conditions:
 ### commands

 ```sh
-vaultik backup [--config <path>] [--cron] [--daemon]
+vaultik [--config <path>] snapshot create [--cron] [--daemon]
+vaultik [--config <path>] snapshot list [--json]
+vaultik [--config <path>] snapshot purge [--keep-latest | --older-than <duration>] [--force]
+vaultik [--config <path>] snapshot verify <snapshot-id> [--deep]
+vaultik [--config <path>] store info
+# FIXME: remove 'bucket' and 'prefix' and 'snapshot' flags. it should be
+# 'vaultik restore snapshot <snapshot> --target <dir>'. bucket and prefix are always
+# from config file.
 vaultik restore --bucket <bucket> --prefix <prefix> --snapshot <id> --target <dir>
+# FIXME: remove prune, it's the old version of "snapshot purge"
 vaultik prune --bucket <bucket> --prefix <prefix> [--dry-run]
+# FIXME: change fetch to 'vaultik restore path <snapshot> <path> --target <path>'
 vaultik fetch --bucket <bucket> --prefix <prefix> --snapshot <id> --file <path> --target <path>
+# FIXME: remove this, it's redundant with 'snapshot verify'
 vaultik verify --bucket <bucket> --prefix <prefix> [--snapshot <id>] [--quick]
 ```

 ### environment

 * `VAULTIK_PRIVATE_KEY`: Required for `restore`, `prune`, `fetch`, and `verify` commands. Contains the age private key for decryption.
-* `VAULTIK_CONFIG`: Optional path to config file. If set, `vaultik backup` can be run without specifying the config file path.
+* `VAULTIK_CONFIG`: Optional path to config file. If set, the config file path doesn't need to be specified on the command line.

 ### command details

-**backup**: Perform incremental backup of configured directories
+**snapshot create**: Perform incremental backup of configured directories
+* Config is located at `/etc/vaultik/config.yml` by default
 * `--config`: Override config file path
 * `--cron`: Silent unless error (for crontab)
 * `--daemon`: Run continuously with inotify monitoring and periodic scans

+**snapshot list**: List all snapshots with their timestamps and sizes
+* `--json`: Output in JSON format
+
+**snapshot purge**: Remove old snapshots based on criteria
+* `--keep-latest`: Keep only the most recent snapshot
+* `--older-than`: Remove snapshots older than duration (e.g., 30d, 6mo, 1y)
+* `--force`: Skip confirmation prompt
+
+**snapshot verify**: Verify snapshot integrity
+* `--deep`: Download and verify blob hashes (not just existence)
+
+**store info**: Display S3 bucket configuration and storage statistics
+
 **restore**: Restore entire snapshot to target directory
 * Downloads and decrypts metadata
 * Fetches only required blobs
````
```diff
@@ -245,38 +290,24 @@ This enables garbage collection from immutable storage.

 ---

-## license
+## LICENSE

-WTFPL — see LICENSE.
+[MIT](https://opensource.org/license/mit/)

 ---

-## security considerations
-
-* Source host compromise cannot decrypt backups
-* No replay attacks possible (append-only)
-* Each blob independently encrypted
-* Metadata tampering detectable via hash verification
-* S3 credentials only allow write access to backup prefix
-
-## performance
-
-* Streaming processing (no temp files)
-* Parallel blob uploads
-* Deduplication reduces storage and bandwidth
-* Local index enables fast incremental detection
-* Configurable compression levels
-
 ## requirements

 * Go 1.24.4 or later
 * S3-compatible object storage
 * age command-line tool (for key generation)
 * SQLite3
-* Sufficient disk space for local index
+* Sufficient disk space for local index (typically <1GB)

 ## author

-sneak
-[sneak@sneak.berlin](mailto:sneak@sneak.berlin)
-[https://sneak.berlin](https://sneak.berlin)
+Made with love and lots of expensive SOTA AI by [sneak](https://sneak.berlin) in Berlin in the summer of 2025.
+
+Released as a free software gift to the world, no strings attached.
+
+Contact: [sneak@sneak.berlin](mailto:sneak@sneak.berlin)
+
+[https://keys.openpgp.org/vks/v1/by-fingerprint/5539AD00DE4C42F3AFE11575052443F4DF2A55C2](https://keys.openpgp.org/vks/v1/by-fingerprint/5539AD00DE4C42F3AFE11575052443F4DF2A55C2)
```
**TODO-verify.md** (new file, +86 lines)

# TODO: Implement Verify Command

## Overview
Implement the `verify` command to check snapshot integrity. Both shallow and deep verification require the age_secret_key from config to decrypt the database index.

## Implementation Steps

### 1. Update Config Structure
- Add `AgeSecretKey string` field to the Config struct in `internal/config/config.go`
- Add the corresponding `age_secret_key` YAML tag
- Ensure the field is properly loaded from the config file

### 2. Remove Command Line Flags
- Remove --bucket, --prefix, and --snapshot flags from:
  - `internal/cli/verify.go`
  - `internal/cli/restore.go`
  - `internal/cli/fetch.go`
- Update all commands to use bucket/prefix from config instead of flags
- Update the verify command to take the snapshot ID as its first positional argument

### 3. Implement Shallow Verification
**Requires age_secret_key from config**

1. Download from S3:
   - `metadata/{snapshot-id}/manifest.json.zst`
   - `metadata/{snapshot-id}/db.zst.age`

2. Process files:
   - Decompress manifest (not encrypted)
   - Decrypt db.zst.age using age_secret_key
   - Decompress the decrypted database
   - Load SQLite database from dump

3. Verify integrity:
   - Query the snapshot_blobs table for all blobs in this snapshot
   - Compare the DB blob list against the manifest blob list
   - **FAIL IMMEDIATELY** if the lists don't match exactly
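The list comparison in step 3 is an exact set-equality check. A sketch of how it might look, with illustrative names (the real command would source these lists from the decrypted SQLite dump and manifest.json):

```go
package main

import (
	"fmt"
	"sort"
)

// sameBlobSet reports whether the blob IDs recorded in the database match
// the manifest's blob list exactly, ignoring order. Any difference in either
// direction (missing or extra blobs) is a verification failure.
func sameBlobSet(dbBlobs, manifestBlobs []string) bool {
	if len(dbBlobs) != len(manifestBlobs) {
		return false
	}
	a := append([]string(nil), dbBlobs...)
	b := append([]string(nil), manifestBlobs...)
	sort.Strings(a)
	sort.Strings(b)
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(sameBlobSet([]string{"b1", "b2"}, []string{"b2", "b1"})) // true
	fmt.Println(sameBlobSet([]string{"b1"}, []string{"b1", "b3"}))       // false
}
```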
4. For each blob in manifest:
|
||||
- Use S3 HeadObject to check existence
|
||||
- **FAIL IMMEDIATELY** if blob is missing
|
||||
- Verify blob hash matches filename
|
||||
- **FAIL IMMEDIATELY** if hash mismatch
|
||||
|
||||
5. Only report success if ALL checks pass
|
||||

### 4. Implement Deep Verification

**Requires age_secret_key from config**

1. Run all shallow verification first (fail on any error)

2. For each blob referenced in the snapshot:
   - Download the blob from S3
   - Decrypt using age_secret_key (streaming)
   - Decompress (streaming)
   - Parse the blob structure to extract chunks

3. For each chunk in the blob:
   - Calculate the SHA256 of the chunk data
   - Query the database for the expected chunk hash
   - **FAIL IMMEDIATELY** if calculated != expected
   - Verify chunks are ordered correctly by offset
   - **FAIL IMMEDIATELY** if chunks are out of order

4. Progress reporting:
   - Show blob-by-blob progress
   - Show chunk verification within each blob
   - Continue only while no errors have occurred

5. Only report success if ALL blobs and ALL chunks verify

### 5. Error Handling

- **FAIL IMMEDIATELY** if age_secret_key is missing from config
- **FAIL IMMEDIATELY** on decryption failure
- **FAIL IMMEDIATELY** on any verification mismatch
- Use `log.Fatal()` or return an error to ensure a non-zero exit code
- Provide clear error messages indicating exactly what failed

## Success Criteria

- The verify command exits with code 0 only if ALL checks pass
- Any failure results in a non-zero exit code
- Clear error messages for each failure type
- Progress reporting during verification
- Works with remote-only snapshots (not in the local DB)

109
TODO.md
@@ -1,49 +1,92 @@

# Implementation TODO

## Local Index Database

1. Implement SQLite schema creation
1. Create Index type with all database operations
1. Add transaction support and proper locking
1. Implement file tracking (save, lookup, delete)
1. Implement chunk tracking and deduplication
1. Implement blob tracking and chunk-to-blob mapping
1. Write tests for all index operations

## Proposed: Store and Snapshot Commands

### Overview

Reorganize commands to provide better visibility into stored data and snapshots.

### Command Structure

#### `vaultik store` - Storage information commands

- `vaultik store info`
  - Lists S3 bucket configuration
  - Shows total number of snapshots (from metadata/ listing)
  - Shows total number of blobs (from blobs/ listing)
  - Shows total size of all blobs
  - **No decryption required** - uses S3 listing only

#### `vaultik snapshot` - Snapshot management commands

- `vaultik snapshot create [path]`
  - Renamed from `vaultik backup`
  - Same functionality as the current backup command

- `vaultik snapshot list [--json]`
  - Lists all snapshots with:
    - Snapshot ID
    - Creation timestamp (parsed from snapshot ID)
    - Compressed size (sum of referenced blob sizes from manifest)
  - **No decryption required** - uses blob manifests only
  - `--json` flag outputs JSON instead of a table

- `vaultik snapshot purge`
  - Requires one of:
    - `--keep-latest` - keeps only the most recent snapshot
    - `--older-than <duration>` - removes snapshots older than the duration (e.g., "30d", "6m", "1y")
  - Removes snapshot metadata and runs pruning to clean up unreferenced blobs
  - Shows what would be deleted and requires confirmation

- `vaultik snapshot verify [--deep] <snapshot-id>`
  - Basic mode: verifies all blobs referenced in the manifest exist in S3
  - `--deep` mode: downloads each blob and verifies its hash matches the stored hash
  - **Stub implementation for now**

### Implementation Notes

1. **No Decryption Required**: All commands work with unencrypted blob manifests
2. **Blob Manifests**: Located at `metadata/{snapshot-id}/manifest.json.zst`
3. **S3 Operations**: Use S3 ListObjects to enumerate snapshots and blobs
4. **Size Calculations**: Sum blob sizes from S3 object metadata
5. **Timestamp Parsing**: Extract from the snapshot ID format (e.g., `myserver-20240115-143052Z`)
6. **S3 Metadata**: Only used for the `snapshot verify` command

### Benefits

- Users can see storage usage without decryption keys
- Snapshot management doesn't require access to encrypted metadata
- Clean separation between storage info and snapshot operations

## Chunking and Hashing

1. Implement Rabin fingerprint chunker
1. Create streaming chunk processor
1. Implement SHA256 hashing for chunks
1. Add configurable chunk size parameters
1. Write tests for chunking consistency
1. ~~Implement content-defined chunking~~ (done with FastCDC)
1. ~~Create streaming chunk processor~~ (done in chunker)
1. ~~Implement SHA256 hashing for chunks~~ (done in scanner)
1. ~~Add configurable chunk size parameters~~ (done in scanner)
1. ~~Write tests for chunking consistency~~ (done)

## Compression and Encryption

1. Implement zstd compression wrapper
1. Integrate age encryption library
1. Create Encryptor type for public key encryption
1. Create Decryptor type for private key decryption
1. Implement streaming encrypt/decrypt pipelines
1. Write tests for compression and encryption
1. ~~Implement compression~~ (done with zstd in blob packer)
1. ~~Integrate age encryption library~~ (done in crypto package)
1. ~~Create Encryptor type for public key encryption~~ (done)
1. ~~Implement streaming encrypt/decrypt pipelines~~ (done in packer)
1. ~~Write tests for compression and encryption~~ (done)

## Blob Packing

1. Implement BlobWriter with size limits
1. Add chunk accumulation and flushing
1. Create blob hash calculation
1. Implement proper error handling and rollback
1. Write tests for blob packing scenarios
1. ~~Implement BlobWriter with size limits~~ (done in packer)
1. ~~Add chunk accumulation and flushing~~ (done)
1. ~~Create blob hash calculation~~ (done)
1. ~~Implement proper error handling and rollback~~ (done with transactions)
1. ~~Write tests for blob packing scenarios~~ (done)

## S3 Operations

1. Integrate MinIO client library
1. Implement S3Client wrapper type
1. Add multipart upload support for large blobs
1. Implement retry logic with exponential backoff
1. Add connection pooling and timeout handling
1. Write tests using MinIO container
1. ~~Integrate MinIO client library~~ (done in s3 package)
1. ~~Implement S3Client wrapper type~~ (done)
1. ~~Add multipart upload support for large blobs~~ (done - using standard upload)
1. ~~Implement retry logic~~ (handled by MinIO client)
1. ~~Write tests using MinIO container~~ (done with testcontainers)

## Backup Command - Basic

1. Implement directory walking with exclusion patterns
1. ~~Implement directory walking with exclusion patterns~~ (done with afero)
1. Add file change detection using index
1. Integrate chunking pipeline for changed files
1. Implement blob upload coordination
1. ~~Integrate chunking pipeline for changed files~~ (done in scanner)
1. Implement blob upload coordination to S3
1. Add progress reporting to stderr
1. Write integration tests for backup

140
config.example.yml
Normal file
@@ -0,0 +1,140 @@

# vaultik configuration file example
# This file shows all available configuration options with their default values
# Copy this file and uncomment/modify the values you need

# Age recipient public key for encryption
# This is REQUIRED - backups are encrypted to this public key
# Generate with: age-keygen | grep "public key"
age_recipient: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# List of directories to backup
# These paths will be scanned recursively for files to backup
# Use absolute paths
source_dirs:
  - /
  # - /home
  # - /etc
  # - /var

# Patterns to exclude from backup
# Uses glob patterns to match file paths
# Paths are matched as absolute paths
exclude:
  # System directories that should not be backed up
  - /proc
  - /sys
  - /dev
  - /run
  - /tmp
  - /var/tmp
  - /var/run
  - /var/lock
  - /var/cache
  - /lost+found
  - /media
  - /mnt
  # Swap files
  - /swapfile
  - /swap.img
  - "*.swap"
  - "*.swp"
  # Log files (optional - you may want to keep some logs)
  - "*.log"
  - "*.log.*"
  - /var/log
  # Package manager caches
  - /var/cache/apt
  - /var/cache/yum
  - /var/cache/dnf
  - /var/cache/pacman
  # User caches and temporary files
  - "*/.cache"
  - "*/.local/share/Trash"
  - "*/Downloads"
  - "*/.thumbnails"
  # Development artifacts
  - "**/node_modules"
  - "**/.git/objects"
  - "**/target"
  - "**/build"
  - "**/__pycache__"
  - "**/*.pyc"
  # Large files you might not want to backup
  - "*.iso"
  - "*.img"
  - "*.vmdk"
  - "*.vdi"
  - "*.qcow2"
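Matching paths against exclude patterns like the ones above can be sketched as follows. Note the assumptions: `excluded` is a hypothetical helper, and Go's `path/filepath.Match` has no `**` support, so recursive patterns such as `**/node_modules` would need a doublestar-style matcher in a real implementation; this sketch only approximates them by also matching the path's base name:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// excluded reports whether path matches any exclude entry. Plain
// directory entries like /proc exclude everything beneath them; glob
// entries use filepath.Match, with "**/"-prefixed patterns
// approximated by matching against the path's base name.
func excluded(path string, patterns []string) bool {
	for _, p := range patterns {
		if !strings.ContainsAny(p, "*?[") {
			if path == p || strings.HasPrefix(path, p+"/") {
				return true
			}
			continue
		}
		if ok, _ := filepath.Match(p, path); ok {
			return true
		}
		if ok, _ := filepath.Match(strings.TrimPrefix(p, "**/"), filepath.Base(path)); ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"/proc", "*.iso", "**/node_modules"}
	fmt.Println(excluded("/proc/cpuinfo", patterns))    // true: under /proc
	fmt.Println(excluded("/home/u/disc.iso", patterns)) // true: base name matches *.iso
	fmt.Println(excluded("/etc/passwd", patterns))      // false
}
```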

# S3-compatible storage configuration
s3:
  # S3-compatible endpoint URL
  # Examples: https://s3.amazonaws.com, https://storage.googleapis.com
  endpoint: https://s3.example.com

  # Bucket name where backups will be stored
  bucket: my-backup-bucket

  # Prefix (folder) within the bucket for this host's backups
  # Useful for organizing backups from multiple hosts
  # Default: empty (root of bucket)
  #prefix: "hosts/myserver/"

  # S3 access credentials
  access_key_id: your-access-key
  secret_access_key: your-secret-key

  # S3 region
  # Default: us-east-1
  #region: us-east-1

  # Use SSL/TLS for S3 connections
  # Default: true
  #use_ssl: true

  # Part size for multipart uploads
  # Minimum 5MB, affects memory usage during upload
  # Supports: 5MB, 10M, 100MiB, etc.
  # Default: 5MB
  #part_size: 5MB

# How often to run backups in daemon mode
# Format: 1h, 30m, 24h, etc.
# Default: 1h
#backup_interval: 1h

# How often to do a full filesystem scan in daemon mode
# Between full scans, inotify is used to detect changes
# Default: 24h
#full_scan_interval: 24h

# Minimum time between backup runs in daemon mode
# Prevents backups from running too frequently
# Default: 15m
#min_time_between_run: 15m

# Path to local SQLite index database
# This database tracks file state for incremental backups
# Default: /var/lib/vaultik/index.sqlite
#index_path: /var/lib/vaultik/index.sqlite

# Average chunk size for content-defined chunking
# Smaller chunks = better deduplication but more metadata
# Supports: 10MB, 5M, 1GB, 500KB, 64MiB, etc.
# Default: 10MB
#chunk_size: 10MB

# Maximum blob size
# Multiple chunks are packed into blobs up to this size
# Supports: 1GB, 10G, 500MB, 1GiB, etc.
# Default: 10GB
#blob_size_limit: 10GB

# Compression level (1-19)
# Higher = better compression but slower
# Default: 3
#compression_level: 3

# Hostname to use in backup metadata
# Default: system hostname
#hostname: myserver
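The human-readable sizes the comments above describe ("10MB", "5M", "64MiB", "1GiB") mix decimal and binary multipliers. A sketch of parsing them, assuming decimal suffixes mean powers of 1000 and `…iB` suffixes mean powers of 1024 (`parseSize` is a hypothetical helper, not the repository's parser):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts sizes like "10MB", "5M", "64MiB", or "1GiB" into
// bytes. Longer suffixes are checked first so "MiB" is not misread as
// "B". Fractional values are not handled in this sketch.
func parseSize(s string) (int64, error) {
	suffixes := []struct {
		name string
		mult int64
	}{
		{"KiB", 1 << 10}, {"MiB", 1 << 20}, {"GiB", 1 << 30},
		{"KB", 1e3}, {"MB", 1e6}, {"GB", 1e9},
		{"K", 1e3}, {"M", 1e6}, {"G", 1e9}, {"B", 1},
	}
	s = strings.TrimSpace(s)
	for _, suf := range suffixes {
		if strings.HasSuffix(s, suf.name) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suf.name), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * suf.mult, nil
		}
	}
	return strconv.ParseInt(s, 10, 64) // bare byte count
}

func main() {
	n, _ := parseSize("10MB")
	fmt.Println(n) // 10000000
}
```

The repo's go.mod lists `github.com/dustin/go-humanize`, whose `ParseBytes` covers the same notation; the sketch just makes the decimal-vs-binary distinction explicit.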
268
docs/DATAMODEL.md
Normal file
@@ -0,0 +1,268 @@

# Vaultik Data Model

## Overview

Vaultik uses a local SQLite database to track file metadata, chunk mappings, and blob associations during the backup process. This database serves as an index for incremental backups and enables efficient deduplication.

**Important Notes:**

- **No Migration Support**: Vaultik does not support database schema migrations. If the schema changes, the local database must be deleted and recreated by performing a full backup.
- **Version Compatibility**: In rare cases, you may need to use the same version of Vaultik to restore a backup as was used to create it. This ensures compatibility with the metadata format stored in S3.

## Database Tables

### 1. `files`

Stores metadata about files in the filesystem being backed up.

**Columns:**

- `id` (TEXT PRIMARY KEY) - UUID for the file record
- `path` (TEXT NOT NULL UNIQUE) - Absolute file path
- `mtime` (INTEGER NOT NULL) - Modification time as Unix timestamp
- `ctime` (INTEGER NOT NULL) - Change time as Unix timestamp
- `size` (INTEGER NOT NULL) - File size in bytes
- `mode` (INTEGER NOT NULL) - Unix file permissions and type
- `uid` (INTEGER NOT NULL) - User ID of file owner
- `gid` (INTEGER NOT NULL) - Group ID of file owner
- `link_target` (TEXT) - Symlink target path (NULL for regular files)

**Indexes:**

- `idx_files_path` on `path` for efficient lookups

**Purpose:** Tracks file metadata to detect changes between backup runs. Used for incremental backup decisions. The UUID primary key provides stable references that don't change if files are moved.

### 2. `chunks`

Stores information about content-defined chunks created from files.

**Columns:**

- `chunk_hash` (TEXT PRIMARY KEY) - SHA256 hash of chunk content
- `size` (INTEGER NOT NULL) - Chunk size in bytes

**Purpose:** Enables deduplication by tracking unique chunks across all files.

### 3. `file_chunks`

Maps files to their constituent chunks in order.

**Columns:**

- `file_id` (TEXT) - File ID (FK to files.id)
- `idx` (INTEGER) - Chunk index within file (0-based)
- `chunk_hash` (TEXT) - Chunk hash (FK to chunks.chunk_hash)
- PRIMARY KEY (`file_id`, `idx`)

**Purpose:** Allows reconstruction of files from chunks during restore.

### 4. `chunk_files`

Reverse mapping showing which files contain each chunk.

**Columns:**

- `chunk_hash` (TEXT) - Chunk hash (FK to chunks.chunk_hash)
- `file_id` (TEXT) - File ID (FK to files.id)
- `file_offset` (INTEGER) - Byte offset of chunk within file
- `length` (INTEGER) - Length of chunk in bytes
- PRIMARY KEY (`chunk_hash`, `file_id`)

**Purpose:** Supports efficient queries for chunk usage and deduplication statistics.

### 5. `blobs`

Stores information about packed, compressed, and encrypted blob files.

**Columns:**

- `id` (TEXT PRIMARY KEY) - UUID assigned when blob creation starts
- `blob_hash` (TEXT UNIQUE) - SHA256 hash of final blob (NULL until finalized)
- `created_ts` (INTEGER NOT NULL) - Creation timestamp
- `finished_ts` (INTEGER) - Finalization timestamp (NULL if in progress)
- `uncompressed_size` (INTEGER NOT NULL DEFAULT 0) - Total size of chunks before compression
- `compressed_size` (INTEGER NOT NULL DEFAULT 0) - Size after compression and encryption
- `uploaded_ts` (INTEGER) - Upload completion timestamp (NULL if not uploaded)

**Purpose:** Tracks blob lifecycle from creation through upload. The UUID primary key allows immediate association of chunks with blobs.

### 6. `blob_chunks`

Maps chunks to the blobs that contain them.

**Columns:**

- `blob_id` (TEXT) - Blob ID (FK to blobs.id)
- `chunk_hash` (TEXT) - Chunk hash (FK to chunks.chunk_hash)
- `offset` (INTEGER) - Byte offset of chunk within blob (before compression)
- `length` (INTEGER) - Length of chunk in bytes
- PRIMARY KEY (`blob_id`, `chunk_hash`)

**Purpose:** Enables chunk retrieval from blobs during restore operations.
### 7. `snapshots`

Tracks backup snapshots.

**Columns:**

- `id` (TEXT PRIMARY KEY) - Snapshot ID (format: hostname-YYYYMMDD-HHMMSSZ)
- `hostname` (TEXT) - Hostname where backup was created
- `vaultik_version` (TEXT) - Version of Vaultik used
- `vaultik_git_revision` (TEXT) - Git revision of Vaultik used
- `started_at` (INTEGER) - Start timestamp
- `completed_at` (INTEGER) - Completion timestamp (NULL if in progress)
- `file_count` (INTEGER) - Number of files in snapshot
- `chunk_count` (INTEGER) - Number of unique chunks
- `blob_count` (INTEGER) - Number of blobs referenced
- `total_size` (INTEGER) - Total size of all files
- `blob_size` (INTEGER) - Total size of all blobs (compressed)
- `blob_uncompressed_size` (INTEGER) - Total uncompressed size of all referenced blobs
- `compression_ratio` (REAL) - Compression ratio achieved
- `compression_level` (INTEGER) - Compression level used for this snapshot
- `upload_bytes` (INTEGER) - Total bytes uploaded during this snapshot
- `upload_duration_ms` (INTEGER) - Total milliseconds spent uploading to S3

**Purpose:** Provides snapshot metadata and statistics including version tracking for compatibility.

### 8. `snapshot_files`

Maps snapshots to the files they contain.

**Columns:**

- `snapshot_id` (TEXT) - Snapshot ID (FK to snapshots.id)
- `file_id` (TEXT) - File ID (FK to files.id)
- PRIMARY KEY (`snapshot_id`, `file_id`)

**Purpose:** Records which files are included in each snapshot.

### 9. `snapshot_blobs`

Maps snapshots to the blobs they reference.

**Columns:**

- `snapshot_id` (TEXT) - Snapshot ID (FK to snapshots.id)
- `blob_id` (TEXT) - Blob ID (FK to blobs.id)
- `blob_hash` (TEXT) - Denormalized blob hash for manifest generation
- PRIMARY KEY (`snapshot_id`, `blob_id`)

**Purpose:** Tracks blob dependencies for snapshots and enables manifest generation.

### 10. `uploads`

Tracks blob upload metrics.

**Columns:**

- `blob_hash` (TEXT PRIMARY KEY) - Hash of uploaded blob
- `snapshot_id` (TEXT NOT NULL) - The snapshot that triggered this upload (FK to snapshots.id)
- `uploaded_at` (INTEGER) - Upload timestamp
- `size` (INTEGER) - Size of uploaded blob
- `duration_ms` (INTEGER) - Upload duration in milliseconds

**Purpose:** Performance monitoring and tracking which blobs were newly created (uploaded) during each snapshot.

## Data Flow and Operations

### 1. Backup Process

1. **File Scanning**
   - `INSERT OR REPLACE INTO files` - Update file metadata
   - `SELECT * FROM files WHERE path = ?` - Check if file has changed
   - `INSERT INTO snapshot_files` - Add file to current snapshot

2. **Chunking** (for changed files)
   - `INSERT OR IGNORE INTO chunks` - Store new chunks
   - `INSERT INTO file_chunks` - Map chunks to file
   - `INSERT INTO chunk_files` - Create reverse mapping

3. **Blob Packing**
   - `INSERT INTO blobs` - Create blob record with UUID (blob_hash NULL)
   - `INSERT INTO blob_chunks` - Associate chunks with blob immediately
   - `UPDATE blobs SET blob_hash = ?, finished_ts = ?` - Finalize blob after packing

4. **Upload**
   - `UPDATE blobs SET uploaded_ts = ?` - Mark blob as uploaded
   - `INSERT INTO uploads` - Record upload metrics with snapshot_id
   - `INSERT INTO snapshot_blobs` - Associate blob with snapshot

5. **Snapshot Completion**
   - `UPDATE snapshots SET completed_at = ?, stats...` - Finalize snapshot
   - Generate and upload blob manifest from `snapshot_blobs`

### 2. Incremental Backup

1. **Change Detection**
   - `SELECT * FROM files WHERE path = ?` - Get previous file metadata
   - Compare mtime, size, mode to detect changes
   - Skip unchanged files but still add to `snapshot_files`

2. **Chunk Reuse**
   - `SELECT * FROM blob_chunks WHERE chunk_hash = ?` - Find existing chunks
   - `INSERT INTO snapshot_blobs` - Reference existing blobs for unchanged files
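The change-detection comparison above can be sketched as a plain struct equality check; `fileMeta` and `changed` are illustrative names, not the repository's types:

```go
package main

import "fmt"

// fileMeta mirrors the columns compared during change detection.
type fileMeta struct {
	MTime int64  // modification time, Unix seconds
	Size  int64  // file size in bytes
	Mode  uint32 // permissions and file type
}

// changed reports whether a file must be re-chunked. prev is nil when
// the path has never been indexed, which always counts as changed.
func changed(prev *fileMeta, cur fileMeta) bool {
	if prev == nil {
		return true
	}
	return prev.MTime != cur.MTime || prev.Size != cur.Size || prev.Mode != cur.Mode
}

func main() {
	prev := fileMeta{MTime: 100, Size: 42, Mode: 0o644}
	fmt.Println(changed(&prev, prev)) // false: unchanged, existing chunks reused
	fmt.Println(changed(&prev, fileMeta{MTime: 200, Size: 42, Mode: 0o644})) // true: mtime differs
	fmt.Println(changed(nil, prev)) // true: never seen before
}
```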

### 3. Snapshot Metadata Export

After a snapshot is completed:

1. Copy database to temporary file
2. Clean temporary database to contain only current snapshot data
3. Export to SQL dump using sqlite3
4. Compress with zstd and encrypt with age
5. Upload to S3 as `metadata/{snapshot-id}/db.zst.age`
6. Generate blob manifest and upload as `metadata/{snapshot-id}/manifest.json.zst`

### 4. Restore Process

The restore process doesn't use the local database. Instead, it:

1. Downloads snapshot metadata from S3
2. Downloads required blobs based on the manifest
3. Reconstructs files from decrypted and decompressed chunks

### 5. Pruning

1. **Identify Unreferenced Blobs**
   - Query blobs not referenced by any remaining snapshot
   - Delete from S3 and local database

### 6. Incomplete Snapshot Cleanup

Before each backup:

1. Query incomplete snapshots (where `completed_at IS NULL`)
2. Check if metadata exists in S3
3. If no metadata, delete the snapshot and all associations
4. Clean up orphaned files, chunks, and blobs

## Repository Pattern

Vaultik uses a repository pattern for database access:

- `FileRepository` - CRUD operations for files and file metadata
- `ChunkRepository` - CRUD operations for content chunks
- `FileChunkRepository` - Manage file-to-chunk mappings
- `ChunkFileRepository` - Manage chunk-to-file reverse mappings
- `BlobRepository` - Manage blob lifecycle (creation, finalization, upload)
- `BlobChunkRepository` - Manage blob-to-chunk associations
- `SnapshotRepository` - Manage snapshots and their relationships
- `UploadRepository` - Track blob upload metrics

Each repository provides methods like:

- `Create()` - Insert new record
- `GetByID()` / `GetByPath()` / `GetByHash()` - Retrieve records
- `Update()` - Update existing records
- `Delete()` - Remove records
- Specialized queries for each entity type (e.g., `DeleteOrphaned()`, `GetIncompleteByHostname()`)

## Transaction Management

All database operations that modify multiple tables are wrapped in transactions:
```go
err := repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
    // Multiple repository operations using tx
    return nil
})
```

This ensures consistency, which is especially important for operations like:

- Creating file-chunk mappings
- Associating chunks with blobs
- Updating snapshot statistics

## Performance Considerations

1. **Indexes**:
   - Primary keys are automatically indexed
   - `idx_files_path` on `files(path)` for efficient file lookups

2. **Prepared Statements**: All queries use prepared statements for performance and security

3. **Batch Operations**: Where possible, operations are batched within transactions

4. **Write-Ahead Logging**: SQLite WAL mode is enabled for better concurrency

## Data Integrity

1. **Foreign Keys**: Enforced through CASCADE DELETE and application-level repository methods
2. **Unique Constraints**: Chunk hashes, file paths, and blob hashes are unique
3. **Null Handling**: Nullable fields clearly indicate in-progress operations
4. **Timestamp Tracking**: All major operations record timestamps for auditing
143
docs/REPOSTRUCTURE.md
Normal file
@@ -0,0 +1,143 @@

# Vaultik S3 Repository Structure

This document describes the structure and organization of data stored in the S3 bucket by Vaultik.

## Overview

Vaultik stores all backup data in an S3-compatible object store. The repository consists of two main components:

1. **Blobs** - The actual backup data (content-addressed, encrypted)
2. **Metadata** - Snapshot information and manifests (partially encrypted)

## Directory Structure

```
<bucket>/<prefix>/
├── blobs/
│   └── <hash[0:2]>/
│       └── <hash[2:4]>/
│           └── <full-hash>
└── metadata/
    └── <snapshot-id>/
        ├── db.zst.age
        └── manifest.json.zst
```

## Blobs Directory (`blobs/`)

### Structure

- **Path format**: `blobs/<first-2-chars>/<next-2-chars>/<full-hash>`
- **Example**: `blobs/ca/fe/cafebabe1234567890abcdef1234567890abcdef1234567890abcdef12345678`
- **Sharding**: The two-level directory structure (using the first 4 characters of the hash) prevents any single directory from containing too many objects
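The sharded key described above is a pure string operation; a minimal sketch (`blobKey` is a hypothetical helper name):

```go
package main

import "fmt"

// blobKey builds the sharded object key for a blob hash, using the
// first four hex characters as two directory levels.
func blobKey(hash string) string {
	return fmt.Sprintf("blobs/%s/%s/%s", hash[0:2], hash[2:4], hash)
}

func main() {
	h := "cafebabe1234567890abcdef1234567890abcdef1234567890abcdef12345678"
	fmt.Println(blobKey(h))
	// blobs/ca/fe/cafebabe1234567890abcdef1234567890abcdef1234567890abcdef12345678
}
```

With 4 hex characters of fan-out there are 65,536 possible shard directories, so even millions of blobs leave each prefix small when listing.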
### Content

- **What it contains**: Packed collections of content-defined chunks from files
- **Format**: Zstandard compressed, then Age encrypted
- **Encryption**: Always encrypted with Age using the configured recipients
- **Naming**: Content-addressed using SHA256 hash of the encrypted blob

### Why Encrypted

Blobs contain the actual file data from backups and must be encrypted for security. The content-addressing ensures deduplication while the encryption ensures privacy.

## Metadata Directory (`metadata/`)

Each snapshot has its own subdirectory named with the snapshot ID.

### Snapshot ID Format

- **Format**: `<hostname>-<YYYYMMDD>-<HHMMSSZ>`
- **Example**: `laptop-20240115-143052Z`
- **Components**:
  - Hostname (may contain hyphens)
  - Date in YYYYMMDD format
  - Time in HHMMSSZ format (Z indicates UTC)
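Because hostnames may contain hyphens, the ID must be split from the right rather than on the first hyphen. A sketch, assuming the date and time are always the last two hyphen-separated fields (`parseSnapshotID` is a hypothetical helper, not the repository's code):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseSnapshotID splits <hostname>-<YYYYMMDD>-<HHMMSSZ>, taking the
// timestamp from the last two hyphen-separated fields so hyphenated
// hostnames survive.
func parseSnapshotID(id string) (host string, ts time.Time, err error) {
	parts := strings.Split(id, "-")
	if len(parts) < 3 {
		return "", time.Time{}, fmt.Errorf("malformed snapshot id %q", id)
	}
	host = strings.Join(parts[:len(parts)-2], "-")
	stamp := parts[len(parts)-2] + "-" + strings.TrimSuffix(parts[len(parts)-1], "Z")
	ts, err = time.Parse("20060102-150405", stamp)
	return host, ts.UTC(), err
}

func main() {
	host, ts, _ := parseSnapshotID("my-laptop-20240115-143052Z")
	fmt.Println(host, ts.Format(time.RFC3339)) // my-laptop 2024-01-15T14:30:52Z
}
```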
### Files in Each Snapshot Directory

#### `db.zst.age` - Encrypted Database Dump

- **What it contains**: Complete SQLite database dump for this snapshot
- **Format**: SQL dump → Zstandard compressed → Age encrypted
- **Encryption**: Encrypted with Age
- **Purpose**: Contains full file metadata, chunk mappings, and all relationships
- **Why encrypted**: Contains sensitive metadata like file paths, permissions, and ownership

#### `manifest.json.zst` - Unencrypted Blob Manifest

- **What it contains**: JSON list of all blob hashes referenced by this snapshot
- **Format**: JSON → Zstandard compressed (NOT encrypted)
- **Encryption**: NOT encrypted
- **Purpose**: Enables pruning operations without requiring decryption keys
- **Structure**:

```json
{
  "snapshot_id": "laptop-20240115-143052Z",
  "timestamp": "2024-01-15T14:30:52Z",
  "blob_count": 42,
  "blobs": [
    "cafebabe1234567890abcdef1234567890abcdef1234567890abcdef12345678",
    "deadbeef1234567890abcdef1234567890abcdef1234567890abcdef12345678",
    ...
  ]
}
```

### Why Manifest is Unencrypted

The manifest must be readable without the private key to enable:

1. **Pruning operations** - Identifying unreferenced blobs for deletion
2. **Storage analysis** - Understanding space usage without decryption
3. **Verification** - Checking blob existence without decryption
4. **Cross-snapshot deduplication analysis** - Finding shared blobs between snapshots

The manifest only contains blob hashes, not file names or any other sensitive information.

## Security Considerations

### What's Encrypted

- **All file content** (in blobs)
- **All file metadata** (paths, permissions, timestamps, ownership in db.zst.age)
- **File-to-chunk mappings** (in db.zst.age)

### What's Not Encrypted

- **Blob hashes** (in manifest.json.zst)
- **Snapshot IDs** (directory names)
- **Blob count per snapshot** (in manifest.json.zst)

### Privacy Implications

From the unencrypted data, an observer can determine:

- When backups were taken (from snapshot IDs)
- Which hostname created backups (from snapshot IDs)
- How many blobs each snapshot references
- Which blobs are shared between snapshots (deduplication patterns)
- The size of each encrypted blob

An observer cannot determine:

- File names or paths
- File contents
- File permissions or ownership
- Directory structure
- Which chunks belong to which files

## Consistency Guarantees

1. **Blobs are immutable** - Once written, a blob is never modified
2. **Blobs are written before metadata** - A snapshot's metadata is only written after all its blobs are successfully uploaded
3. **Metadata is written atomically** - Both db.zst.age and manifest.json.zst are written as complete files
4. **Snapshots are marked complete in the local DB only after metadata upload** - Ensures consistency between local and remote state

## Pruning Safety

The prune operation is safe because:

1. It only deletes blobs not referenced in any manifest
2. Manifests are unencrypted and can be read without keys
3. The operation compares the latest local DB snapshot with the latest S3 snapshot to ensure consistency
4. Pruning will fail if these don't match, preventing accidental deletion of needed blobs

## Restoration Requirements

To restore from a backup, you need:

1. **The Age private key** - To decrypt blobs and the database
2. **The snapshot metadata** - Both files from the snapshot's metadata directory
3. **All referenced blobs** - As listed in the manifest

The restoration process:

1. Download and decrypt the database dump to understand the file structure
2. Download and decrypt the required blobs
3. Reconstruct files from their chunks
4. Restore file metadata (permissions, timestamps, etc.)
145
go.mod
@@ -3,26 +3,161 @@ module git.eeqj.de/sneak/vaultik

go 1.24.4

require (
	filippo.io/age v1.2.1
	git.eeqj.de/sneak/smartconfig v1.0.0
	github.com/aws/aws-sdk-go-v2 v1.36.6
	github.com/aws/aws-sdk-go-v2/config v1.29.18
	github.com/aws/aws-sdk-go-v2/credentials v1.17.71
	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.85
	github.com/aws/aws-sdk-go-v2/service/s3 v1.84.1
	github.com/aws/smithy-go v1.22.4
	github.com/dustin/go-humanize v1.0.1
	github.com/google/uuid v1.6.0
	github.com/johannesboyne/gofakes3 v0.0.0-20250603205740-ed9094be7668
	github.com/jotfs/fastcdc-go v0.2.0
	github.com/klauspost/compress v1.18.0
	github.com/spf13/afero v1.14.0
	github.com/spf13/cobra v1.9.1
	github.com/stretchr/testify v1.10.0
	go.uber.org/fx v1.24.0
	golang.org/x/term v0.33.0
	gopkg.in/yaml.v3 v3.0.1
	modernc.org/sqlite v1.38.0
)

require (
	cloud.google.com/go/auth v0.16.2 // indirect
	cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
	cloud.google.com/go/compute/metadata v0.7.0 // indirect
	cloud.google.com/go/iam v1.5.2 // indirect
	cloud.google.com/go/secretmanager v1.15.0 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets v0.12.0 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1 // indirect
	github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 // indirect
	github.com/adrg/xdg v0.5.3 // indirect
	github.com/armon/go-metrics v0.4.1 // indirect
	github.com/aws/aws-sdk-go v1.44.256 // indirect
	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 // indirect
	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.33 // indirect
	github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.37 // indirect
	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.37 // indirect
	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
	github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.37 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.5 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.18 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.18 // indirect
	github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.8 // indirect
	github.com/aws/aws-sdk-go-v2/service/sso v1.25.6 // indirect
	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.4 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.34.1 // indirect
|
||||
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
|
||||
github.com/coreos/go-semver v0.3.1 // indirect
|
||||
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
|
||||
github.com/fatih/color v1.16.0 // indirect
|
||||
github.com/felixge/httpsnoop v1.0.4 // indirect
|
||||
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
|
||||
github.com/go-jose/go-jose/v4 v4.0.5 // indirect
|
||||
github.com/go-logr/logr v1.4.2 // indirect
|
||||
github.com/go-logr/stdr v1.2.2 // indirect
|
||||
github.com/go-openapi/jsonpointer v0.21.0 // indirect
|
||||
github.com/go-openapi/jsonreference v0.20.2 // indirect
|
||||
github.com/go-openapi/swag v0.23.0 // indirect
|
||||
github.com/gogo/protobuf v1.3.2 // indirect
|
||||
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
|
||||
github.com/golang/protobuf v1.5.4 // indirect
|
||||
github.com/google/gnostic-models v0.6.9 // indirect
|
||||
github.com/google/go-cmp v0.7.0 // indirect
|
||||
github.com/google/s2a-go v0.1.9 // indirect
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
|
||||
github.com/googleapis/gax-go/v2 v2.14.2 // indirect
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
|
||||
github.com/hashicorp/consul/api v1.32.1 // indirect
|
||||
github.com/hashicorp/errwrap v1.1.0 // indirect
|
||||
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
|
||||
github.com/hashicorp/go-hclog v1.6.3 // indirect
|
||||
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
|
||||
github.com/hashicorp/go-multierror v1.1.1 // indirect
|
||||
github.com/hashicorp/go-retryablehttp v0.7.7 // indirect
|
||||
github.com/hashicorp/go-rootcerts v1.0.2 // indirect
|
||||
github.com/hashicorp/go-secure-stdlib/parseutil v0.1.6 // indirect
|
||||
github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 // indirect
|
||||
github.com/hashicorp/go-sockaddr v1.0.2 // indirect
|
||||
github.com/hashicorp/golang-lru v0.5.4 // indirect
|
||||
github.com/hashicorp/hcl v1.0.1-vault-7 // indirect
|
||||
github.com/hashicorp/serf v0.10.1 // indirect
|
||||
github.com/hashicorp/vault/api v1.20.0 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.1.0 // indirect
|
||||
github.com/josharian/intern v1.0.0 // indirect
|
||||
github.com/json-iterator/go v1.1.12 // indirect
|
||||
github.com/kylelemons/godebug v1.1.0 // indirect
|
||||
github.com/mailru/easyjson v0.7.7 // indirect
|
||||
github.com/mattn/go-colorable v0.1.13 // indirect
|
||||
github.com/mattn/go-isatty v0.0.20 // indirect
|
||||
github.com/mattn/go-sqlite3 v1.14.29 // indirect
|
||||
github.com/mitchellh/go-homedir v1.1.0 // indirect
|
||||
github.com/mitchellh/mapstructure v1.5.0 // indirect
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
|
||||
github.com/modern-go/reflect2 v1.0.2 // indirect
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
|
||||
github.com/ncruces/go-strftime v0.1.9 // indirect
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
|
||||
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
|
||||
github.com/ryanuber/go-glob v1.0.0 // indirect
|
||||
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46 // indirect
|
||||
github.com/spf13/pflag v1.0.6 // indirect
|
||||
github.com/tidwall/gjson v1.18.0 // indirect
|
||||
github.com/tidwall/match v1.1.1 // indirect
|
||||
github.com/tidwall/pretty v1.2.0 // indirect
|
||||
github.com/x448/float16 v0.8.4 // indirect
|
||||
go.etcd.io/etcd/api/v3 v3.6.2 // indirect
|
||||
go.etcd.io/etcd/client/pkg/v3 v3.6.2 // indirect
|
||||
go.etcd.io/etcd/client/v3 v3.6.2 // indirect
|
||||
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
|
||||
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
|
||||
go.opentelemetry.io/otel v1.36.0 // indirect
|
||||
go.opentelemetry.io/otel/metric v1.36.0 // indirect
|
||||
go.opentelemetry.io/otel/trace v1.36.0 // indirect
|
||||
go.shabbyrobe.org/gocovmerge v0.0.0-20230507111327-fa4f82cfbf4d // indirect
|
||||
go.uber.org/dig v1.19.0 // indirect
|
||||
go.uber.org/multierr v1.10.0 // indirect
|
||||
go.uber.org/zap v1.26.0 // indirect
|
||||
go.uber.org/multierr v1.11.0 // indirect
|
||||
go.uber.org/zap v1.27.0 // indirect
|
||||
golang.org/x/crypto v0.39.0 // indirect
|
||||
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 // indirect
|
||||
golang.org/x/sys v0.33.0 // indirect
|
||||
golang.org/x/net v0.41.0 // indirect
|
||||
golang.org/x/oauth2 v0.30.0 // indirect
|
||||
golang.org/x/sync v0.15.0 // indirect
|
||||
golang.org/x/sys v0.34.0 // indirect
|
||||
golang.org/x/text v0.26.0 // indirect
|
||||
golang.org/x/time v0.12.0 // indirect
|
||||
golang.org/x/tools v0.33.0 // indirect
|
||||
google.golang.org/api v0.237.0 // indirect
|
||||
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2 // indirect
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 // indirect
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
|
||||
google.golang.org/grpc v1.73.0 // indirect
|
||||
google.golang.org/protobuf v1.36.6 // indirect
|
||||
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
|
||||
gopkg.in/inf.v0 v0.9.1 // indirect
|
||||
k8s.io/api v0.33.3 // indirect
|
||||
k8s.io/apimachinery v0.33.3 // indirect
|
||||
k8s.io/client-go v0.33.3 // indirect
|
||||
k8s.io/klog/v2 v2.130.1 // indirect
|
||||
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
|
||||
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
|
||||
modernc.org/libc v1.65.10 // indirect
|
||||
modernc.org/mathutil v1.7.1 // indirect
|
||||
modernc.org/memory v1.11.0 // indirect
|
||||
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
|
||||
sigs.k8s.io/randfill v1.0.0 // indirect
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
|
||||
sigs.k8s.io/yaml v1.4.0 // indirect
|
||||
)
|
||||
|
||||
577
go.sum
577
go.sum
@@ -1,54 +1,590 @@
|
||||
c2sp.org/CCTV/age v0.0.0-20240306222714-3ec4d716e805 h1:u2qwJeEvnypw+OCPUHmoZE3IqwfuN5kgDfo5MLzpNM0=
|
||||
c2sp.org/CCTV/age v0.0.0-20240306222714-3ec4d716e805/go.mod h1:FomMrUJ2Lxt5jCLmZkG3FHa72zUprnhd3v/Z18Snm4w=
|
||||
cloud.google.com/go v0.120.0 h1:wc6bgG9DHyKqF5/vQvX1CiZrtHnxJjBlKUyF9nP6meA=
|
||||
cloud.google.com/go v0.120.0/go.mod h1:/beW32s8/pGRuj4IILWQNd4uuebeT4dkOhKmkfit64Q=
|
||||
cloud.google.com/go/auth v0.16.2 h1:QvBAGFPLrDeoiNjyfVunhQ10HKNYuOwZ5noee0M5df4=
|
||||
cloud.google.com/go/auth v0.16.2/go.mod h1:sRBas2Y1fB1vZTdurouM0AzuYQBMZinrUYL8EufhtEA=
|
||||
cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=
|
||||
cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
|
||||
cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeOCw78U8ytSU=
|
||||
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
|
||||
cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
|
||||
cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
|
||||
cloud.google.com/go/secretmanager v1.15.0 h1:RtkCMgTpaBMbzozcRUGfZe46jb9a3qh5EdEtVRUATF8=
|
||||
cloud.google.com/go/secretmanager v1.15.0/go.mod h1:1hQSAhKK7FldiYw//wbR/XPfPc08eQ81oBsnRUHEvUc=
|
||||
filippo.io/age v1.2.1 h1:X0TZjehAZylOIj4DubWYU1vWQxv9bJpo+Uu2/LGhi1o=
|
||||
filippo.io/age v1.2.1/go.mod h1:JL9ew2lTN+Pyft4RiNGguFfOpewKwSHm5ayKD/A4004=
|
||||
git.eeqj.de/sneak/smartconfig v1.0.0 h1:v3rNOo4oEdQgOR5FuVgetKpv1tTvHIFFpV1fNtmlKmg=
|
||||
git.eeqj.de/sneak/smartconfig v1.0.0/go.mod h1:h4LZ6yaSBx51tm+VKrcQcq5FgyqzrmflD+loC5npnH8=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 h1:Gt0j3wceWMwPmiazCa8MzMA0MfhmPIz0Qp0FJ6qcM0U=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2 h1:yz1bePFlP5Vws5+8ez6T3HWXPmwOK7Yvq8QxDBD3SKY=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2/go.mod h1:Pa9ZNPuoNu/GztvBSKk9J1cDJW6vk/n0zLtV4mgd8N8=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets v0.12.0 h1:xnO4sFyG8UH2fElBkcqLTOZsAajvKfnSlgBBW8dXYjw=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets v0.12.0/go.mod h1:XD3DIOOVgBCO03OleB1fHjgktVRFxlT++KwKgIOewdM=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1 h1:FbH3BbSb4bvGluTesZZ+ttN/MDsnMmQP36OSnDuSXqw=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1/go.mod h1:9V2j0jn9jDEkCkv8w/bKTNppX/d0FVA1ud77xCIP4KA=
|
||||
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
|
||||
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
|
||||
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
|
||||
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
|
||||
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
|
||||
github.com/adrg/xdg v0.5.3 h1:xRnxJXne7+oWDatRhR1JLnvuccuIeCoBu2rtuLqQB78=
|
||||
github.com/adrg/xdg v0.5.3/go.mod h1:nlTsY+NNiCBGCK2tpm09vRqfVzrc2fLmXGpBLF0zlTQ=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
|
||||
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
|
||||
github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJA=
|
||||
github.com/armon/go-metrics v0.4.1/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=
|
||||
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
|
||||
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
|
||||
github.com/aws/aws-sdk-go v1.44.256 h1:O8VH+bJqgLDguqkH/xQBFz5o/YheeZqgcOYIgsTVWY4=
|
||||
github.com/aws/aws-sdk-go v1.44.256/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
|
||||
github.com/aws/aws-sdk-go-v2 v1.36.6 h1:zJqGjVbRdTPojeCGWn5IR5pbJwSQSBh5RWFTQcEQGdU=
|
||||
github.com/aws/aws-sdk-go-v2 v1.36.6/go.mod h1:EYrzvCCN9CMUTa5+6lf6MM4tq3Zjp8UhSGR/cBsjai0=
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.29.18 h1:x4T1GRPnqKV8HMJOMtNktbpQMl3bIsfx8KbqmveUO2I=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.29.18/go.mod h1:bvz8oXugIsH8K7HLhBv06vDqnFv3NsGDt2Znpk7zmOU=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.17.71 h1:r2w4mQWnrTMJjOyIsZtGp3R3XGY3nqHn8C26C2lQWgA=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.17.71/go.mod h1:E7VF3acIup4GB5ckzbKFrCK0vTvEQxOxgdq4U3vcMCY=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.33 h1:D9ixiWSG4lyUBL2DDNK924Px9V/NBVpML90MHqyTADY=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.33/go.mod h1:caS/m4DI+cij2paz3rtProRBI4s/+TCiWoaWZuQ9010=
|
||||
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.85 h1:AfpstoiaenxGSCUheWiicgZE5XXS5Fi4CcQ4PA/x+Qw=
|
||||
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.85/go.mod h1:HxiF0Fd6WHWjdjOffLkCauq7JqzWqMMq0iUVLS7cPQc=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.37 h1:osMWfm/sC/L4tvEdQ65Gri5ZZDCUpuYJZbTTDrsn4I0=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.37/go.mod h1:ZV2/1fbjOPr4G4v38G3Ww5TBT4+hmsK45s/rxu1fGy0=
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.37 h1:v+X21AvTb2wZ+ycg1gx+orkB/9U6L7AOp93R7qYxsxM=
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.37/go.mod h1:G0uM1kyssELxmJ2VZEfG0q2npObR3BAkF3c1VsfVnfs=
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
|
||||
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.37 h1:XTZZ0I3SZUHAtBLBU6395ad+VOblE0DwQP6MuaNeics=
|
||||
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.37/go.mod h1:Pi6ksbniAWVwu2S8pEzcYPyhUkAcLaufxN7PfAUQjBk=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 h1:CXV68E2dNqhuynZJPB80bhPQwAKqBWVer887figW6Jc=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4/go.mod h1:/xFi9KtvBXP97ppCz1TAEvU1Uf66qvid89rbem3wCzQ=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.5 h1:M5/B8JUaCI8+9QD+u3S/f4YHpvqE9RpSkV3rf0Iks2w=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.5/go.mod h1:Bktzci1bwdbpuLiu3AOksiNPMl/LLKmX1TWmqp2xbvs=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.18 h1:vvbXsA2TVO80/KT7ZqCbx934dt6PY+vQ8hZpUZ/cpYg=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.18/go.mod h1:m2JJHledjBGNMsLOF1g9gbAxprzq3KjC8e4lxtn+eWg=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.18 h1:OS2e0SKqsU2LiJPqL8u9x41tKc6MMEHrWjLVLn3oysg=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.18/go.mod h1:+Yrk+MDGzlNGxCXieljNeWpoZTCQUQVL+Jk9hGGJ8qM=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.1 h1:RkHXU9jP0DptGy7qKI8CBGsUJruWz0v5IgwBa2DwWcU=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.1/go.mod h1:3xAOf7tdKF+qbb+XpU+EPhNXAdun3Lu1RcDrj8KC24I=
|
||||
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.8 h1:HD6R8K10gPbN9CNqRDOs42QombXlYeLOr4KkIxe2lQs=
|
||||
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.8/go.mod h1:x66GdH8qjYTr6Kb4ik38Ewl6moLsg8igbceNsmxVxeA=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.25.6 h1:rGtWqkQbPk7Bkwuv3NzpE/scwwL9sC1Ul3tn9x83DUI=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.25.6/go.mod h1:u4ku9OLv4TO4bCPdxf4fA1upaMaJmP9ZijGk3AAOC6Q=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.4 h1:OV/pxyXh+eMA0TExHEC4jyWdumLxNbzz1P0zJoezkJc=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.4/go.mod h1:8Mm5VGYwtm+r305FfPSuc+aFkrypeylGYhFim6XEPoc=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.34.1 h1:aUrLQwJfZtwv3/ZNG2xRtEen+NqI3iesuacjP51Mv1s=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.34.1/go.mod h1:3wFBZKoWnX3r+Sm7in79i54fBmNfwhdNdQuscCw7QIk=
|
||||
github.com/aws/smithy-go v1.22.4 h1:uqXzVZNuNexwc/xrh6Tb56u89WDlJY6HS+KC0S4QSjw=
|
||||
github.com/aws/smithy-go v1.22.4/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
|
||||
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
|
||||
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
|
||||
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
|
||||
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cevatbarisyilmaz/ara v0.0.4 h1:SGH10hXpBJhhTlObuZzTuFn1rrdmjQImITXnZVPSodc=
|
||||
github.com/cevatbarisyilmaz/ara v0.0.4/go.mod h1:BfFOxnUd6Mj6xmcvRxHN3Sr21Z1T3U2MYkYOmoQe4Ts=
|
||||
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
|
||||
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
|
||||
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
|
||||
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
|
||||
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
|
||||
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
|
||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
|
||||
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
|
||||
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
|
||||
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
|
||||
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
|
||||
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
|
||||
github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
|
||||
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
|
||||
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
||||
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
|
||||
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
|
||||
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
|
||||
github.com/go-jose/go-jose/v4 v4.0.5 h1:M6T8+mKZl/+fNNuFHvGIzDz7BTLQPIounk/b9dw3AaE=
|
||||
github.com/go-jose/go-jose/v4 v4.0.5/go.mod h1:s3P1lRrkT8igV8D9OjyL4WRyHvjB6a4JSllnOrmmBOA=
|
||||
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
|
||||
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
|
||||
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
|
||||
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
|
||||
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
|
||||
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
|
||||
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
|
||||
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
|
||||
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
|
||||
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
|
||||
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
|
||||
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
|
||||
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
|
||||
github.com/go-test/deep v1.0.2 h1:onZX1rnHT3Wv6cqNgYyFOOlgVKJrksuCMCRvJStbMYw=
|
||||
github.com/go-test/deep v1.0.2/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
|
||||
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
|
||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
|
||||
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
|
||||
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
|
||||
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
|
||||
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
|
||||
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
|
||||
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
|
||||
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
|
||||
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
|
||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
|
||||
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
|
||||
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
|
||||
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
|
||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
|
||||
github.com/googleapis/gax-go/v2 v2.14.2 h1:eBLnkZ9635krYIPD+ag1USrOAI0Nr0QYF3+/3GqO0k0=
|
||||
github.com/googleapis/gax-go/v2 v2.14.2/go.mod h1:ON64QhlJkhVtSqp4v1uaK92VyZ2gmvDQsweuyLV+8+w=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
|
||||
github.com/hashicorp/consul/api v1.32.1 h1:0+osr/3t/aZNAdJX558crU3PEjVrG4x6715aZHRgceE=
|
||||
github.com/hashicorp/consul/api v1.32.1/go.mod h1:mXUWLnxftwTmDv4W3lzxYCPD199iNLLUyLfLGFJbtl4=
|
||||
github.com/hashicorp/consul/sdk v0.16.1 h1:V8TxTnImoPD5cj0U9Spl0TUxcytjcbbJeADFF07KdHg=
|
||||
github.com/hashicorp/consul/sdk v0.16.1/go.mod h1:fSXvwxB2hmh1FMZCNl6PwX0Q/1wdWtHJcZ7Ea5tns0s=
|
||||
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
|
||||
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
|
||||
github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
|
||||
github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
|
||||
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
|
||||
github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc=
|
||||
github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
|
||||
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
|
||||
github.com/hashicorp/go-msgpack v0.5.5 h1:i9R9JSrqIz0QVLz3sz+i3YJdT7TTSLcfLLzJi9aZTuI=
|
||||
github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
|
||||
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
|
||||
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
|
||||
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
|
||||
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
|
||||
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
|
||||
github.com/hashicorp/go-retryablehttp v0.7.7 h1:C8hUCYzor8PIfXHa4UrZkU4VvK8o9ISHxT2Q8+VepXU=
|
||||
github.com/hashicorp/go-retryablehttp v0.7.7/go.mod h1:pkQpWZeYWskR+D1tR2O5OcBFOxfA7DoAO6xtkuQnHTk=
|
||||
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
|
||||
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
|
||||
github.com/hashicorp/go-secure-stdlib/parseutil v0.1.6 h1:om4Al8Oy7kCm/B86rLCLah4Dt5Aa0Fr5rYBG60OzwHQ=
|
||||
github.com/hashicorp/go-secure-stdlib/parseutil v0.1.6/go.mod h1:QmrqtbKuxxSWTN3ETMPuB+VtEiBJ/A9XhoYGv8E1uD8=
|
||||
github.com/hashicorp/go-secure-stdlib/strutil v0.1.1/go.mod h1:gKOamz3EwoIoJq7mlMIRBpVTAUn8qPCrEclOKKWhD3U=
|
||||
github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 h1:kes8mmyCpxJsI7FTwtzRqEy9CdjCtrXrXGuOpxEA7Ts=
|
||||
github.com/hashicorp/go-secure-stdlib/strutil v0.1.2/go.mod h1:Gou2R9+il93BqX25LAKCLuM+y9U2T4hlwvT1yprcna4=
|
||||
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
|
||||
github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc=
|
||||
github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
|
||||
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
|
||||
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
|
||||
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
github.com/hashicorp/go-version v1.2.1 h1:zEfKbn2+PDgroKdiOzqiE8rsmLqU2uwi5PB5pBJ3TkI=
|
||||
github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v1.0.1-vault-7 h1:ag5OxFVy3QYTFTJODRzTKVZ6xvdfLLCA1cy/Y6xGI0I=
github.com/hashicorp/hcl v1.0.1-vault-7/go.mod h1:XYhtn6ijBSAj6n4YqAaf7RBPS4I06AItNorpy+MoQNM=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.4/go.mod h1:mtBihi+LeNXGtG8L9dX59gAEa12BDtBQSp4v/YAJqrc=
github.com/hashicorp/memberlist v0.5.0 h1:EtYPN8DpAURiapus508I4n9CzHs2W+8NZGbmmR/prTM=
github.com/hashicorp/memberlist v0.5.0/go.mod h1:yvyXLpo0QaGE59Y7hDTsTzDD25JYBZ4mHgHUZ8lrOI0=
github.com/hashicorp/serf v0.10.1 h1:Z1H2J60yRKvfDYAOZLd2MU0ND4AH/WDz7xYHDWQsIPY=
github.com/hashicorp/serf v0.10.1/go.mod h1:yL2t6BqATOLGc5HF7qbFkTfXoPIY0WZdWHfEvMqbG+4=
github.com/hashicorp/vault/api v1.20.0 h1:KQMHElgudOsr+IbJgmbjHnCTxEpKs9LnozA1D3nozU4=
github.com/hashicorp/vault/api v1.20.0/go.mod h1:GZ4pcjfzoOWpkJ3ijHNpEoAxKEsBJnVljyTe3jM2Sms=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/johannesboyne/gofakes3 v0.0.0-20250603205740-ed9094be7668 h1:+Mn8Sj5VzjOTuzyBCxfUnEcS+Iky4/5piUraOC3E5qQ=
github.com/johannesboyne/gofakes3 v0.0.0-20250603205740-ed9094be7668/go.mod h1:t6osVdP++3g4v2awHz4+HFccij23BbdT1rX3W7IijqQ=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jotfs/fastcdc-go v0.2.0 h1:WHYIGk3k9NumGWfp4YMsemEcx/s4JKpGAa6tpCpHJOo=
github.com/jotfs/fastcdc-go v0.2.0/go.mod h1:PGFBIloiASFbiKnkCd/hmHXxngxYDYtisyurJ/zyDNM=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/keybase/go-keychain v0.0.1 h1:way+bWYa6lDppZoZcgMbYsvC7GxljxrskdNInRtuthU=
github.com/keybase/go-keychain v0.0.1/go.mod h1:PdEILRW3i9D8JcdM+FmY6RwkHGnhHxXwkPPMeUgOK1k=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.29 h1:1O6nRLJKvsi1H2Sj0Hzdfojwt8GiGKm+LOfLaBFaouQ=
github.com/mattn/go-sqlite3 v1.14.29/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.41 h1:WMszZWJG0XmzbK9FEmzH2TVcqYzFesusSIB41b8KHxY=
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4=
github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/redis/go-redis/v9 v9.8.0 h1:q3nRvjrlge/6UD7eTu/DSg2uYiU2mCL0G/uzBWqhicI=
github.com/redis/go-redis/v9 v9.8.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk=
github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46 h1:GHRpF1pTW19a8tTFrMLUcfWwyC0pnifVo2ClaLq+hP8=
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46/go.mod h1:uAQ5PCi+MFsC7HjREoAz1BU+Mq60+05gifQSsHSDG/8=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/spf13/afero v1.2.1/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.14.0 h1:9tH6MapGnn/j0eb0yIXiLjERO8RB6xIVZRDCX7PtqWA=
github.com/spf13/afero v1.14.0/go.mod h1:acJQ8t0ohCGuMN3O+Pv0V0hgMxNYDlvdk+VTfyZmbYo=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd/api/v3 v3.6.2 h1:25aCkIMjUmiiOtnBIp6PhNj4KdcURuBak0hU2P1fgRc=
go.etcd.io/etcd/api/v3 v3.6.2/go.mod h1:eFhhvfR8Px1P6SEuLT600v+vrhdDTdcfMzmnxVXXSbk=
go.etcd.io/etcd/client/pkg/v3 v3.6.2 h1:zw+HRghi/G8fKpgKdOcEKpnBTE4OO39T6MegA0RopVU=
go.etcd.io/etcd/client/pkg/v3 v3.6.2/go.mod h1:sbdzr2cl3HzVmxNw//PH7aLGVtY4QySjQFuaCgcRFAI=
go.etcd.io/etcd/client/v3 v3.6.2 h1:RgmcLJxkpHqpFvgKNwAQHX3K+wsSARMXKgjmUSpoSKQ=
go.etcd.io/etcd/client/v3 v3.6.2/go.mod h1:PL7e5QMKzjybn0FosgiWvCUDzvdChpo5UgGR4Sk4Gzc=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.shabbyrobe.org/gocovmerge v0.0.0-20230507111327-fa4f82cfbf4d h1:Ns9kd1Rwzw7t0BR8XMphenji4SmIoNZPn8zhYmaVKP8=
go.shabbyrobe.org/gocovmerge v0.0.0-20230507111327-fa4f82cfbf4d/go.mod h1:92Uoe3l++MlthCm+koNi0tcUCX3anayogF0Pa/sp24k=
go.uber.org/dig v1.19.0 h1:BACLhebsYdpQ7IROQ1AGPjrXcP5dF80U3gKoFzbaq/4=
go.uber.org/dig v1.19.0/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
go.uber.org/fx v1.24.0 h1:wE8mruvpg2kiiL1Vqd0CC+tr0/24XIB10Iwp2lLWzkg=
go.uber.org/fx v1.24.0/go.mod h1:AmDeGyS+ZARGKM4tlH4FY2Jr63VjbEDJHtqXTGP5hbo=
go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=
go.uber.org/goleak v1.2.0/go.mod h1:XJYK+MuIchqpmGmUSAzotztawfKvYLUIgg7guXrwVUo=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.26.0 h1:sI7k6L95XOKS281NhVKOFCUNIvv9e0w4BF8N3u+tCRo=
go.uber.org/zap v1.26.0/go.mod h1:dtElttAiwGvoJ/vj4IwHBS/gXsEu/pZ50mUIRWuG0so=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 h1:R84qjqJb5nVJMxqWYb3np9L5ZsaDtB+a39EqjV0JSUM=
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0/go.mod h1:S9Xr4PYopiDyqSyp5NjCrhFrqg6A5zA2E/iPHPhqnS8=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.33.0 h1:NuFncQrRcaRvVmgRkvM3j/F00gWIAlcmlB8ACEKmGIg=
golang.org/x/term v0.33.0/go.mod h1:s18+ql9tYWp1IfpV9DmCtQDDSRBUjKaw9M1eAv5UeF0=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190829051458-42f498d34c4d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
|
||||
golang.org/x/tools v0.8.0/go.mod h1:JxBZ99ISMI5ViVkT1tr6tdNmXeTrcpVSD3vZ1RsRdN4=
|
||||
golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
|
||||
golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/api v0.237.0 h1:MP7XVsGZesOsx3Q8WVa4sUdbrsTvDSOERd3Vh4xj/wc=
|
||||
google.golang.org/api v0.237.0/go.mod h1:cOVEm2TpdAGHL2z+UwyS+kmlGr3bVWQQ6sYEqkKje50=
|
||||
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2 h1:1tXaIXCracvtsRxSBsYDiSBN0cuJvM7QYW+MrpIRY78=
|
||||
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2/go.mod h1:49MsLSx0oWMOZqcpB3uL8ZOkAh1+TndpJ8ONoCBWiZk=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
|
||||
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
|
||||
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
|
||||
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
|
||||
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
|
||||
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
|
||||
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
||||
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
||||
gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA=
|
||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
k8s.io/api v0.33.3 h1:SRd5t//hhkI1buzxb288fy2xvjubstenEKL9K51KBI8=
|
||||
k8s.io/api v0.33.3/go.mod h1:01Y/iLUjNBM3TAvypct7DIj0M0NIZc+PzAHCIo0CYGE=
|
||||
k8s.io/apimachinery v0.33.3 h1:4ZSrmNa0c/ZpZJhAgRdcsFcZOw1PQU1bALVQ0B3I5LA=
|
||||
k8s.io/apimachinery v0.33.3/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM=
|
||||
k8s.io/client-go v0.33.3 h1:M5AfDnKfYmVJif92ngN532gFqakcGi6RvaOF16efrpA=
|
||||
k8s.io/client-go v0.33.3/go.mod h1:luqKBQggEf3shbxHY4uVENAxrDISLOarxpTKMiUuujg=
|
||||
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
|
||||
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
|
||||
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4=
|
||||
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
|
||||
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
|
||||
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
|
||||
modernc.org/cc/v4 v4.26.1 h1:+X5NtzVBn0KgsBCBe+xkDC7twLb/jNVj9FPgiwSQO3s=
|
||||
modernc.org/cc/v4 v4.26.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
|
||||
modernc.org/ccgo/v4 v4.28.0 h1:rjznn6WWehKq7dG4JtLRKxb52Ecv8OUGah8+Z/SfpNU=
|
||||
@@ -73,3 +609,12 @@ modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
6
internal/blob/errors.go
Normal file
@@ -0,0 +1,6 @@
package blob

import "errors"

// ErrBlobSizeLimitExceeded is returned when adding a chunk would exceed the blob size limit
var ErrBlobSizeLimitExceeded = errors.New("adding chunk would exceed blob size limit")
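Callers consume this sentinel with an `errors.Is` check-and-retry loop, the same shape used by `PackChunks` in `packer.go`. A minimal, self-contained sketch of the pattern; `errSizeLimit`, `addChunk`, and `pack` are illustrative stand-ins for the real packer API, not part of it:

```go
package main

import (
	"errors"
	"fmt"
)

// errSizeLimit stands in for blob.ErrBlobSizeLimitExceeded.
var errSizeLimit = errors.New("adding chunk would exceed blob size limit")

// addChunk mimics Packer.AddChunk: refuse a chunk that would push the
// running total past max, unless the blob is still empty.
func addChunk(total *int, size, max int) error {
	if *total+size > max && *total > 0 {
		return errSizeLimit
	}
	*total += size
	return nil
}

// pack mimics PackChunks: on the sentinel, "finalize" (reset the running
// total) and retry the same chunk once. It returns how many blobs were
// finalized mid-stream and the size still pending in the current blob.
func pack(sizes []int, max int) (finalized, leftover int) {
	total := 0
	for _, size := range sizes {
		err := addChunk(&total, size, max)
		if errors.Is(err, errSizeLimit) {
			finalized++ // caller finalizes the full blob...
			total = 0
			err = addChunk(&total, size, max) // ...then retries the chunk
		}
		if err != nil {
			panic(err)
		}
	}
	return finalized, total
}

func main() {
	f, left := pack([]int{400, 400, 400}, 1000)
	fmt.Println(f, left) // the third chunk forces one finalization: 1 400
}
```

Returning a sentinel instead of finalizing implicitly keeps the decision of *when* to cut a blob with the caller, which is why `AddChunk` never finalizes on its own.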
501
internal/blob/packer.go
Normal file
@@ -0,0 +1,501 @@
// Package blob handles the creation of blobs - the final storage units for Vaultik.
// A blob is a large file (up to 10GB) containing many compressed and encrypted chunks
// from multiple source files. Blobs are content-addressed, meaning their filename
// is derived from the SHA256 hash of their compressed and encrypted content.
//
// The blob creation process:
//  1. Chunks are accumulated from multiple files
//  2. The collection is compressed using zstd
//  3. The compressed data is encrypted using age
//  4. The encrypted blob is hashed to create its content-addressed name
//  5. The blob is uploaded to S3 using the hash as the filename
//
// This design optimizes storage efficiency by batching many small chunks into
// larger blobs, reducing the number of S3 operations and associated costs.
package blob

import (
	"context"
	"database/sql"
	"encoding/hex"
	"fmt"
	"io"
	"sync"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/blobgen"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"github.com/google/uuid"
	"github.com/spf13/afero"
)

// BlobHandler is a callback function invoked when a blob is finalized and ready for upload.
// The handler receives a BlobWithReader containing the blob metadata and a reader for
// the compressed and encrypted blob content. The handler is responsible for uploading
// the blob to storage and cleaning up any temporary files.
type BlobHandler func(blob *BlobWithReader) error

// PackerConfig holds configuration for creating a Packer.
// MaxBlobSize, Recipients, and Fs are required; Repositories and BlobHandler are optional.
type PackerConfig struct {
	MaxBlobSize      int64                  // Maximum size of a blob before forcing finalization
	CompressionLevel int                    // Zstd compression level (1-19, higher = better compression)
	Recipients       []string               // Age recipients for encryption
	Repositories     *database.Repositories // Database repositories for tracking blob metadata
	BlobHandler      BlobHandler            // Optional callback when blob is ready for upload
	Fs               afero.Fs               // Filesystem for temporary files
}

// Packer accumulates chunks and packs them into blobs.
// It handles compression, encryption, and coordination with the database
// to track blob metadata. Packer is thread-safe.
type Packer struct {
	maxBlobSize      int64
	compressionLevel int
	recipients       []string               // Age recipients for encryption
	blobHandler      BlobHandler            // Called when blob is ready
	repos            *database.Repositories // For creating blob records
	fs               afero.Fs               // Filesystem for temporary files

	// Mutex for thread-safe blob creation
	mu sync.Mutex

	// Current blob being packed
	currentBlob   *blobInProgress
	finishedBlobs []*FinishedBlob // Only used if no handler provided
}

// blobInProgress represents a blob being assembled
type blobInProgress struct {
	id        string          // UUID of the blob
	chunks    []*chunkInfo    // Track chunk metadata
	chunkSet  map[string]bool // Track unique chunks in this blob
	tempFile  afero.File      // Temporary file for encrypted compressed data
	writer    *blobgen.Writer // Unified compression/encryption/hashing writer
	startTime time.Time
	size      int64 // Current uncompressed size
}

// ChunkRef represents a chunk to be added to a blob.
// The Hash is the content-addressed identifier (SHA256) of the chunk,
// and Data contains the raw chunk bytes. After adding to a blob,
// the Data can be safely discarded as it's written to the blob immediately.
type ChunkRef struct {
	Hash string // SHA256 hash of the chunk data
	Data []byte // Raw chunk content
}

// chunkInfo tracks chunk metadata in a blob
type chunkInfo struct {
	Hash   string
	Offset int64
	Size   int64
}

// FinishedBlob represents a completed blob ready for storage
type FinishedBlob struct {
	ID           string
	Hash         string
	Data         []byte // Compressed data
	Chunks       []*BlobChunkRef
	CreatedTS    time.Time
	Uncompressed int64
	Compressed   int64
}

// BlobChunkRef represents a chunk's position within a blob
type BlobChunkRef struct {
	ChunkHash string
	Offset    int64
	Length    int64
}

// BlobWithReader wraps a FinishedBlob with its data reader
type BlobWithReader struct {
	*FinishedBlob
	Reader   io.ReadSeeker
	TempFile afero.File // Optional, only set for disk-based blobs
}

// NewPacker creates a new blob packer that accumulates chunks into blobs.
// The packer will automatically finalize blobs when they reach MaxBlobSize.
// Returns an error if required configuration fields are missing or invalid.
func NewPacker(cfg PackerConfig) (*Packer, error) {
	if len(cfg.Recipients) == 0 {
		return nil, fmt.Errorf("recipients are required - blobs must be encrypted")
	}
	if cfg.MaxBlobSize <= 0 {
		return nil, fmt.Errorf("max blob size must be positive")
	}
	if cfg.Fs == nil {
		return nil, fmt.Errorf("filesystem is required")
	}
	return &Packer{
		maxBlobSize:      cfg.MaxBlobSize,
		compressionLevel: cfg.CompressionLevel,
		recipients:       cfg.Recipients,
		blobHandler:      cfg.BlobHandler,
		repos:            cfg.Repositories,
		fs:               cfg.Fs,
		finishedBlobs:    make([]*FinishedBlob, 0),
	}, nil
}

// SetBlobHandler sets the handler to be called when a blob is finalized.
// The handler is responsible for uploading the blob to storage.
// If no handler is set, finalized blobs are stored in memory and can be
// retrieved with GetFinishedBlobs().
func (p *Packer) SetBlobHandler(handler BlobHandler) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.blobHandler = handler
}

// AddChunk adds a chunk to the current blob being packed.
// If adding the chunk would exceed MaxBlobSize, returns ErrBlobSizeLimitExceeded.
// In this case, the caller should finalize the current blob and retry.
// The chunk data is written immediately and can be garbage collected after this call.
// Thread-safe.
func (p *Packer) AddChunk(chunk *ChunkRef) error {
	p.mu.Lock()
	defer p.mu.Unlock()

	// Initialize new blob if needed
	if p.currentBlob == nil {
		if err := p.startNewBlob(); err != nil {
			return fmt.Errorf("starting new blob: %w", err)
		}
	}

	// Check if adding this chunk would exceed blob size limit
	// Use conservative estimate: assume no compression
	// Skip size check if chunk already exists in blob
	if !p.currentBlob.chunkSet[chunk.Hash] {
		currentSize := p.currentBlob.size
		newSize := currentSize + int64(len(chunk.Data))

		if newSize > p.maxBlobSize && len(p.currentBlob.chunks) > 0 {
			// Return error indicating size limit would be exceeded
			return ErrBlobSizeLimitExceeded
		}
	}

	// Add chunk to current blob
	if err := p.addChunkToCurrentBlob(chunk); err != nil {
		return err
	}

	return nil
}

// Flush finalizes any in-progress blob, compressing, encrypting, and hashing it.
// This should be called after all chunks have been added to ensure no data is lost.
// If a BlobHandler is set, it will be called with the finalized blob.
// Thread-safe.
func (p *Packer) Flush() error {
	p.mu.Lock()
	defer p.mu.Unlock()

	if p.currentBlob != nil && len(p.currentBlob.chunks) > 0 {
		if err := p.finalizeCurrentBlob(); err != nil {
			return fmt.Errorf("finalizing blob: %w", err)
		}
	}

	return nil
}

// FinalizeBlob finalizes the current blob being assembled.
// This compresses the accumulated chunks, encrypts the result, and computes
// the content-addressed hash. The finalized blob is either passed to the
// BlobHandler (if set) or stored internally.
// The caller must retry any chunk whose addition returned ErrBlobSizeLimitExceeded.
// Thread-safe: acquires the packer lock itself.
func (p *Packer) FinalizeBlob() error {
	p.mu.Lock()
	defer p.mu.Unlock()

	if p.currentBlob == nil {
		return nil
	}

	return p.finalizeCurrentBlob()
}

// GetFinishedBlobs returns all completed blobs and clears the internal list.
// This is only used when no BlobHandler is set. After calling this method,
// the caller is responsible for uploading the blobs to storage.
// Thread-safe.
func (p *Packer) GetFinishedBlobs() []*FinishedBlob {
	p.mu.Lock()
	defer p.mu.Unlock()

	blobs := p.finishedBlobs
	p.finishedBlobs = make([]*FinishedBlob, 0)
	return blobs
}

// startNewBlob initializes a new blob (must be called with lock held)
func (p *Packer) startNewBlob() error {
	// Generate UUID for the blob
	blobID := uuid.New().String()

	// Create blob record in database
	if p.repos != nil {
		blob := &database.Blob{
			ID:               blobID,
			Hash:             "temp-placeholder-" + blobID, // Temporary placeholder until finalized
			CreatedTS:        time.Now().UTC(),
			FinishedTS:       nil,
			UncompressedSize: 0,
			CompressedSize:   0,
			UploadedTS:       nil,
		}
		err := p.repos.WithTx(context.Background(), func(ctx context.Context, tx *sql.Tx) error {
			return p.repos.Blobs.Create(ctx, tx, blob)
		})
		if err != nil {
			return fmt.Errorf("creating blob record: %w", err)
		}
	}

	// Create temporary file
	tempFile, err := afero.TempFile(p.fs, "", "vaultik-blob-*.tmp")
	if err != nil {
		return fmt.Errorf("creating temp file: %w", err)
	}

	// Create blobgen writer for unified compression/encryption/hashing
	writer, err := blobgen.NewWriter(tempFile, p.compressionLevel, p.recipients)
	if err != nil {
		_ = tempFile.Close()
		_ = p.fs.Remove(tempFile.Name())
		return fmt.Errorf("creating blobgen writer: %w", err)
	}

	p.currentBlob = &blobInProgress{
		id:        blobID,
		chunks:    make([]*chunkInfo, 0),
		chunkSet:  make(map[string]bool),
		startTime: time.Now().UTC(),
		tempFile:  tempFile,
		writer:    writer,
		size:      0,
	}

	log.Debug("Created new blob container", "blob_id", blobID, "temp_file", tempFile.Name())
	return nil
}

// addChunkToCurrentBlob adds a chunk to the current blob (must be called with lock held)
func (p *Packer) addChunkToCurrentBlob(chunk *ChunkRef) error {
	// Skip if chunk already in current blob
	if p.currentBlob.chunkSet[chunk.Hash] {
		log.Debug("Skipping duplicate chunk already in current blob", "chunk_hash", chunk.Hash)
		return nil
	}

	// Track offset before writing
	offset := p.currentBlob.size

	// Write to the blobgen writer (compression -> encryption -> disk)
	if _, err := p.currentBlob.writer.Write(chunk.Data); err != nil {
		return fmt.Errorf("writing to blob stream: %w", err)
	}

	// Track chunk info
	chunkSize := int64(len(chunk.Data))
	chunkInfo := &chunkInfo{
		Hash:   chunk.Hash,
		Offset: offset,
		Size:   chunkSize,
	}
	p.currentBlob.chunks = append(p.currentBlob.chunks, chunkInfo)
	p.currentBlob.chunkSet[chunk.Hash] = true

	// Store blob-chunk association in database immediately
	if p.repos != nil {
		blobChunk := &database.BlobChunk{
			BlobID:    p.currentBlob.id,
			ChunkHash: chunk.Hash,
			Offset:    offset,
			Length:    chunkSize,
		}
		err := p.repos.WithTx(context.Background(), func(ctx context.Context, tx *sql.Tx) error {
			return p.repos.BlobChunks.Create(ctx, tx, blobChunk)
		})
		if err != nil {
			log.Error("Failed to store blob-chunk association in database", "error", err,
				"blob_id", p.currentBlob.id, "chunk_hash", chunk.Hash)
			// Continue anyway - we can reconstruct this later if needed
		}
	}

	// Update total size
	p.currentBlob.size += chunkSize

	log.Debug("Added chunk to blob container",
		"blob_id", p.currentBlob.id,
		"chunk_hash", chunk.Hash,
		"chunk_size", len(chunk.Data),
		"offset", offset,
		"blob_chunks", len(p.currentBlob.chunks),
		"uncompressed_size", p.currentBlob.size)

	return nil
}

// finalizeCurrentBlob completes the current blob (must be called with lock held)
func (p *Packer) finalizeCurrentBlob() error {
	if p.currentBlob == nil {
		return nil
	}

	// Close blobgen writer to flush all data
	if err := p.currentBlob.writer.Close(); err != nil {
		p.cleanupTempFile()
		return fmt.Errorf("closing blobgen writer: %w", err)
	}

	// Sync file to ensure all data is written
	if err := p.currentBlob.tempFile.Sync(); err != nil {
		p.cleanupTempFile()
		return fmt.Errorf("syncing temp file: %w", err)
	}

	// Get the final size (encrypted if applicable)
	finalSize, err := p.currentBlob.tempFile.Seek(0, io.SeekCurrent)
	if err != nil {
		p.cleanupTempFile()
		return fmt.Errorf("getting file size: %w", err)
	}

	// Reset to beginning for reading
	if _, err := p.currentBlob.tempFile.Seek(0, io.SeekStart); err != nil {
		p.cleanupTempFile()
		return fmt.Errorf("seeking to start: %w", err)
	}

	// Get hash from blobgen writer (of final encrypted data)
	finalHash := p.currentBlob.writer.Sum256()
	blobHash := hex.EncodeToString(finalHash)

	// Create chunk references with offsets
	chunkRefs := make([]*BlobChunkRef, 0, len(p.currentBlob.chunks))

	for _, chunk := range p.currentBlob.chunks {
		chunkRefs = append(chunkRefs, &BlobChunkRef{
			ChunkHash: chunk.Hash,
			Offset:    chunk.Offset,
			Length:    chunk.Size,
		})
	}

	// Update blob record in database with hash and sizes
	if p.repos != nil {
		err := p.repos.WithTx(context.Background(), func(ctx context.Context, tx *sql.Tx) error {
			return p.repos.Blobs.UpdateFinished(ctx, tx, p.currentBlob.id, blobHash,
				p.currentBlob.size, finalSize)
		})
		if err != nil {
			p.cleanupTempFile()
			return fmt.Errorf("updating blob record: %w", err)
		}
	}

	// Create finished blob
	finished := &FinishedBlob{
		ID:           p.currentBlob.id,
		Hash:         blobHash,
		Data:         nil, // We don't load data into memory anymore
		Chunks:       chunkRefs,
		CreatedTS:    p.currentBlob.startTime,
		Uncompressed: p.currentBlob.size,
		Compressed:   finalSize,
	}

	compressionRatio := float64(finished.Compressed) / float64(finished.Uncompressed)
	log.Info("Finalized blob (compressed and encrypted)",
		"hash", blobHash,
		"chunks", len(chunkRefs),
		"uncompressed", finished.Uncompressed,
		"compressed", finished.Compressed,
		"ratio", fmt.Sprintf("%.2f", compressionRatio),
		"duration", time.Since(p.currentBlob.startTime))

	// Call blob handler if set
	if p.blobHandler != nil {
		// Reset file position for handler
		if _, err := p.currentBlob.tempFile.Seek(0, io.SeekStart); err != nil {
			p.cleanupTempFile()
			return fmt.Errorf("seeking for handler: %w", err)
		}

		// Create a blob reader that includes the data stream
		blobWithReader := &BlobWithReader{
			FinishedBlob: finished,
			Reader:       p.currentBlob.tempFile,
			TempFile:     p.currentBlob.tempFile,
		}

		if err := p.blobHandler(blobWithReader); err != nil {
			p.cleanupTempFile()
			return fmt.Errorf("blob handler failed: %w", err)
		}
		// Note: blob handler is responsible for closing/cleaning up temp file
		p.currentBlob = nil
	} else {
		log.Debug("No blob handler callback configured", "blob_hash", blobHash[:8]+"...")
		// No handler, need to read data for legacy behavior
		if _, err := p.currentBlob.tempFile.Seek(0, io.SeekStart); err != nil {
			p.cleanupTempFile()
			return fmt.Errorf("seeking to read data: %w", err)
		}

		data, err := io.ReadAll(p.currentBlob.tempFile)
		if err != nil {
			p.cleanupTempFile()
			return fmt.Errorf("reading blob data: %w", err)
		}
		finished.Data = data

		p.finishedBlobs = append(p.finishedBlobs, finished)

		// Cleanup
		p.cleanupTempFile()
		p.currentBlob = nil
	}

	return nil
}

// cleanupTempFile removes the temporary file
func (p *Packer) cleanupTempFile() {
	if p.currentBlob != nil && p.currentBlob.tempFile != nil {
		name := p.currentBlob.tempFile.Name()
		_ = p.currentBlob.tempFile.Close()
		_ = p.fs.Remove(name)
	}
}

// PackChunks is a convenience method to pack multiple chunks at once
func (p *Packer) PackChunks(chunks []*ChunkRef) error {
	for _, chunk := range chunks {
		err := p.AddChunk(chunk)
		if err == ErrBlobSizeLimitExceeded {
			// Finalize current blob and retry
			if err := p.FinalizeBlob(); err != nil {
				return fmt.Errorf("finalizing blob before retry: %w", err)
			}
			// Retry the chunk
			if err := p.AddChunk(chunk); err != nil {
				return fmt.Errorf("adding chunk %s after finalize: %w", chunk.Hash, err)
			}
		} else if err != nil {
			return fmt.Errorf("adding chunk %s: %w", chunk.Hash, err)
		}
	}

	return p.Flush()
}
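The package comment above describes a compress-then-encrypt-then-hash pipeline in which the blob is named by the SHA256 of the final on-disk bytes. That idea can be sketched with the standard library alone. This is an illustrative sketch, not the real `blobgen.Writer`: gzip stands in for zstd, the age encryption stage is omitted, and `writeBlob` is a hypothetical helper. The key move is the same, though: tee the final output stream through a hash so the content-addressed name always reflects exactly what is stored.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
)

// writeBlob compresses chunks into a buffer while hashing the *final*
// bytes, so the returned name is the content address of what would
// actually be uploaded. (The real pipeline inserts an encryption stage
// between the compressor and the output; it is omitted here.)
func writeBlob(chunks [][]byte) (name string, blob []byte, err error) {
	var buf bytes.Buffer
	h := sha256.New()
	// Everything the compressor emits lands in both buf and the hash.
	zw := gzip.NewWriter(io.MultiWriter(&buf, h))
	for _, c := range chunks {
		if _, err = zw.Write(c); err != nil {
			return "", nil, err
		}
	}
	// Close flushes the compressor, like blobgen's Writer.Close.
	if err = zw.Close(); err != nil {
		return "", nil, err
	}
	return hex.EncodeToString(h.Sum(nil)), buf.Bytes(), nil
}

func main() {
	name, blob, err := writeBlob([][]byte{[]byte("hello "), []byte("world")})
	if err != nil {
		panic(err)
	}
	fmt.Println(name, len(blob))
}
```

Hashing the teed output rather than re-reading the temp file means the name is available the moment the writer is closed, which is why `finalizeCurrentBlob` can call `writer.Sum256()` without a second pass over the data.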
384
internal/blob/packer_test.go
Normal file
@@ -0,0 +1,384 @@
package blob

import (
	"bytes"
	"context"
	"crypto/sha256"
	"database/sql"
	"encoding/hex"
	"io"
	"testing"

	"filippo.io/age"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"github.com/klauspost/compress/zstd"
	"github.com/spf13/afero"
)

const (
	// Test key from test/insecure-integration-test.key
	testPrivateKey = "AGE-SECRET-KEY-19CR5YSFW59HM4TLD6GXVEDMZFTVVF7PPHKUT68TXSFPK7APHXA2QS2NJA5"
	testPublicKey  = "age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"
)

func TestPacker(t *testing.T) {
	// Initialize logger for tests
	log.Initialize(log.Config{})

	// Parse test identity
	identity, err := age.ParseX25519Identity(testPrivateKey)
	if err != nil {
		t.Fatalf("failed to parse test identity: %v", err)
	}

	t.Run("single chunk creates single blob", func(t *testing.T) {
		// Create test database
		db, err := database.NewTestDB()
		if err != nil {
			t.Fatalf("failed to create test db: %v", err)
		}
		defer func() { _ = db.Close() }()
		repos := database.NewRepositories(db)

		cfg := PackerConfig{
			MaxBlobSize:      10 * 1024 * 1024, // 10MB
			CompressionLevel: 3,
			Recipients:       []string{testPublicKey},
			Repositories:     repos,
			Fs:               afero.NewMemMapFs(),
		}
		packer, err := NewPacker(cfg)
		if err != nil {
			t.Fatalf("failed to create packer: %v", err)
		}

		// Create a chunk
		data := []byte("Hello, World!")
		hash := sha256.Sum256(data)
		hashStr := hex.EncodeToString(hash[:])

		// Create chunk in database first
		dbChunk := &database.Chunk{
			ChunkHash: hashStr,
			Size:      int64(len(data)),
		}
		err = repos.WithTx(context.Background(), func(ctx context.Context, tx *sql.Tx) error {
			return repos.Chunks.Create(ctx, tx, dbChunk)
		})
		if err != nil {
			t.Fatalf("failed to create chunk in db: %v", err)
		}

		chunk := &ChunkRef{
			Hash: hashStr,
			Data: data,
		}

		// Add chunk
		if err := packer.AddChunk(chunk); err != nil {
			t.Fatalf("failed to add chunk: %v", err)
		}

		// Flush
		if err := packer.Flush(); err != nil {
			t.Fatalf("failed to flush: %v", err)
		}

		// Get finished blobs
		blobs := packer.GetFinishedBlobs()
		if len(blobs) != 1 {
			t.Fatalf("expected 1 blob, got %d", len(blobs))
		}

		blob := blobs[0]
		if len(blob.Chunks) != 1 {
			t.Errorf("expected 1 chunk in blob, got %d", len(blob.Chunks))
		}

		// Note: Very small data may not compress well
		t.Logf("Compression: %d -> %d bytes", blob.Uncompressed, blob.Compressed)

		// Decrypt the blob data
		decrypted, err := age.Decrypt(bytes.NewReader(blob.Data), identity)
		if err != nil {
			t.Fatalf("failed to decrypt blob: %v", err)
		}

		// Decompress the decrypted data
		reader, err := zstd.NewReader(decrypted)
		if err != nil {
			t.Fatalf("failed to create decompressor: %v", err)
		}
		defer reader.Close()

		var decompressed bytes.Buffer
		if _, err := io.Copy(&decompressed, reader); err != nil {
			t.Fatalf("failed to decompress: %v", err)
		}

		if !bytes.Equal(decompressed.Bytes(), data) {
			t.Error("decompressed data doesn't match original")
		}
	})

	t.Run("multiple chunks packed together", func(t *testing.T) {
		// Create test database
		db, err := database.NewTestDB()
		if err != nil {
			t.Fatalf("failed to create test db: %v", err)
		}
		defer func() { _ = db.Close() }()
		repos := database.NewRepositories(db)

		cfg := PackerConfig{
			MaxBlobSize:      10 * 1024 * 1024, // 10MB
			CompressionLevel: 3,
			Recipients:       []string{testPublicKey},
			Repositories:     repos,
			Fs:               afero.NewMemMapFs(),
		}
		packer, err := NewPacker(cfg)
		if err != nil {
			t.Fatalf("failed to create packer: %v", err)
		}

		// Create multiple small chunks
		chunks := make([]*ChunkRef, 10)
		for i := 0; i < 10; i++ {
			data := bytes.Repeat([]byte{byte(i)}, 1000)
			hash := sha256.Sum256(data)
			hashStr := hex.EncodeToString(hash[:])

			// Create chunk in database first
			dbChunk := &database.Chunk{
				ChunkHash: hashStr,
				Size:      int64(len(data)),
			}
			err = repos.WithTx(context.Background(), func(ctx context.Context, tx *sql.Tx) error {
				return repos.Chunks.Create(ctx, tx, dbChunk)
			})
			if err != nil {
				t.Fatalf("failed to create chunk in db: %v", err)
			}

			chunks[i] = &ChunkRef{
				Hash: hashStr,
				Data: data,
			}
		}

		// Add all chunks
		for _, chunk := range chunks {
			err := packer.AddChunk(chunk)
			if err != nil {
				t.Fatalf("failed to add chunk: %v", err)
			}
		}

		// Flush
		if err := packer.Flush(); err != nil {
			t.Fatalf("failed to flush: %v", err)
		}

		// Should have one blob with all chunks
		blobs := packer.GetFinishedBlobs()
		if len(blobs) != 1 {
			t.Fatalf("expected 1 blob, got %d", len(blobs))
		}

		if len(blobs[0].Chunks) != 10 {
			t.Errorf("expected 10 chunks in blob, got %d", len(blobs[0].Chunks))
		}

		// Verify offsets are correct
		expectedOffset := int64(0)
		for i, chunkRef := range blobs[0].Chunks {
			if chunkRef.Offset != expectedOffset {
				t.Errorf("chunk %d: expected offset %d, got %d", i, expectedOffset, chunkRef.Offset)
			}
			if chunkRef.Length != 1000 {
				t.Errorf("chunk %d: expected length 1000, got %d", i, chunkRef.Length)
			}
			expectedOffset += chunkRef.Length
		}
	})

	t.Run("blob size limit enforced", func(t *testing.T) {
		// Create test database
		db, err := database.NewTestDB()
		if err != nil {
			t.Fatalf("failed to create test db: %v", err)
		}
		defer func() { _ = db.Close() }()
		repos := database.NewRepositories(db)

		// Small blob size limit to force multiple blobs
		cfg := PackerConfig{
			MaxBlobSize:      5000, // 5KB max
			CompressionLevel: 3,
			Recipients:       []string{testPublicKey},
			Repositories:     repos,
			Fs:               afero.NewMemMapFs(),
		}
		packer, err := NewPacker(cfg)
		if err != nil {
			t.Fatalf("failed to create packer: %v", err)
		}

		// Create chunks that will exceed the limit
		chunks := make([]*ChunkRef, 10)
		for i := 0; i < 10; i++ {
			data := bytes.Repeat([]byte{byte(i)}, 1000) // 1KB each
			hash := sha256.Sum256(data)
			hashStr := hex.EncodeToString(hash[:])

			// Create chunk in database first
			dbChunk := &database.Chunk{
				ChunkHash: hashStr,
				Size:      int64(len(data)),
			}
			err = repos.WithTx(context.Background(), func(ctx context.Context, tx *sql.Tx) error {
				return repos.Chunks.Create(ctx, tx, dbChunk)
			})
			if err != nil {
				t.Fatalf("failed to create chunk in db: %v", err)
			}

			chunks[i] = &ChunkRef{
				Hash: hashStr,
				Data: data,
			}
		}

		blobCount := 0

		// Add chunks and handle size limit errors
		for _, chunk := range chunks {
			err := packer.AddChunk(chunk)
			if err == ErrBlobSizeLimitExceeded {
				// Finalize current blob
				if err := packer.FinalizeBlob(); err != nil {
					t.Fatalf("failed to finalize blob: %v", err)
||||
}
|
||||
blobCount++
|
||||
// Retry adding the chunk
|
||||
if err := packer.AddChunk(chunk); err != nil {
|
||||
t.Fatalf("failed to add chunk after finalize: %v", err)
|
||||
}
|
||||
} else if err != nil {
|
||||
t.Fatalf("failed to add chunk: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Flush remaining
|
||||
if err := packer.Flush(); err != nil {
|
||||
t.Fatalf("failed to flush: %v", err)
|
||||
}
|
||||
|
||||
// Get all blobs
|
||||
blobs := packer.GetFinishedBlobs()
|
||||
totalBlobs := blobCount + len(blobs)
|
||||
|
||||
// Should have multiple blobs due to size limit
|
||||
if totalBlobs < 2 {
|
||||
t.Errorf("expected multiple blobs due to size limit, got %d", totalBlobs)
|
||||
}
|
||||
|
||||
// Verify each blob respects size limit (approximately)
|
||||
for _, blob := range blobs {
|
||||
if blob.Compressed > 6000 { // Allow some overhead
|
||||
t.Errorf("blob size %d exceeds limit", blob.Compressed)
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("with encryption", func(t *testing.T) {
|
||||
// Create test database
|
||||
db, err := database.NewTestDB()
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create test db: %v", err)
|
||||
}
|
||||
defer func() { _ = db.Close() }()
|
||||
repos := database.NewRepositories(db)
|
||||
|
||||
// Generate test identity (using the one from parent test)
|
||||
cfg := PackerConfig{
|
||||
MaxBlobSize: 10 * 1024 * 1024, // 10MB
|
||||
CompressionLevel: 3,
|
||||
Recipients: []string{testPublicKey},
|
||||
Repositories: repos,
|
||||
Fs: afero.NewMemMapFs(),
|
||||
}
|
||||
packer, err := NewPacker(cfg)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create packer: %v", err)
|
||||
}
|
||||
|
||||
// Create test data
|
||||
data := bytes.Repeat([]byte("Test data for encryption!"), 100)
|
||||
hash := sha256.Sum256(data)
|
||||
hashStr := hex.EncodeToString(hash[:])
|
||||
|
||||
// Create chunk in database first
|
||||
dbChunk := &database.Chunk{
|
||||
ChunkHash: hashStr,
|
||||
Size: int64(len(data)),
|
||||
}
|
||||
err = repos.WithTx(context.Background(), func(ctx context.Context, tx *sql.Tx) error {
|
||||
return repos.Chunks.Create(ctx, tx, dbChunk)
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create chunk in db: %v", err)
|
||||
}
|
||||
|
||||
chunk := &ChunkRef{
|
||||
Hash: hashStr,
|
||||
Data: data,
|
||||
}
|
||||
|
||||
// Add chunk and flush
|
||||
if err := packer.AddChunk(chunk); err != nil {
|
||||
t.Fatalf("failed to add chunk: %v", err)
|
||||
}
|
||||
if err := packer.Flush(); err != nil {
|
||||
t.Fatalf("failed to flush: %v", err)
|
||||
}
|
||||
|
||||
// Get blob
|
||||
blobs := packer.GetFinishedBlobs()
|
||||
if len(blobs) != 1 {
|
||||
t.Fatalf("expected 1 blob, got %d", len(blobs))
|
||||
}
|
||||
|
||||
blob := blobs[0]
|
||||
|
||||
// Decrypt the blob
|
||||
decrypted, err := age.Decrypt(bytes.NewReader(blob.Data), identity)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to decrypt blob: %v", err)
|
||||
}
|
||||
|
||||
var decryptedData bytes.Buffer
|
||||
if _, err := decryptedData.ReadFrom(decrypted); err != nil {
|
||||
t.Fatalf("failed to read decrypted data: %v", err)
|
||||
}
|
||||
|
||||
// Decompress
|
||||
reader, err := zstd.NewReader(&decryptedData)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create decompressor: %v", err)
|
||||
}
|
||||
defer reader.Close()
|
||||
|
||||
var decompressed bytes.Buffer
|
||||
if _, err := decompressed.ReadFrom(reader); err != nil {
|
||||
t.Fatalf("failed to decompress: %v", err)
|
||||
}
|
||||
|
||||
// Verify data
|
||||
if !bytes.Equal(decompressed.Bytes(), data) {
|
||||
t.Error("decrypted and decompressed data doesn't match original")
|
||||
}
|
||||
})
|
||||
}
|
||||
67
internal/blobgen/compress.go
Normal file
67
internal/blobgen/compress.go
Normal file
@@ -0,0 +1,67 @@
package blobgen

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"io"
)

// CompressResult contains the results of compression
type CompressResult struct {
	Data             []byte
	UncompressedSize int64
	CompressedSize   int64
	SHA256           string
}

// CompressData compresses and encrypts data, returning the result with hash
func CompressData(data []byte, compressionLevel int, recipients []string) (*CompressResult, error) {
	var buf bytes.Buffer

	// Create writer
	w, err := NewWriter(&buf, compressionLevel, recipients)
	if err != nil {
		return nil, fmt.Errorf("creating writer: %w", err)
	}

	// Write data
	if _, err := w.Write(data); err != nil {
		_ = w.Close()
		return nil, fmt.Errorf("writing data: %w", err)
	}

	// Close to flush
	if err := w.Close(); err != nil {
		return nil, fmt.Errorf("closing writer: %w", err)
	}

	return &CompressResult{
		Data:             buf.Bytes(),
		UncompressedSize: int64(len(data)),
		CompressedSize:   int64(buf.Len()),
		SHA256:           hex.EncodeToString(w.Sum256()),
	}, nil
}

// CompressStream compresses and encrypts from reader to writer, returning hash
func CompressStream(dst io.Writer, src io.Reader, compressionLevel int, recipients []string) (written int64, hash string, err error) {
	// Create writer
	w, err := NewWriter(dst, compressionLevel, recipients)
	if err != nil {
		return 0, "", fmt.Errorf("creating writer: %w", err)
	}
	defer func() { _ = w.Close() }()

	// Copy data
	if _, err := io.Copy(w, src); err != nil {
		return 0, "", fmt.Errorf("copying data: %w", err)
	}

	// Close to flush
	if err := w.Close(); err != nil {
		return 0, "", fmt.Errorf("closing writer: %w", err)
	}

	return w.BytesWritten(), hex.EncodeToString(w.Sum256()), nil
}
73  internal/blobgen/reader.go  Normal file
@@ -0,0 +1,73 @@
package blobgen

import (
	"crypto/sha256"
	"fmt"
	"hash"
	"io"

	"filippo.io/age"
	"github.com/klauspost/compress/zstd"
)

// Reader wraps decompression and decryption with SHA256 verification
type Reader struct {
	reader       io.Reader
	decompressor *zstd.Decoder
	decryptor    io.Reader
	hasher       hash.Hash
	teeReader    io.Reader
	bytesRead    int64
}

// NewReader creates a new Reader that decrypts, decompresses, and verifies data
func NewReader(r io.Reader, identity age.Identity) (*Reader, error) {
	// Create decryption reader
	decReader, err := age.Decrypt(r, identity)
	if err != nil {
		return nil, fmt.Errorf("creating decryption reader: %w", err)
	}

	// Create decompression reader
	decompressor, err := zstd.NewReader(decReader)
	if err != nil {
		return nil, fmt.Errorf("creating decompression reader: %w", err)
	}

	// Create SHA256 hasher
	hasher := sha256.New()

	// Create tee reader that reads from decompressor and writes to hasher
	teeReader := io.TeeReader(decompressor, hasher)

	return &Reader{
		reader:       r,
		decompressor: decompressor,
		decryptor:    decReader,
		hasher:       hasher,
		teeReader:    teeReader,
	}, nil
}

// Read implements io.Reader
func (r *Reader) Read(p []byte) (n int, err error) {
	n, err = r.teeReader.Read(p)
	r.bytesRead += int64(n)
	return n, err
}

// Close closes the decompressor
func (r *Reader) Close() error {
	r.decompressor.Close()
	return nil
}

// Sum256 returns the SHA256 hash of all data read
func (r *Reader) Sum256() []byte {
	return r.hasher.Sum(nil)
}

// BytesRead returns the number of uncompressed bytes read
func (r *Reader) BytesRead() int64 {
	return r.bytesRead
}
112  internal/blobgen/writer.go  Normal file
@@ -0,0 +1,112 @@
package blobgen

import (
	"crypto/sha256"
	"fmt"
	"hash"
	"io"

	"filippo.io/age"
	"github.com/klauspost/compress/zstd"
)

// Writer wraps compression and encryption with SHA256 hashing
type Writer struct {
	writer           io.Writer      // Final destination
	compressor       *zstd.Encoder  // Compression layer
	encryptor        io.WriteCloser // Encryption layer
	hasher           hash.Hash      // SHA256 hasher
	teeWriter        io.Writer      // Tees data to hasher
	compressionLevel int
	bytesWritten     int64
}

// NewWriter creates a new Writer that compresses, encrypts, and hashes data
func NewWriter(w io.Writer, compressionLevel int, recipients []string) (*Writer, error) {
	// Validate compression level
	if err := validateCompressionLevel(compressionLevel); err != nil {
		return nil, err
	}

	// Create SHA256 hasher
	hasher := sha256.New()

	// Parse recipients
	var ageRecipients []age.Recipient
	for _, recipient := range recipients {
		r, err := age.ParseX25519Recipient(recipient)
		if err != nil {
			return nil, fmt.Errorf("parsing recipient %s: %w", recipient, err)
		}
		ageRecipients = append(ageRecipients, r)
	}

	// Create encryption writer
	encWriter, err := age.Encrypt(w, ageRecipients...)
	if err != nil {
		return nil, fmt.Errorf("creating encryption writer: %w", err)
	}

	// Create compression writer with encryption as destination
	compressor, err := zstd.NewWriter(encWriter,
		zstd.WithEncoderLevel(zstd.EncoderLevelFromZstd(compressionLevel)),
		zstd.WithEncoderConcurrency(1), // Use single thread for streaming
	)
	if err != nil {
		_ = encWriter.Close()
		return nil, fmt.Errorf("creating compression writer: %w", err)
	}

	// Create tee writer that writes to both compressor and hasher
	teeWriter := io.MultiWriter(compressor, hasher)

	return &Writer{
		writer:           w,
		compressor:       compressor,
		encryptor:        encWriter,
		hasher:           hasher,
		teeWriter:        teeWriter,
		compressionLevel: compressionLevel,
	}, nil
}

// Write implements io.Writer
func (w *Writer) Write(p []byte) (n int, err error) {
	n, err = w.teeWriter.Write(p)
	w.bytesWritten += int64(n)
	return n, err
}

// Close closes all layers and returns any errors
func (w *Writer) Close() error {
	// Close compressor first
	if err := w.compressor.Close(); err != nil {
		return fmt.Errorf("closing compressor: %w", err)
	}

	// Then close encryptor
	if err := w.encryptor.Close(); err != nil {
		return fmt.Errorf("closing encryptor: %w", err)
	}

	return nil
}

// Sum256 returns the SHA256 hash of all data written
func (w *Writer) Sum256() []byte {
	return w.hasher.Sum(nil)
}

// BytesWritten returns the number of uncompressed bytes written
func (w *Writer) BytesWritten() int64 {
	return w.bytesWritten
}

func validateCompressionLevel(level int) error {
	// Zstd compression levels: 1-19 (default is 3)
	// SpeedFastest = 1, SpeedDefault = 3, SpeedBetterCompression = 7, SpeedBestCompression = 11
	if level < 1 || level > 19 {
		return fmt.Errorf("invalid compression level %d: must be between 1 and 19", level)
	}
	return nil
}
172  internal/chunker/chunker.go  Normal file
@@ -0,0 +1,172 @@
package chunker

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"

	"github.com/jotfs/fastcdc-go"
)

// Chunk represents a single chunk of data produced by the content-defined chunking algorithm.
// Each chunk is identified by its SHA256 hash and contains the raw data along with
// its position and size information from the original file.
type Chunk struct {
	Hash   string // Content hash of the chunk
	Data   []byte // Chunk data
	Offset int64  // Offset in the original file
	Size   int64  // Size of the chunk
}

// Chunker provides content-defined chunking using the FastCDC algorithm.
// It splits data into variable-sized chunks based on content patterns, ensuring
// that identical data sequences produce identical chunks regardless of their
// position in the file. This enables efficient deduplication.
type Chunker struct {
	avgChunkSize int
	minChunkSize int
	maxChunkSize int
}

// NewChunker creates a new chunker with the specified average chunk size.
// The actual chunk sizes will vary between avgChunkSize/4 and avgChunkSize*4
// as recommended by the FastCDC algorithm. Typical values for avgChunkSize
// are 64KB (65536), 256KB (262144), or 1MB (1048576).
func NewChunker(avgChunkSize int64) *Chunker {
	// FastCDC recommends min = avg/4 and max = avg*4
	return &Chunker{
		avgChunkSize: int(avgChunkSize),
		minChunkSize: int(avgChunkSize / 4),
		maxChunkSize: int(avgChunkSize * 4),
	}
}

// ChunkReader splits the reader into content-defined chunks and returns all chunks at once.
// This method loads all chunk data into memory, so it should only be used for
// reasonably sized inputs. For large files or streams, use ChunkReaderStreaming instead.
// Returns an error if chunking fails or if reading from the input fails.
func (c *Chunker) ChunkReader(r io.Reader) ([]Chunk, error) {
	opts := fastcdc.Options{
		MinSize:     c.minChunkSize,
		AverageSize: c.avgChunkSize,
		MaxSize:     c.maxChunkSize,
	}

	chunker, err := fastcdc.NewChunker(r, opts)
	if err != nil {
		return nil, fmt.Errorf("creating chunker: %w", err)
	}

	var chunks []Chunk
	offset := int64(0)

	for {
		chunk, err := chunker.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, fmt.Errorf("reading chunk: %w", err)
		}

		// Calculate hash
		hash := sha256.Sum256(chunk.Data)

		// Make a copy of the data since FastCDC reuses the buffer
		chunkData := make([]byte, len(chunk.Data))
		copy(chunkData, chunk.Data)

		chunks = append(chunks, Chunk{
			Hash:   hex.EncodeToString(hash[:]),
			Data:   chunkData,
			Offset: offset,
			Size:   int64(len(chunk.Data)),
		})

		offset += int64(len(chunk.Data))
	}

	return chunks, nil
}

// ChunkCallback is a function called for each chunk as it's processed.
// The callback receives a Chunk containing the hash, data, offset, and size.
// If the callback returns an error, chunk processing stops and the error is propagated.
type ChunkCallback func(chunk Chunk) error

// ChunkReaderStreaming splits the reader into chunks and calls the callback for each chunk.
// This is the preferred method for processing large files or streams as it doesn't
// accumulate all chunks in memory. The callback is invoked for each chunk as it's
// produced, allowing for streaming processing and immediate storage or transmission.
// Returns the SHA256 hash of the entire file content and an error if chunking fails,
// reading fails, or if the callback returns an error.
func (c *Chunker) ChunkReaderStreaming(r io.Reader, callback ChunkCallback) (string, error) {
	// Create a tee reader to calculate full file hash while chunking
	fileHasher := sha256.New()
	teeReader := io.TeeReader(r, fileHasher)

	opts := fastcdc.Options{
		MinSize:     c.minChunkSize,
		AverageSize: c.avgChunkSize,
		MaxSize:     c.maxChunkSize,
	}

	chunker, err := fastcdc.NewChunker(teeReader, opts)
	if err != nil {
		return "", fmt.Errorf("creating chunker: %w", err)
	}

	offset := int64(0)

	for {
		chunk, err := chunker.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return "", fmt.Errorf("reading chunk: %w", err)
		}

		// Calculate chunk hash
		hash := sha256.Sum256(chunk.Data)

		// Make a copy of the data since FastCDC reuses the buffer
		chunkData := make([]byte, len(chunk.Data))
		copy(chunkData, chunk.Data)

		if err := callback(Chunk{
			Hash:   hex.EncodeToString(hash[:]),
			Data:   chunkData,
			Offset: offset,
			Size:   int64(len(chunk.Data)),
		}); err != nil {
			return "", fmt.Errorf("callback error: %w", err)
		}

		offset += int64(len(chunk.Data))
	}

	// Return the full file hash
	return hex.EncodeToString(fileHasher.Sum(nil)), nil
}

// ChunkFile splits a file into content-defined chunks by reading the entire file.
// This is a convenience method that opens the file and passes it to ChunkReader.
// For large files, consider using ChunkReaderStreaming with a file handle instead.
// Returns an error if the file cannot be opened or if chunking fails.
func (c *Chunker) ChunkFile(path string) ([]Chunk, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("opening file: %w", err)
	}
	defer func() {
		if err := file.Close(); err != nil && err.Error() != "invalid argument" {
			// Log error or handle as needed
			_ = err
		}
	}()

	return c.ChunkReader(file)
}
77  internal/chunker/chunker_isolated_test.go  Normal file
@@ -0,0 +1,77 @@
package chunker

import (
	"bytes"
	"testing"
)

func TestChunkerExpectedChunkCount(t *testing.T) {
	tests := []struct {
		name         string
		fileSize     int
		avgChunkSize int64
		minExpected  int
		maxExpected  int
	}{
		{
			name:         "1MB file with 64KB average",
			fileSize:     1024 * 1024,
			avgChunkSize: 64 * 1024,
			minExpected:  8,  // At least half the expected count
			maxExpected:  32, // At most double the expected count
		},
		{
			name:         "10MB file with 256KB average",
			fileSize:     10 * 1024 * 1024,
			avgChunkSize: 256 * 1024,
			minExpected:  10, // FastCDC may produce larger chunks
			maxExpected:  80,
		},
		{
			name:         "512KB file with 64KB average",
			fileSize:     512 * 1024,
			avgChunkSize: 64 * 1024,
			minExpected:  4, // ~8 expected
			maxExpected:  16,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			chunker := NewChunker(tt.avgChunkSize)

			// Create data with some variation to trigger chunk boundaries
			data := make([]byte, tt.fileSize)
			for i := 0; i < len(data); i++ {
				// Use a pattern that should create boundaries
				data[i] = byte((i * 17) ^ (i >> 5))
			}

			chunks, err := chunker.ChunkReader(bytes.NewReader(data))
			if err != nil {
				t.Fatalf("chunking failed: %v", err)
			}

			t.Logf("Created %d chunks for %d bytes with %d average chunk size",
				len(chunks), tt.fileSize, tt.avgChunkSize)

			if len(chunks) < tt.minExpected {
				t.Errorf("too few chunks: got %d, expected at least %d",
					len(chunks), tt.minExpected)
			}
			if len(chunks) > tt.maxExpected {
				t.Errorf("too many chunks: got %d, expected at most %d",
					len(chunks), tt.maxExpected)
			}

			// Verify chunks reconstruct to original
			var reconstructed []byte
			for _, chunk := range chunks {
				reconstructed = append(reconstructed, chunk.Data...)
			}
			if !bytes.Equal(data, reconstructed) {
				t.Error("reconstructed data doesn't match original")
			}
		})
	}
}
128  internal/chunker/chunker_test.go  Normal file
@@ -0,0 +1,128 @@
package chunker

import (
	"bytes"
	"crypto/rand"
	"testing"
)

func TestChunker(t *testing.T) {
	t.Run("small file produces single chunk", func(t *testing.T) {
		chunker := NewChunker(1024 * 1024)         // 1MB average
		data := bytes.Repeat([]byte("hello"), 100) // 500 bytes

		chunks, err := chunker.ChunkReader(bytes.NewReader(data))
		if err != nil {
			t.Fatalf("chunking failed: %v", err)
		}

		if len(chunks) != 1 {
			t.Errorf("expected 1 chunk, got %d", len(chunks))
		}

		if chunks[0].Size != int64(len(data)) {
			t.Errorf("expected chunk size %d, got %d", len(data), chunks[0].Size)
		}
	})

	t.Run("large file produces multiple chunks", func(t *testing.T) {
		chunker := NewChunker(256 * 1024) // 256KB average chunk size

		// Generate 2MB of random data
		data := make([]byte, 2*1024*1024)
		if _, err := rand.Read(data); err != nil {
			t.Fatalf("failed to generate random data: %v", err)
		}

		chunks, err := chunker.ChunkReader(bytes.NewReader(data))
		if err != nil {
			t.Fatalf("chunking failed: %v", err)
		}

		// Should produce multiple chunks - with FastCDC we expect around 8 chunks for 2MB with 256KB average
		if len(chunks) < 4 || len(chunks) > 16 {
			t.Errorf("expected 4-16 chunks, got %d", len(chunks))
		}

		// Verify chunks reconstruct original data
		var reconstructed []byte
		for _, chunk := range chunks {
			reconstructed = append(reconstructed, chunk.Data...)
		}

		if !bytes.Equal(data, reconstructed) {
			t.Error("reconstructed data doesn't match original")
		}

		// Verify offsets
		var expectedOffset int64
		for i, chunk := range chunks {
			if chunk.Offset != expectedOffset {
				t.Errorf("chunk %d: expected offset %d, got %d", i, expectedOffset, chunk.Offset)
			}
			expectedOffset += chunk.Size
		}
	})

	t.Run("deterministic chunking", func(t *testing.T) {
		chunker1 := NewChunker(256 * 1024)
		chunker2 := NewChunker(256 * 1024)

		// Use deterministic data
		data := bytes.Repeat([]byte("abcdefghijklmnopqrstuvwxyz"), 20000) // ~520KB

		chunks1, err := chunker1.ChunkReader(bytes.NewReader(data))
		if err != nil {
			t.Fatalf("chunking failed: %v", err)
		}

		chunks2, err := chunker2.ChunkReader(bytes.NewReader(data))
		if err != nil {
			t.Fatalf("chunking failed: %v", err)
		}

		// Should produce same chunks
		if len(chunks1) != len(chunks2) {
			t.Fatalf("different number of chunks: %d vs %d", len(chunks1), len(chunks2))
		}

		for i := range chunks1 {
			if chunks1[i].Hash != chunks2[i].Hash {
				t.Errorf("chunk %d: different hashes", i)
			}
			if chunks1[i].Size != chunks2[i].Size {
				t.Errorf("chunk %d: different sizes", i)
			}
		}
	})
}

func TestChunkBoundaries(t *testing.T) {
	chunker := NewChunker(256 * 1024) // 256KB average

	// FastCDC uses avg/4 for min and avg*4 for max
	avgSize := int64(256 * 1024)
	minSize := avgSize / 4
	maxSize := avgSize * 4

	// Test that minimum chunk size is respected
	data := make([]byte, minSize+1024)
	if _, err := rand.Read(data); err != nil {
		t.Fatalf("failed to generate random data: %v", err)
	}

	chunks, err := chunker.ChunkReader(bytes.NewReader(data))
	if err != nil {
		t.Fatalf("chunking failed: %v", err)
	}

	for i, chunk := range chunks {
		// Last chunk can be smaller than minimum
		if i < len(chunks)-1 && chunk.Size < minSize {
			t.Errorf("chunk %d size %d is below minimum %d", i, chunk.Size, minSize)
		}
		if chunk.Size > maxSize {
			t.Errorf("chunk %d size %d exceeds maximum %d", i, chunk.Size, maxSize)
		}
	}
}
@@ -2,28 +2,63 @@ package cli

import (
	"context"
	"errors"
	"fmt"
	"os"
	"os/signal"
	"path/filepath"
	"syscall"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/globals"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/pidlock"
	"git.eeqj.de/sneak/vaultik/internal/snapshot"
	"git.eeqj.de/sneak/vaultik/internal/storage"
	"git.eeqj.de/sneak/vaultik/internal/vaultik"
	"github.com/adrg/xdg"
	"go.uber.org/fx"
)

// AppOptions contains common options for creating the fx application
// AppOptions contains common options for creating the fx application.
// It includes the configuration file path, logging options, and additional
// fx modules and invocations that should be included in the application.
type AppOptions struct {
	ConfigPath string
	LogOptions log.LogOptions
	Modules    []fx.Option
	Invokes    []fx.Option
}

// NewApp creates a new fx application with common modules
// setupGlobals sets up the globals with application startup time
func setupGlobals(lc fx.Lifecycle, g *globals.Globals) {
	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			g.StartTime = time.Now().UTC()
			return nil
		},
	})
}

// NewApp creates a new fx application with common modules.
// It sets up the base modules (config, database, logging, globals) and
// combines them with any additional modules specified in the options.
// The returned fx.App is ready to be started with RunApp.
func NewApp(opts AppOptions) *fx.App {
	baseModules := []fx.Option{
		fx.Supply(config.ConfigPath(opts.ConfigPath)),
		fx.Supply(opts.LogOptions),
		fx.Provide(globals.New),
		fx.Provide(log.New),
		config.Module,
		database.Module,
		log.Module,
		storage.Module,
		snapshot.Module,
		fx.Provide(vaultik.New),
		fx.Invoke(setupGlobals),
		fx.NopLogger,
	}

@@ -33,24 +68,77 @@ func NewApp(opts AppOptions) *fx.App {
	return fx.New(allOptions...)
}

// RunApp starts and stops the fx application within the given context
// RunApp starts and stops the fx application within the given context.
// It handles graceful shutdown on interrupt signals (SIGINT, SIGTERM) and
// ensures the application stops cleanly. The function blocks until the
// application completes or is interrupted. Returns an error if startup fails.
func RunApp(ctx context.Context, app *fx.App) error {
	// Set up signal handling for graceful shutdown
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)

	// Create a context that will be cancelled on signal
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	// Start the app
	if err := app.Start(ctx); err != nil {
		return fmt.Errorf("failed to start app: %w", err)
	}
	defer func() {
		if err := app.Stop(ctx); err != nil {
			fmt.Printf("error stopping app: %v\n", err)

	// Handle shutdown
	shutdownComplete := make(chan struct{})
	go func() {
		defer close(shutdownComplete)
		<-sigChan
		log.Notice("Received interrupt signal, shutting down gracefully...")

		// Create a timeout context for shutdown
		shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer shutdownCancel()

		if err := app.Stop(shutdownCtx); err != nil {
			log.Error("Error during shutdown", "error", err)
		}
	}()

	// Wait for context cancellation
	<-ctx.Done()
	return nil
	// Wait for either the signal handler to complete shutdown or the app to request shutdown
	select {
	case <-shutdownComplete:
		// Shutdown completed via signal
		return nil
	case <-ctx.Done():
		// Context cancelled (shouldn't happen in normal operation)
		if err := app.Stop(context.Background()); err != nil {
			log.Error("Error stopping app", "error", err)
		}
		return ctx.Err()
	case <-app.Done():
		// App finished running (e.g., backup completed)
		return nil
	}
}

// RunWithApp is a helper that creates and runs an fx app with the given options
// RunWithApp is a helper that creates and runs an fx app with the given options.
// It combines NewApp and RunApp into a single convenient function. This is the
// preferred way to run CLI commands that need the full application context.
// It acquires a PID lock before starting to prevent concurrent instances.
func RunWithApp(ctx context.Context, opts AppOptions) error {
	// Acquire PID lock to prevent concurrent instances
	lockDir := filepath.Join(xdg.DataHome, "berlin.sneak.app.vaultik")
	lock, err := pidlock.Acquire(lockDir)
	if err != nil {
		if errors.Is(err, pidlock.ErrAlreadyRunning) {
			return fmt.Errorf("cannot start: %w", err)
		}
		return fmt.Errorf("failed to acquire lock: %w", err)
	}
	defer func() {
		if err := lock.Release(); err != nil {
			log.Warn("Failed to release PID lock", "error", err)
		}
	}()

	app := NewApp(opts)
	return RunApp(ctx, app)
}
@@ -1,83 +0,0 @@
```go
package cli

import (
	"context"
	"fmt"
	"os"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/globals"
	"github.com/spf13/cobra"
	"go.uber.org/fx"
)

// BackupOptions contains options for the backup command
type BackupOptions struct {
	ConfigPath string
	Daemon     bool
	Cron       bool
	Prune      bool
}

// NewBackupCommand creates the backup command
func NewBackupCommand() *cobra.Command {
	opts := &BackupOptions{}

	cmd := &cobra.Command{
		Use:   "backup",
		Short: "Perform incremental backup",
		Long: `Backup configured directories using incremental deduplication and encryption.

Config is located at /etc/vaultik/config.yml, but can be overridden by specifying
a path using --config or by setting VAULTIK_CONFIG to a path.`,
		Args: cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			// If --config not specified, check environment variable
			if opts.ConfigPath == "" {
				opts.ConfigPath = os.Getenv("VAULTIK_CONFIG")
			}
			// If still not specified, use default
			if opts.ConfigPath == "" {
				defaultConfig := "/etc/vaultik/config.yml"
				if _, err := os.Stat(defaultConfig); err == nil {
					opts.ConfigPath = defaultConfig
				} else {
					return fmt.Errorf("no config file specified, VAULTIK_CONFIG not set, and %s not found", defaultConfig)
				}
			}
			return runBackup(cmd.Context(), opts)
		},
	}

	cmd.Flags().StringVar(&opts.ConfigPath, "config", "", "Path to config file")
	cmd.Flags().BoolVar(&opts.Daemon, "daemon", false, "Run in daemon mode with inotify monitoring")
	cmd.Flags().BoolVar(&opts.Cron, "cron", false, "Run in cron mode (silent unless error)")
	cmd.Flags().BoolVar(&opts.Prune, "prune", false, "Delete all previous snapshots and unreferenced blobs after backup")

	return cmd
}

func runBackup(ctx context.Context, opts *BackupOptions) error {
	return RunWithApp(ctx, AppOptions{
		ConfigPath: opts.ConfigPath,
		Invokes: []fx.Option{
			fx.Invoke(func(g *globals.Globals, cfg *config.Config, repos *database.Repositories) error {
				// TODO: Implement backup logic
				fmt.Printf("Running backup with config: %s\n", opts.ConfigPath)
				fmt.Printf("Version: %s, Commit: %s\n", g.Version, g.Commit)
				fmt.Printf("Index path: %s\n", cfg.IndexPath)
				if opts.Daemon {
					fmt.Println("Running in daemon mode")
				}
				if opts.Cron {
					fmt.Println("Running in cron mode")
				}
				if opts.Prune {
					fmt.Println("Pruning enabled - will delete old snapshots after backup")
				}
				return nil
			}),
		},
	})
}
```
94
internal/cli/duration.go
Normal file
@@ -0,0 +1,94 @@
```go
package cli

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
	"time"
)

// parseDuration parses duration strings. Supports standard Go duration format
// (e.g., "3h30m", "1h45m30s") as well as extended units:
//   - d: days (e.g., "30d", "7d")
//   - w: weeks (e.g., "2w", "4w")
//   - mo: months (30 days) (e.g., "6mo", "1mo")
//   - y: years (365 days) (e.g., "1y", "2y")
//
// Units can be combined: "1y6mo", "2w3d", "1d12h30m"
func parseDuration(s string) (time.Duration, error) {
	// First try standard Go duration parsing
	if d, err := time.ParseDuration(s); err == nil {
		return d, nil
	}

	// Extended duration parsing
	// Check for negative values
	if strings.HasPrefix(strings.TrimSpace(s), "-") {
		return 0, fmt.Errorf("negative durations are not supported")
	}

	// Pattern matches: number + unit, repeated
	re := regexp.MustCompile(`(\d+(?:\.\d+)?)\s*([a-zA-Z]+)`)
	matches := re.FindAllStringSubmatch(s, -1)

	if len(matches) == 0 {
		return 0, fmt.Errorf("invalid duration format: %q", s)
	}

	var total time.Duration

	for _, match := range matches {
		valueStr := match[1]
		unit := strings.ToLower(match[2])

		value, err := strconv.ParseFloat(valueStr, 64)
		if err != nil {
			return 0, fmt.Errorf("invalid number %q: %w", valueStr, err)
		}

		var d time.Duration
		switch unit {
		// Standard time units
		case "ns", "nanosecond", "nanoseconds":
			d = time.Duration(value)
		case "us", "µs", "microsecond", "microseconds":
			d = time.Duration(value * float64(time.Microsecond))
		case "ms", "millisecond", "milliseconds":
			d = time.Duration(value * float64(time.Millisecond))
		case "s", "sec", "second", "seconds":
			d = time.Duration(value * float64(time.Second))
		case "m", "min", "minute", "minutes":
			d = time.Duration(value * float64(time.Minute))
		case "h", "hr", "hour", "hours":
			d = time.Duration(value * float64(time.Hour))
		// Extended units
		case "d", "day", "days":
			d = time.Duration(value * float64(24*time.Hour))
		case "w", "week", "weeks":
			d = time.Duration(value * float64(7*24*time.Hour))
		case "mo", "month", "months":
			// Using 30 days as approximation
			d = time.Duration(value * float64(30*24*time.Hour))
		case "y", "year", "years":
			// Using 365 days as approximation
			d = time.Duration(value * float64(365*24*time.Hour))
		default:
			// Try parsing as standard Go duration unit
			testStr := fmt.Sprintf("1%s", unit)
			if _, err := time.ParseDuration(testStr); err == nil {
				// It's a valid Go duration unit, parse the full value
				fullStr := fmt.Sprintf("%g%s", value, unit)
				if d, err = time.ParseDuration(fullStr); err != nil {
					return 0, fmt.Errorf("invalid duration %q: %w", fullStr, err)
				}
			} else {
				return 0, fmt.Errorf("unknown time unit %q", unit)
			}
		}

		total += d
	}

	return total, nil
}
```
263
internal/cli/duration_test.go
Normal file
@@ -0,0 +1,263 @@
```go
package cli

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestParseDuration(t *testing.T) {
	tests := []struct {
		name     string
		input    string
		expected time.Duration
		wantErr  bool
	}{
		// Standard Go durations
		{name: "standard seconds", input: "30s", expected: 30 * time.Second},
		{name: "standard minutes", input: "45m", expected: 45 * time.Minute},
		{name: "standard hours", input: "2h", expected: 2 * time.Hour},
		{name: "standard combined", input: "3h30m", expected: 3*time.Hour + 30*time.Minute},
		{name: "standard complex", input: "1h45m30s", expected: 1*time.Hour + 45*time.Minute + 30*time.Second},
		{name: "standard with milliseconds", input: "1s500ms", expected: 1*time.Second + 500*time.Millisecond},
		// Extended units - days
		{name: "single day", input: "1d", expected: 24 * time.Hour},
		{name: "multiple days", input: "7d", expected: 7 * 24 * time.Hour},
		{name: "fractional days", input: "1.5d", expected: 36 * time.Hour},
		{name: "days spelled out", input: "3days", expected: 3 * 24 * time.Hour},
		// Extended units - weeks
		{name: "single week", input: "1w", expected: 7 * 24 * time.Hour},
		{name: "multiple weeks", input: "4w", expected: 4 * 7 * 24 * time.Hour},
		{name: "weeks spelled out", input: "2weeks", expected: 2 * 7 * 24 * time.Hour},
		// Extended units - months
		{name: "single month", input: "1mo", expected: 30 * 24 * time.Hour},
		{name: "multiple months", input: "6mo", expected: 6 * 30 * 24 * time.Hour},
		{name: "months spelled out", input: "3months", expected: 3 * 30 * 24 * time.Hour},
		// Extended units - years
		{name: "single year", input: "1y", expected: 365 * 24 * time.Hour},
		{name: "multiple years", input: "2y", expected: 2 * 365 * 24 * time.Hour},
		{name: "years spelled out", input: "1year", expected: 365 * 24 * time.Hour},
		// Combined extended units
		{name: "weeks and days", input: "2w3d", expected: 2*7*24*time.Hour + 3*24*time.Hour},
		{name: "years and months", input: "1y6mo", expected: 365*24*time.Hour + 6*30*24*time.Hour},
		{name: "days and hours", input: "1d12h", expected: 24*time.Hour + 12*time.Hour},
		{name: "complex combination", input: "1y2mo3w4d5h6m7s", expected: 365*24*time.Hour + 2*30*24*time.Hour + 3*7*24*time.Hour + 4*24*time.Hour + 5*time.Hour + 6*time.Minute + 7*time.Second},
		{name: "with spaces", input: "1d 12h 30m", expected: 24*time.Hour + 12*time.Hour + 30*time.Minute},
		// Edge cases
		{name: "zero duration", input: "0s", expected: 0},
		{name: "large duration", input: "10y", expected: 10 * 365 * 24 * time.Hour},
		// Error cases
		{name: "empty string", input: "", wantErr: true},
		{name: "invalid format", input: "abc", wantErr: true},
		{name: "unknown unit", input: "5x", wantErr: true},
		{name: "invalid number", input: "xyzd", wantErr: true},
		{name: "negative not supported", input: "-5d", wantErr: true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := parseDuration(tt.input)

			if tt.wantErr {
				assert.Error(t, err, "expected error for input %q", tt.input)
				return
			}

			assert.NoError(t, err, "unexpected error for input %q", tt.input)
			assert.Equal(t, tt.expected, got, "duration mismatch for input %q", tt.input)
		})
	}
}

func TestParseDurationSpecialCases(t *testing.T) {
	// Test that standard Go durations work exactly as expected
	standardDurations := []string{
		"300ms",
		"1.5h",
		"2h45m",
		"72h",
		"1us",
		"1µs",
		"1ns",
	}

	for _, d := range standardDurations {
		expected, err := time.ParseDuration(d)
		assert.NoError(t, err)

		got, err := parseDuration(d)
		assert.NoError(t, err)
		assert.Equal(t, expected, got, "standard duration %q should parse identically", d)
	}
}

func TestParseDurationRealWorldExamples(t *testing.T) {
	// Test real-world snapshot purge scenarios
	tests := []struct {
		description string
		input       string
		olderThan   time.Duration
	}{
		{description: "keep snapshots from last 30 days", input: "30d", olderThan: 30 * 24 * time.Hour},
		{description: "keep snapshots from last 6 months", input: "6mo", olderThan: 6 * 30 * 24 * time.Hour},
		{description: "keep snapshots from last year", input: "1y", olderThan: 365 * 24 * time.Hour},
		{description: "keep snapshots from last week and a half", input: "1w3d", olderThan: 10 * 24 * time.Hour},
		{description: "keep snapshots from last 90 days", input: "90d", olderThan: 90 * 24 * time.Hour},
	}

	for _, tt := range tests {
		t.Run(tt.description, func(t *testing.T) {
			got, err := parseDuration(tt.input)
			assert.NoError(t, err)
			assert.Equal(t, tt.olderThan, got)

			// Verify the duration makes sense for snapshot purging
			assert.Greater(t, got, time.Hour, "snapshot purge duration should be at least an hour")
		})
	}
}
```
```diff
@@ -4,7 +4,9 @@ import (
 	"os"
 )
 
-// CLIEntry is the main entry point for the CLI application
+// CLIEntry is the main entry point for the CLI application.
+// It creates the root command, executes it, and exits with status 1
+// if an error occurs. This function should be called from main().
 func CLIEntry() {
 	rootCmd := NewRootCommand()
 	if err := rootCmd.Execute(); err != nil {
```
```diff
@@ -18,7 +18,7 @@ func TestCLIEntry(t *testing.T) {
 	}
 
 	// Verify all subcommands are registered
-	expectedCommands := []string{"backup", "restore", "prune", "verify", "fetch"}
+	expectedCommands := []string{"snapshot", "store", "restore", "prune", "verify", "fetch"}
 	for _, expected := range expectedCommands {
 		found := false
 		for _, cmd := range cmd.Commands() {
@@ -32,19 +32,24 @@ func TestCLIEntry(t *testing.T) {
 		}
 	}
 
-	// Verify backup command has proper flags
-	backupCmd, _, err := cmd.Find([]string{"backup"})
+	// Verify snapshot command has subcommands
+	snapshotCmd, _, err := cmd.Find([]string{"snapshot"})
 	if err != nil {
-		t.Errorf("Failed to find backup command: %v", err)
+		t.Errorf("Failed to find snapshot command: %v", err)
 	} else {
-		if backupCmd.Flag("config") == nil {
-			t.Error("Backup command missing --config flag")
-		}
-		if backupCmd.Flag("daemon") == nil {
-			t.Error("Backup command missing --daemon flag")
-		}
-		if backupCmd.Flag("cron") == nil {
-			t.Error("Backup command missing --cron flag")
+		// Check snapshot subcommands
+		expectedSubCommands := []string{"create", "list", "purge", "verify"}
+		for _, expected := range expectedSubCommands {
+			found := false
+			for _, subcmd := range snapshotCmd.Commands() {
+				if subcmd.Use == expected || subcmd.Name() == expected {
+					found = true
+					break
+				}
+			}
+			if !found {
+				t.Errorf("Expected snapshot subcommand '%s' not found", expected)
+			}
 		}
 	}
 }
```
```diff
@@ -3,20 +3,29 @@ package cli
 import (
 	"context"
 	"fmt"
 	"os"
 
 	"git.eeqj.de/sneak/vaultik/internal/config"
 	"git.eeqj.de/sneak/vaultik/internal/database"
 	"git.eeqj.de/sneak/vaultik/internal/globals"
+	"git.eeqj.de/sneak/vaultik/internal/log"
+	"git.eeqj.de/sneak/vaultik/internal/snapshot"
+	"git.eeqj.de/sneak/vaultik/internal/storage"
 	"github.com/spf13/cobra"
 	"go.uber.org/fx"
 )
 
 // FetchOptions contains options for the fetch command
 type FetchOptions struct {
 	Bucket     string
 	Prefix     string
 	SnapshotID string
 	FilePath   string
 	Target     string
 }
+
+// FetchApp contains all dependencies needed for fetch
+type FetchApp struct {
+	Globals      *globals.Globals
+	Config       *config.Config
+	Repositories *database.Repositories
+	Storage      storage.Storer
+	DB           *database.DB
+	Shutdowner   fx.Shutdowner
+}
 
 // NewFetchCommand creates the fetch command
@@ -24,65 +33,106 @@ func NewFetchCommand() *cobra.Command {
 	opts := &FetchOptions{}
 
 	cmd := &cobra.Command{
-		Use:   "fetch",
+		Use:   "fetch <snapshot-id> <file-path> <target-path>",
 		Short: "Extract single file from backup",
-		Long:  `Download and decrypt a single file from a backup snapshot`,
-		Args:  cobra.NoArgs,
+		Long: `Download and decrypt a single file from a backup snapshot.
+
+This command extracts a specific file from the snapshot and saves it to the target path.
+The age_secret_key must be configured in the config file for decryption.`,
+		Args: cobra.ExactArgs(3),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			// Validate required flags
-			if opts.Bucket == "" {
-				return fmt.Errorf("--bucket is required")
+			snapshotID := args[0]
+			filePath := args[1]
+			targetPath := args[2]
+
+			// Use unified config resolution
+			configPath, err := ResolveConfigPath()
+			if err != nil {
+				return err
 			}
-			if opts.Prefix == "" {
-				return fmt.Errorf("--prefix is required")
-			}
-			if opts.SnapshotID == "" {
-				return fmt.Errorf("--snapshot is required")
-			}
-			if opts.FilePath == "" {
-				return fmt.Errorf("--file is required")
-			}
-			if opts.Target == "" {
-				return fmt.Errorf("--target is required")
-			}
-			return runFetch(cmd.Context(), opts)
+
+			// Use the app framework like other commands
+			rootFlags := GetRootFlags()
+			return RunWithApp(cmd.Context(), AppOptions{
+				ConfigPath: configPath,
+				LogOptions: log.LogOptions{
+					Verbose: rootFlags.Verbose,
+					Debug:   rootFlags.Debug,
+				},
+				Modules: []fx.Option{
+					snapshot.Module,
+					fx.Provide(fx.Annotate(
+						func(g *globals.Globals, cfg *config.Config, repos *database.Repositories,
+							storer storage.Storer, db *database.DB, shutdowner fx.Shutdowner) *FetchApp {
+							return &FetchApp{
+								Globals:      g,
+								Config:       cfg,
+								Repositories: repos,
+								Storage:      storer,
+								DB:           db,
+								Shutdowner:   shutdowner,
+							}
+						},
+					)),
+				},
+				Invokes: []fx.Option{
+					fx.Invoke(func(app *FetchApp, lc fx.Lifecycle) {
+						lc.Append(fx.Hook{
+							OnStart: func(ctx context.Context) error {
+								// Start the fetch operation in a goroutine
+								go func() {
+									// Run the fetch operation
+									if err := app.runFetch(ctx, snapshotID, filePath, targetPath, opts); err != nil {
+										if err != context.Canceled {
+											log.Error("Fetch operation failed", "error", err)
+										}
+									}
+
+									// Shutdown the app when fetch completes
+									if err := app.Shutdowner.Shutdown(); err != nil {
+										log.Error("Failed to shutdown", "error", err)
+									}
+								}()
+								return nil
+							},
+							OnStop: func(ctx context.Context) error {
+								log.Debug("Stopping fetch operation")
+								return nil
+							},
+						})
+					}),
+				},
+			})
 		},
 	}
 
 	cmd.Flags().StringVar(&opts.Bucket, "bucket", "", "S3 bucket name")
 	cmd.Flags().StringVar(&opts.Prefix, "prefix", "", "S3 prefix")
 	cmd.Flags().StringVar(&opts.SnapshotID, "snapshot", "", "Snapshot ID")
 	cmd.Flags().StringVar(&opts.FilePath, "file", "", "Path of file to extract from backup")
 	cmd.Flags().StringVar(&opts.Target, "target", "", "Target path for extracted file")
 
 	return cmd
 }
 
-func runFetch(ctx context.Context, opts *FetchOptions) error {
-	if os.Getenv("VAULTIK_PRIVATE_KEY") == "" {
-		return fmt.Errorf("VAULTIK_PRIVATE_KEY environment variable must be set")
+// runFetch executes the fetch operation
+func (app *FetchApp) runFetch(ctx context.Context, snapshotID, filePath, targetPath string, opts *FetchOptions) error {
+	// Check for age_secret_key
+	if app.Config.AgeSecretKey == "" {
+		return fmt.Errorf("age_secret_key missing from config - required for fetch")
 	}
 
-	app := fx.New(
-		fx.Supply(opts),
-		fx.Provide(globals.New),
-		// Additional modules will be added here
-		fx.Invoke(func(g *globals.Globals) error {
-			// TODO: Implement fetch logic
-			fmt.Printf("Fetching %s from snapshot %s to %s\n", opts.FilePath, opts.SnapshotID, opts.Target)
-			return nil
-		}),
-		fx.NopLogger,
-	)
+	log.Info("Starting fetch operation",
+		"snapshot_id", snapshotID,
+		"file_path", filePath,
+		"target_path", targetPath,
+		"bucket", app.Config.S3.Bucket,
+		"prefix", app.Config.S3.Prefix,
+	)
 
-	if err := app.Start(ctx); err != nil {
-		return fmt.Errorf("failed to start fetch: %w", err)
-	}
-	defer func() {
-		if err := app.Stop(ctx); err != nil {
-			fmt.Printf("error stopping app: %v\n", err)
-		}
-	}()
+	// TODO: Implement fetch logic
+	// 1. Download and decrypt database from S3
+	// 2. Find the file metadata and chunk list
+	// 3. Download and decrypt only the necessary blobs
+	// 4. Reconstruct the file from chunks
+	// 5. Write file to target path with proper metadata
 
-	return nil
+	fmt.Printf("Fetching %s from snapshot %s to %s\n", filePath, snapshotID, targetPath)
+	fmt.Println("TODO: Implement fetch logic")
+
+	return nil
 }
```
70
internal/cli/info.go
Normal file
@@ -0,0 +1,70 @@
```go
package cli

import (
	"context"
	"os"

	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/vaultik"
	"github.com/spf13/cobra"
	"go.uber.org/fx"
)

// NewInfoCommand creates the info command
func NewInfoCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "info",
		Short: "Display system and configuration information",
		Long: `Shows information about the current vaultik configuration, including:
- System details (OS, architecture, version)
- Storage configuration (S3 bucket, endpoint)
- Backup settings (source directories, compression)
- Encryption configuration (recipients)
- Local database statistics`,
		Args: cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}

			// Use the app framework
			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
				},
				Modules: []fx.Option{},
				Invokes: []fx.Option{
					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								go func() {
									if err := v.ShowInfo(); err != nil {
										if err != context.Canceled {
											log.Error("Failed to show info", "error", err)
											os.Exit(1)
										}
									}
									if err := v.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								v.Cancel()
								return nil
							},
						})
					}),
				},
			})
		},
	}

	return cmd
}
```
```diff
@@ -2,77 +2,79 @@ package cli
 
 import (
 	"context"
 	"fmt"
 	"os"
 
-	"git.eeqj.de/sneak/vaultik/internal/globals"
+	"git.eeqj.de/sneak/vaultik/internal/log"
+	"git.eeqj.de/sneak/vaultik/internal/vaultik"
 	"github.com/spf13/cobra"
 	"go.uber.org/fx"
 )
 
-// PruneOptions contains options for the prune command
-type PruneOptions struct {
-	Bucket string
-	Prefix string
-	DryRun bool
-}
-
 // NewPruneCommand creates the prune command
 func NewPruneCommand() *cobra.Command {
-	opts := &PruneOptions{}
+	opts := &vaultik.PruneOptions{}
 
 	cmd := &cobra.Command{
 		Use:   "prune",
 		Short: "Remove unreferenced blobs",
-		Long:  `Delete blobs that are no longer referenced by any snapshot`,
-		Args:  cobra.NoArgs,
+		Long: `Removes blobs that are not referenced by any snapshot.
+
+This command scans all snapshots and their manifests to build a list of
+referenced blobs, then removes any blobs in storage that are not in this list.
+
+Use this command after deleting snapshots with 'vaultik purge' to reclaim
+storage space.`,
+		Args: cobra.NoArgs,
 		RunE: func(cmd *cobra.Command, args []string) error {
-			// Validate required flags
-			if opts.Bucket == "" {
-				return fmt.Errorf("--bucket is required")
+			// Use unified config resolution
+			configPath, err := ResolveConfigPath()
+			if err != nil {
+				return err
 			}
-			if opts.Prefix == "" {
-				return fmt.Errorf("--prefix is required")
-			}
-			return runPrune(cmd.Context(), opts)
+
+			// Use the app framework like other commands
+			rootFlags := GetRootFlags()
+			return RunWithApp(cmd.Context(), AppOptions{
+				ConfigPath: configPath,
+				LogOptions: log.LogOptions{
+					Verbose: rootFlags.Verbose,
+					Debug:   rootFlags.Debug,
+				},
+				Modules: []fx.Option{},
+				Invokes: []fx.Option{
+					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
+						lc.Append(fx.Hook{
+							OnStart: func(ctx context.Context) error {
+								// Start the prune operation in a goroutine
+								go func() {
+									// Run the prune operation
+									if err := v.PruneBlobs(opts); err != nil {
+										if err != context.Canceled {
+											log.Error("Prune operation failed", "error", err)
+											os.Exit(1)
+										}
+									}
+
+									// Shutdown the app when prune completes
+									if err := v.Shutdowner.Shutdown(); err != nil {
+										log.Error("Failed to shutdown", "error", err)
+									}
+								}()
+								return nil
+							},
+							OnStop: func(ctx context.Context) error {
+								log.Debug("Stopping prune operation")
+								v.Cancel()
+								return nil
+							},
+						})
+					}),
+				},
+			})
 		},
 	}
 
 	cmd.Flags().StringVar(&opts.Bucket, "bucket", "", "S3 bucket name")
 	cmd.Flags().StringVar(&opts.Prefix, "prefix", "", "S3 prefix")
 	cmd.Flags().BoolVar(&opts.DryRun, "dry-run", false, "Show what would be deleted without actually deleting")
+	cmd.Flags().BoolVar(&opts.Force, "force", false, "Skip confirmation prompt")
 
 	return cmd
 }
 
-func runPrune(ctx context.Context, opts *PruneOptions) error {
-	if os.Getenv("VAULTIK_PRIVATE_KEY") == "" {
-		return fmt.Errorf("VAULTIK_PRIVATE_KEY environment variable must be set")
-	}
-
-	app := fx.New(
-		fx.Supply(opts),
-		fx.Provide(globals.New),
-		// Additional modules will be added here
-		fx.Invoke(func(g *globals.Globals) error {
-			// TODO: Implement prune logic
-			fmt.Printf("Pruning bucket %s with prefix %s\n", opts.Bucket, opts.Prefix)
-			if opts.DryRun {
-				fmt.Println("Running in dry-run mode")
-			}
-			return nil
-		}),
-		fx.NopLogger,
-	)
-
-	if err := app.Start(ctx); err != nil {
-		return fmt.Errorf("failed to start prune: %w", err)
-	}
-	defer func() {
-		if err := app.Stop(ctx); err != nil {
-			fmt.Printf("error stopping app: %v\n", err)
-		}
-	}()
-
-	return nil
-}
```
99
internal/cli/purge.go
Normal file
@@ -0,0 +1,99 @@
```go
package cli

import (
	"context"
	"fmt"
	"os"

	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/vaultik"
	"github.com/spf13/cobra"
	"go.uber.org/fx"
)

// PurgeOptions contains options for the purge command
type PurgeOptions struct {
	KeepLatest bool
	OlderThan  string
	Force      bool
}

// NewPurgeCommand creates the purge command
func NewPurgeCommand() *cobra.Command {
	opts := &PurgeOptions{}

	cmd := &cobra.Command{
		Use:   "purge",
		Short: "Purge old snapshots",
		Long: `Removes snapshots based on age or count criteria.

This command allows you to:
- Keep only the latest snapshot (--keep-latest)
- Remove snapshots older than a specific duration (--older-than)

Config is located at /etc/vaultik/config.yml by default, but can be overridden by
specifying a path using --config or by setting VAULTIK_CONFIG to a path.`,
		Args: cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			// Validate flags
			if !opts.KeepLatest && opts.OlderThan == "" {
				return fmt.Errorf("must specify either --keep-latest or --older-than")
			}
			if opts.KeepLatest && opts.OlderThan != "" {
				return fmt.Errorf("cannot specify both --keep-latest and --older-than")
			}

			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}

			// Use the app framework like other commands
			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
				},
				Modules: []fx.Option{},
				Invokes: []fx.Option{
					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								// Start the purge operation in a goroutine
								go func() {
									// Run the purge operation
									if err := v.PurgeSnapshots(opts.KeepLatest, opts.OlderThan, opts.Force); err != nil {
										if err != context.Canceled {
											log.Error("Purge operation failed", "error", err)
											os.Exit(1)
										}
									}

									// Shutdown the app when purge completes
									if err := v.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								log.Debug("Stopping purge operation")
								v.Cancel()
								return nil
							},
						})
					}),
				},
			})
		},
	}

	cmd.Flags().BoolVar(&opts.KeepLatest, "keep-latest", false, "Keep only the latest snapshot")
	cmd.Flags().StringVar(&opts.OlderThan, "older-than", "", "Remove snapshots older than duration (e.g. 30d, 6mo, 1y)")
	cmd.Flags().BoolVar(&opts.Force, "force", false, "Skip confirmation prompts")

	return cmd
}
```
@@ -3,19 +3,30 @@ package cli
import (
	"context"
	"fmt"
	"os"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/globals"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/snapshot"
	"git.eeqj.de/sneak/vaultik/internal/storage"
	"github.com/spf13/cobra"
	"go.uber.org/fx"
)

// RestoreOptions contains options for the restore command
type RestoreOptions struct {
	Bucket     string
	Prefix     string
	SnapshotID string
	TargetDir  string
}

// RestoreApp contains all dependencies needed for restore
type RestoreApp struct {
	Globals      *globals.Globals
	Config       *config.Config
	Repositories *database.Repositories
	Storage      storage.Storer
	DB           *database.DB
	Shutdowner   fx.Shutdowner
}

// NewRestoreCommand creates the restore command
@@ -23,61 +34,103 @@ func NewRestoreCommand() *cobra.Command {
	opts := &RestoreOptions{}

	cmd := &cobra.Command{
		Use:   "restore",
		Use:   "restore <snapshot-id> <target-dir>",
		Short: "Restore files from backup",
		Long:  `Download and decrypt files from a backup snapshot`,
		Args:  cobra.NoArgs,
		Long: `Download and decrypt files from a backup snapshot.

This command will restore all files from the specified snapshot to the target directory.
The age_secret_key must be configured in the config file for decryption.`,
		Args: cobra.ExactArgs(2),
		RunE: func(cmd *cobra.Command, args []string) error {
			// Validate required flags
			if opts.Bucket == "" {
				return fmt.Errorf("--bucket is required")
			snapshotID := args[0]
			opts.TargetDir = args[1]

			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}
			if opts.Prefix == "" {
				return fmt.Errorf("--prefix is required")
			}
			if opts.SnapshotID == "" {
				return fmt.Errorf("--snapshot is required")
			}
			if opts.TargetDir == "" {
				return fmt.Errorf("--target is required")
			}
			return runRestore(cmd.Context(), opts)

			// Use the app framework like other commands
			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
				},
				Modules: []fx.Option{
					snapshot.Module,
					fx.Provide(fx.Annotate(
						func(g *globals.Globals, cfg *config.Config, repos *database.Repositories,
							storer storage.Storer, db *database.DB, shutdowner fx.Shutdowner) *RestoreApp {
							return &RestoreApp{
								Globals:      g,
								Config:       cfg,
								Repositories: repos,
								Storage:      storer,
								DB:           db,
								Shutdowner:   shutdowner,
							}
						},
					)),
				},
				Invokes: []fx.Option{
					fx.Invoke(func(app *RestoreApp, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								// Start the restore operation in a goroutine
								go func() {
									// Run the restore operation
									if err := app.runRestore(ctx, snapshotID, opts); err != nil {
										if err != context.Canceled {
											log.Error("Restore operation failed", "error", err)
										}
									}

									// Shutdown the app when restore completes
									if err := app.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								log.Debug("Stopping restore operation")
								return nil
							},
						})
					}),
				},
			})
		},
	}

	cmd.Flags().StringVar(&opts.Bucket, "bucket", "", "S3 bucket name")
	cmd.Flags().StringVar(&opts.Prefix, "prefix", "", "S3 prefix")
	cmd.Flags().StringVar(&opts.SnapshotID, "snapshot", "", "Snapshot ID to restore")
	cmd.Flags().StringVar(&opts.TargetDir, "target", "", "Target directory for restore")

	return cmd
}

func runRestore(ctx context.Context, opts *RestoreOptions) error {
	if os.Getenv("VAULTIK_PRIVATE_KEY") == "" {
		return fmt.Errorf("VAULTIK_PRIVATE_KEY environment variable must be set")
// runRestore executes the restore operation
func (app *RestoreApp) runRestore(ctx context.Context, snapshotID string, opts *RestoreOptions) error {
	// Check for age_secret_key
	if app.Config.AgeSecretKey == "" {
		return fmt.Errorf("age_secret_key missing from config - required for restore")
	}

	app := fx.New(
		fx.Supply(opts),
		fx.Provide(globals.New),
		// Additional modules will be added here
		fx.Invoke(func(g *globals.Globals) error {
			// TODO: Implement restore logic
			fmt.Printf("Restoring snapshot %s to %s\n", opts.SnapshotID, opts.TargetDir)
			return nil
		}),
		fx.NopLogger,
	log.Info("Starting restore operation",
		"snapshot_id", snapshotID,
		"target_dir", opts.TargetDir,
		"bucket", app.Config.S3.Bucket,
		"prefix", app.Config.S3.Prefix,
	)

	if err := app.Start(ctx); err != nil {
		return fmt.Errorf("failed to start restore: %w", err)
	}
	defer func() {
		if err := app.Stop(ctx); err != nil {
			fmt.Printf("error stopping app: %v\n", err)
		}
	}()
	// TODO: Implement restore logic
	// 1. Download and decrypt database from S3
	// 2. Download and decrypt blobs
	// 3. Reconstruct files from chunks
	// 4. Write files to target directory with proper metadata

	fmt.Printf("Restoring snapshot %s to %s\n", snapshotID, opts.TargetDir)
	fmt.Println("TODO: Implement restore logic")

	return nil
}
@@ -1,10 +1,25 @@
package cli

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// NewRootCommand creates the root cobra command
// RootFlags holds global flags that apply to all commands.
// These flags are defined on the root command and inherited by all subcommands.
type RootFlags struct {
	ConfigPath string
	Verbose    bool
	Debug      bool
}

var rootFlags RootFlags

// NewRootCommand creates the root cobra command for the vaultik CLI.
// It sets up the command structure, global flags, and adds all subcommands.
// This is the main entry point for the CLI command hierarchy.
func NewRootCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use: "vaultik",
@@ -15,15 +30,51 @@ on the source system.`,
		SilenceUsage: true,
	}

	// Add global flags
	cmd.PersistentFlags().StringVar(&rootFlags.ConfigPath, "config", "", "Path to config file (default: $VAULTIK_CONFIG or /etc/vaultik/config.yml)")
	cmd.PersistentFlags().BoolVarP(&rootFlags.Verbose, "verbose", "v", false, "Enable verbose output")
	cmd.PersistentFlags().BoolVar(&rootFlags.Debug, "debug", false, "Enable debug output")

	// Add subcommands
	cmd.AddCommand(
		NewBackupCommand(),
		NewRestoreCommand(),
		NewPruneCommand(),
		NewVerifyCommand(),
		NewFetchCommand(),
		SnapshotCmd(),
		NewStoreCommand(),
		NewSnapshotCommand(),
		NewInfoCommand(),
	)

	return cmd
}

// GetRootFlags returns the global flags that were parsed from the command line.
// This allows subcommands to access global flag values like verbosity and config path.
func GetRootFlags() RootFlags {
	return rootFlags
}

// ResolveConfigPath resolves the config file path from flags, environment, or default.
// It checks in order: 1) --config flag, 2) VAULTIK_CONFIG environment variable,
// 3) default location /etc/vaultik/config.yml. Returns an error if no valid
// config file can be found through any of these methods.
func ResolveConfigPath() (string, error) {
	// First check global flag
	if rootFlags.ConfigPath != "" {
		return rootFlags.ConfigPath, nil
	}

	// Then check environment variable
	if envPath := os.Getenv("VAULTIK_CONFIG"); envPath != "" {
		return envPath, nil
	}

	// Finally check default location
	defaultPath := "/etc/vaultik/config.yml"
	if _, err := os.Stat(defaultPath); err == nil {
		return defaultPath, nil
	}

	return "", fmt.Errorf("no config file specified, VAULTIK_CONFIG not set, and %s not found", defaultPath)
}
@@ -1,90 +1,283 @@
package cli

import (
	"context"
	"fmt"
	"os"

	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/vaultik"
	"github.com/spf13/cobra"
	"go.uber.org/fx"
)

func SnapshotCmd() *cobra.Command {
// NewSnapshotCommand creates the snapshot command and subcommands
func NewSnapshotCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "snapshot",
		Short: "Manage snapshots",
		Long:  "Commands for listing, removing, and querying snapshots",
		Short: "Snapshot management commands",
		Long:  "Commands for creating, listing, and managing snapshots",
	}

	cmd.AddCommand(snapshotListCmd())
	cmd.AddCommand(snapshotRmCmd())
	cmd.AddCommand(snapshotLatestCmd())
	// Add subcommands
	cmd.AddCommand(newSnapshotCreateCommand())
	cmd.AddCommand(newSnapshotListCommand())
	cmd.AddCommand(newSnapshotPurgeCommand())
	cmd.AddCommand(newSnapshotVerifyCommand())

	return cmd
}

func snapshotListCmd() *cobra.Command {
	var (
		bucket string
		prefix string
		limit  int
	)
// newSnapshotCreateCommand creates the 'snapshot create' subcommand
func newSnapshotCreateCommand() *cobra.Command {
	opts := &vaultik.SnapshotCreateOptions{}

	cmd := &cobra.Command{
		Use:   "create",
		Short: "Create a new snapshot",
		Long: `Creates a new snapshot of the configured directories.

Config is located at /etc/vaultik/config.yml by default, but can be overridden by
specifying a path using --config or by setting VAULTIK_CONFIG to a path.`,
		Args: cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}

			// Use the backup functionality from cli package
			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
					Cron:    opts.Cron,
				},
				Modules: []fx.Option{},
				Invokes: []fx.Option{
					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								// Start the snapshot creation in a goroutine
								go func() {
									// Run the snapshot creation
									if err := v.CreateSnapshot(opts); err != nil {
										if err != context.Canceled {
											log.Error("Snapshot creation failed", "error", err)
										}
									}

									// Shutdown the app when snapshot completes
									if err := v.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								log.Debug("Stopping snapshot creation")
								// Cancel the Vaultik context
								v.Cancel()
								return nil
							},
						})
					}),
				},
			})
		},
	}

	cmd.Flags().BoolVar(&opts.Daemon, "daemon", false, "Run in daemon mode with inotify monitoring")
	cmd.Flags().BoolVar(&opts.Cron, "cron", false, "Run in cron mode (silent unless error)")
	cmd.Flags().BoolVar(&opts.Prune, "prune", false, "Delete all previous snapshots and unreferenced blobs after backup")

	return cmd
}

// newSnapshotListCommand creates the 'snapshot list' subcommand
func newSnapshotListCommand() *cobra.Command {
	var jsonOutput bool

	cmd := &cobra.Command{
		Use:   "list",
		Short: "List snapshots",
		Long:  "List all snapshots in the bucket, sorted by timestamp",
		Short: "List all snapshots",
		Long:  "Lists all snapshots with their ID, timestamp, and compressed size",
		Args:  cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			panic("unimplemented")
			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}

			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
				},
				Modules: []fx.Option{},
				Invokes: []fx.Option{
					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								go func() {
									if err := v.ListSnapshots(jsonOutput); err != nil {
										if err != context.Canceled {
											log.Error("Failed to list snapshots", "error", err)
											os.Exit(1)
										}
									}
									if err := v.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								v.Cancel()
								return nil
							},
						})
					}),
				},
			})
		},
	}

	cmd.Flags().StringVar(&bucket, "bucket", "", "S3 bucket name")
	cmd.Flags().StringVar(&prefix, "prefix", "", "S3 prefix")
	cmd.Flags().IntVar(&limit, "limit", 10, "Maximum number of snapshots to list")
	cmd.MarkFlagRequired("bucket")
	cmd.Flags().BoolVar(&jsonOutput, "json", false, "Output in JSON format")

	return cmd
}

func snapshotRmCmd() *cobra.Command {
	var (
		bucket   string
		prefix   string
		snapshot string
	)
// newSnapshotPurgeCommand creates the 'snapshot purge' subcommand
func newSnapshotPurgeCommand() *cobra.Command {
	var keepLatest bool
	var olderThan string
	var force bool

	cmd := &cobra.Command{
		Use:   "rm",
		Short: "Remove a snapshot",
		Long:  "Remove a snapshot and optionally its associated blobs",
		Use:   "purge",
		Short: "Purge old snapshots",
		Long:  "Removes snapshots based on age or count criteria",
		Args:  cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			panic("unimplemented")
			// Validate flags
			if !keepLatest && olderThan == "" {
				return fmt.Errorf("must specify either --keep-latest or --older-than")
			}
			if keepLatest && olderThan != "" {
				return fmt.Errorf("cannot specify both --keep-latest and --older-than")
			}

			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}

			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
				},
				Modules: []fx.Option{},
				Invokes: []fx.Option{
					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								go func() {
									if err := v.PurgeSnapshots(keepLatest, olderThan, force); err != nil {
										if err != context.Canceled {
											log.Error("Failed to purge snapshots", "error", err)
											os.Exit(1)
										}
									}
									if err := v.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								v.Cancel()
								return nil
							},
						})
					}),
				},
			})
		},
	}

	cmd.Flags().StringVar(&bucket, "bucket", "", "S3 bucket name")
	cmd.Flags().StringVar(&prefix, "prefix", "", "S3 prefix")
	cmd.Flags().StringVar(&snapshot, "snapshot", "", "Snapshot ID to remove")
	cmd.MarkFlagRequired("bucket")
	cmd.MarkFlagRequired("snapshot")
	cmd.Flags().BoolVar(&keepLatest, "keep-latest", false, "Keep only the latest snapshot")
	cmd.Flags().StringVar(&olderThan, "older-than", "", "Remove snapshots older than duration (e.g., 30d, 6m, 1y)")
	cmd.Flags().BoolVar(&force, "force", false, "Skip confirmation prompt")

	return cmd
}

func snapshotLatestCmd() *cobra.Command {
	var (
		bucket string
		prefix string
	)
// newSnapshotVerifyCommand creates the 'snapshot verify' subcommand
func newSnapshotVerifyCommand() *cobra.Command {
	var deep bool

	cmd := &cobra.Command{
		Use:   "latest",
		Short: "Get the latest snapshot ID",
		Long:  "Display the ID of the most recent snapshot",
		Use:   "verify <snapshot-id>",
		Short: "Verify snapshot integrity",
		Long:  "Verifies that all blobs referenced in a snapshot exist",
		Args:  cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			panic("unimplemented")
			snapshotID := args[0]

			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}

			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
				},
				Modules: []fx.Option{},
				Invokes: []fx.Option{
					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								go func() {
									if err := v.VerifySnapshot(snapshotID, deep); err != nil {
										if err != context.Canceled {
											log.Error("Verification failed", "error", err)
											os.Exit(1)
										}
									}
									if err := v.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								v.Cancel()
								return nil
							},
						})
					}),
				},
			})
		},
	}

	cmd.Flags().StringVar(&bucket, "bucket", "", "S3 bucket name")
	cmd.Flags().StringVar(&prefix, "prefix", "", "S3 prefix")
	cmd.MarkFlagRequired("bucket")
	cmd.Flags().BoolVar(&deep, "deep", false, "Download and verify blob hashes")

	return cmd
}

internal/cli/store.go (new file, 157 lines)
@@ -0,0 +1,157 @@
package cli

import (
	"context"
	"fmt"
	"strings"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/storage"
	"github.com/spf13/cobra"
	"go.uber.org/fx"
)

// StoreApp contains dependencies for store commands
type StoreApp struct {
	Storage    storage.Storer
	Shutdowner fx.Shutdowner
}

// NewStoreCommand creates the store command and subcommands
func NewStoreCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "store",
		Short: "Storage information commands",
		Long:  "Commands for viewing information about the S3 storage backend",
	}

	// Add subcommands
	cmd.AddCommand(newStoreInfoCommand())

	return cmd
}

// newStoreInfoCommand creates the 'store info' subcommand
func newStoreInfoCommand() *cobra.Command {
	return &cobra.Command{
		Use:   "info",
		Short: "Display storage information",
		Long:  "Shows S3 bucket configuration and storage statistics including snapshots and blobs",
		RunE: func(cmd *cobra.Command, args []string) error {
			return runWithApp(cmd.Context(), func(app *StoreApp) error {
				return app.Info(cmd.Context())
			})
		},
	}
}

// Info displays storage information
func (app *StoreApp) Info(ctx context.Context) error {
	// Get storage info
	storageInfo := app.Storage.Info()

	fmt.Printf("Storage Information\n")
	fmt.Printf("==================\n\n")
	fmt.Printf("Storage Configuration:\n")
	fmt.Printf("  Type: %s\n", storageInfo.Type)
	fmt.Printf("  Location: %s\n\n", storageInfo.Location)

	// Count snapshots by listing metadata/ prefix
	snapshotCount := 0
	snapshotCh := app.Storage.ListStream(ctx, "metadata/")
	snapshotDirs := make(map[string]bool)

	for object := range snapshotCh {
		if object.Err != nil {
			return fmt.Errorf("listing snapshots: %w", object.Err)
		}
		// Extract snapshot ID from path like metadata/2024-01-15-143052-hostname/
		parts := strings.Split(object.Key, "/")
		if len(parts) >= 2 && parts[0] == "metadata" && parts[1] != "" {
			snapshotDirs[parts[1]] = true
		}
	}
	snapshotCount = len(snapshotDirs)

	// Count blobs and calculate total size by listing blobs/ prefix
	blobCount := 0
	var totalSize int64

	blobCh := app.Storage.ListStream(ctx, "blobs/")
	for object := range blobCh {
		if object.Err != nil {
			return fmt.Errorf("listing blobs: %w", object.Err)
		}
		if !strings.HasSuffix(object.Key, "/") { // Skip directories
			blobCount++
			totalSize += object.Size
		}
	}

	fmt.Printf("Storage Statistics:\n")
	fmt.Printf("  Snapshots: %d\n", snapshotCount)
	fmt.Printf("  Blobs: %d\n", blobCount)
	fmt.Printf("  Total Size: %s\n", formatBytes(totalSize))

	return nil
}

// formatBytes formats bytes into human-readable format
func formatBytes(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

// runWithApp creates the FX app and runs the given function
func runWithApp(ctx context.Context, fn func(*StoreApp) error) error {
	var result error
	rootFlags := GetRootFlags()

	// Use unified config resolution
	configPath, err := ResolveConfigPath()
	if err != nil {
		return err
	}

	err = RunWithApp(ctx, AppOptions{
		ConfigPath: configPath,
		LogOptions: log.LogOptions{
			Verbose: rootFlags.Verbose,
			Debug:   rootFlags.Debug,
		},
		Modules: []fx.Option{
			fx.Provide(func(storer storage.Storer, shutdowner fx.Shutdowner) *StoreApp {
				return &StoreApp{
					Storage:    storer,
					Shutdowner: shutdowner,
				}
			}),
		},
		Invokes: []fx.Option{
			fx.Invoke(func(app *StoreApp, shutdowner fx.Shutdowner) {
				result = fn(app)
				// Shutdown after command completes
				go func() {
					time.Sleep(100 * time.Millisecond) // Brief delay to ensure clean shutdown
					if err := shutdowner.Shutdown(); err != nil {
						log.Error("Failed to shutdown", "error", err)
					}
				}()
			}),
		},
	})

	if err != nil {
		return err
	}
	return result
}
internal/cli/vaultik_snapshot_types.go (new file, 10 lines)
@@ -0,0 +1,10 @@
package cli

import "time"

// SnapshotInfo represents snapshot information for listing
type SnapshotInfo struct {
	ID             string    `json:"id"`
	Timestamp      time.Time `json:"timestamp"`
	CompressedSize int64     `json:"compressed_size"`
}
@@ -2,85 +2,93 @@ package cli

import (
	"context"
	"fmt"
	"os"

	"git.eeqj.de/sneak/vaultik/internal/globals"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/vaultik"
	"github.com/spf13/cobra"
	"go.uber.org/fx"
)

// VerifyOptions contains options for the verify command
type VerifyOptions struct {
	Bucket     string
	Prefix     string
	SnapshotID string
	Quick      bool
}

// NewVerifyCommand creates the verify command
func NewVerifyCommand() *cobra.Command {
	opts := &VerifyOptions{}
	opts := &vaultik.VerifyOptions{}

	cmd := &cobra.Command{
		Use:   "verify",
		Short: "Verify backup integrity",
		Long:  `Check that all referenced blobs exist and verify metadata integrity`,
		Args:  cobra.NoArgs,
		Use:   "verify <snapshot-id>",
		Short: "Verify snapshot integrity",
		Long: `Verifies that all blobs referenced in a snapshot exist and optionally verifies their contents.

Shallow verification (default):
- Downloads and decompresses manifest
- Checks existence of all blobs in S3
- Reports missing blobs

Deep verification (--deep):
- Downloads and decrypts database
- Verifies blob lists match between manifest and database
- Downloads, decrypts, and decompresses each blob
- Verifies SHA256 hash of each chunk matches database
- Ensures chunks are ordered correctly

The command will fail immediately on any verification error and exit with non-zero status.`,
		Args: cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			// Validate required flags
			if opts.Bucket == "" {
				return fmt.Errorf("--bucket is required")
			snapshotID := args[0]

			// Use unified config resolution
			configPath, err := ResolveConfigPath()
			if err != nil {
				return err
			}
			if opts.Prefix == "" {
				return fmt.Errorf("--prefix is required")
			}
			return runVerify(cmd.Context(), opts)

			// Use the app framework for all verification
			rootFlags := GetRootFlags()
			return RunWithApp(cmd.Context(), AppOptions{
				ConfigPath: configPath,
				LogOptions: log.LogOptions{
					Verbose: rootFlags.Verbose,
					Debug:   rootFlags.Debug,
				},
				Modules: []fx.Option{},
				Invokes: []fx.Option{
					fx.Invoke(func(v *vaultik.Vaultik, lc fx.Lifecycle) {
						lc.Append(fx.Hook{
							OnStart: func(ctx context.Context) error {
								// Run the verify operation directly
								go func() {
									var err error
									if opts.Deep {
										err = v.RunDeepVerify(snapshotID, opts)
									} else {
										err = v.VerifySnapshot(snapshotID, false)
									}

									if err != nil {
										if err != context.Canceled {
											log.Error("Verification failed", "error", err)
											os.Exit(1)
										}
									}
									if err := v.Shutdowner.Shutdown(); err != nil {
										log.Error("Failed to shutdown", "error", err)
									}
								}()
								return nil
							},
							OnStop: func(ctx context.Context) error {
								log.Debug("Stopping verify operation")
								v.Cancel()
								return nil
							},
						})
					}),
				},
			})
		},
	}

	cmd.Flags().StringVar(&opts.Bucket, "bucket", "", "S3 bucket name")
	cmd.Flags().StringVar(&opts.Prefix, "prefix", "", "S3 prefix")
	cmd.Flags().StringVar(&opts.SnapshotID, "snapshot", "", "Snapshot ID to verify (optional, defaults to latest)")
	cmd.Flags().BoolVar(&opts.Quick, "quick", false, "Perform quick verification by checking blob existence and S3 content hashes without downloading")
	cmd.Flags().BoolVar(&opts.Deep, "deep", false, "Perform deep verification by downloading and verifying all blob contents")

	return cmd
}

func runVerify(ctx context.Context, opts *VerifyOptions) error {
	if os.Getenv("VAULTIK_PRIVATE_KEY") == "" {
		return fmt.Errorf("VAULTIK_PRIVATE_KEY environment variable must be set")
	}

	app := fx.New(
		fx.Supply(opts),
		fx.Provide(globals.New),
		// Additional modules will be added here
		fx.Invoke(func(g *globals.Globals) error {
			// TODO: Implement verify logic
			if opts.SnapshotID == "" {
				fmt.Printf("Verifying latest snapshot in bucket %s with prefix %s\n", opts.Bucket, opts.Prefix)
			} else {
				fmt.Printf("Verifying snapshot %s in bucket %s with prefix %s\n", opts.SnapshotID, opts.Bucket, opts.Prefix)
			}
			if opts.Quick {
				fmt.Println("Performing quick verification")
			} else {
				fmt.Println("Performing deep verification")
			}
			return nil
		}),
		fx.NopLogger,
	)

	if err := app.Start(ctx); err != nil {
		return fmt.Errorf("failed to start verify: %w", err)
	}
	defer func() {
		if err := app.Stop(ctx); err != nil {
			fmt.Printf("error stopping app: %v\n", err)
		}
	}()

	return nil
}

@@ -3,30 +3,71 @@ package config
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"git.eeqj.de/sneak/smartconfig"
|
||||
"github.com/adrg/xdg"
|
||||
"go.uber.org/fx"
|
||||
"gopkg.in/yaml.v3"
|
||||
)
|
||||
|
||||
// Config represents the application configuration
|
||||
const appName = "berlin.sneak.app.vaultik"
|
||||
|
||||
// expandTilde expands ~ at the start of a path to the user's home directory.
|
||||
func expandTilde(path string) string {
|
||||
if path == "~" {
|
||||
home, _ := os.UserHomeDir()
|
||||
return home
|
||||
}
|
||||
if strings.HasPrefix(path, "~/") {
|
||||
home, _ := os.UserHomeDir()
|
||||
return filepath.Join(home, path[2:])
|
||||
}
|
||||
return path
|
||||
}
|
||||
|
||||
// expandTildeInURL expands ~ in file:// URLs.
|
||||
func expandTildeInURL(url string) string {
|
||||
if strings.HasPrefix(url, "file://~/") {
|
||||
home, _ := os.UserHomeDir()
|
||||
return "file://" + filepath.Join(home, url[9:])
|
||||
}
|
||||
return url
|
||||
}
|
||||
|
||||
// Config represents the application configuration for Vaultik.
// It defines all settings for backup operations, including source directories,
// encryption recipients, storage configuration, and performance tuning parameters.
// Configuration is typically loaded from a YAML file.
type Config struct {
	AgeRecipient      string        `yaml:"age_recipient"`
	AgeRecipients     []string      `yaml:"age_recipients"`
	AgeSecretKey      string        `yaml:"age_secret_key"`
	BackupInterval    time.Duration `yaml:"backup_interval"`
	BlobSizeLimit     int64         `yaml:"blob_size_limit"`
	ChunkSize         int64         `yaml:"chunk_size"`
	BlobSizeLimit     Size          `yaml:"blob_size_limit"`
	ChunkSize         Size          `yaml:"chunk_size"`
	Exclude           []string      `yaml:"exclude"`
	FullScanInterval  time.Duration `yaml:"full_scan_interval"`
	Hostname          string        `yaml:"hostname"`
	IndexPath         string        `yaml:"index_path"`
	IndexPrefix       string        `yaml:"index_prefix"`
	MinTimeBetweenRun time.Duration `yaml:"min_time_between_run"`
	S3                S3Config      `yaml:"s3"`
	SourceDirs        []string      `yaml:"source_dirs"`
	CompressionLevel  int           `yaml:"compression_level"`

	// StorageURL specifies the storage backend using a URL format.
	// Takes precedence over S3Config if set.
	// Supported formats:
	//   - s3://bucket/prefix?endpoint=host&region=us-east-1
	//   - file:///path/to/backup
	// For S3 URLs, credentials are still read from s3.access_key_id and s3.secret_access_key.
	StorageURL string `yaml:"storage_url"`
}

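To orient readers on how these fields fit together, a hypothetical config file might look like the fragment below. All values are illustrative (the age key is one of the test keys used elsewhere in this change set); only the field names come from the struct's yaml tags:

```yaml
age_recipients:
  - age1278m9q7dp3chsh2dcy82qk27v047zywyvtxwnj4cvt0z65jw6a7q5dqhfj
source_dirs:
  - ~/Documents
  - /etc
exclude:
  - "*.tmp"
chunk_size: 10MB         # Size fields accept human-readable strings or raw byte counts
blob_size_limit: 10GB
compression_level: 3
storage_url: "s3://my-bucket/backups?endpoint=s3.example.com&region=us-east-1"
s3:
  access_key_id: EXAMPLEKEYID
  secret_access_key: EXAMPLESECRET
```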
// S3Config represents S3 storage configuration
// S3Config represents S3 storage configuration for backup storage.
// It supports both AWS S3 and S3-compatible storage services.
// All fields except UseSSL and PartSize are required.
type S3Config struct {
	Endpoint string `yaml:"endpoint"`
	Bucket   string `yaml:"bucket"`
@@ -35,13 +76,17 @@ type S3Config struct {
	SecretAccessKey string `yaml:"secret_access_key"`
	Region          string `yaml:"region"`
	UseSSL          bool   `yaml:"use_ssl"`
	PartSize        int64  `yaml:"part_size"`
	PartSize        Size   `yaml:"part_size"`
}

// ConfigPath wraps the config file path for fx injection
// ConfigPath wraps the config file path for fx dependency injection.
// This type allows the config file path to be injected as a distinct type
// rather than a plain string, avoiding conflicts with other string dependencies.
type ConfigPath string

// New creates a new Config instance
// New creates a new Config instance by loading from the specified path.
// This function is used by the fx dependency injection framework.
// Returns an error if the path is empty or if loading fails.
func New(path ConfigPath) (*Config, error) {
	if path == "" {
		return nil, fmt.Errorf("config path not provided")
@@ -55,32 +100,50 @@ func New(path ConfigPath) (*Config, error) {
	return cfg, nil
}

// Load reads and parses the configuration file
// Load reads and parses the configuration file from the specified path.
// It applies default values for optional fields, performs environment variable
// substitution using smartconfig, and validates the configuration.
// The configuration file should be in YAML format. Returns an error if the file
// cannot be read, parsed, or if validation fails.
func Load(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	// Load config using smartconfig for interpolation
	sc, err := smartconfig.NewFromConfigPath(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read config file: %w", err)
		return nil, fmt.Errorf("failed to load config file: %w", err)
	}

	cfg := &Config{
		// Set defaults
		BlobSizeLimit:     10 * 1024 * 1024 * 1024, // 10GB
		ChunkSize:         10 * 1024 * 1024,        // 10MB
		BlobSizeLimit:     Size(10 * 1024 * 1024 * 1024), // 10GB
		ChunkSize:         Size(10 * 1024 * 1024),        // 10MB
		BackupInterval:    1 * time.Hour,
		FullScanInterval:  24 * time.Hour,
		MinTimeBetweenRun: 15 * time.Minute,
		IndexPath:         "/var/lib/vaultik/index.sqlite",
		IndexPrefix:       "index/",
		IndexPath:         filepath.Join(xdg.DataHome, appName, "index.sqlite"),
		CompressionLevel:  3,
	}

	if err := yaml.Unmarshal(data, cfg); err != nil {
	// Convert smartconfig data to YAML then unmarshal
	configData := sc.Data()
	yamlBytes, err := yaml.Marshal(configData)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal config data: %w", err)
	}

	if err := yaml.Unmarshal(yamlBytes, cfg); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Expand tilde in all path fields
	cfg.IndexPath = expandTilde(cfg.IndexPath)
	cfg.StorageURL = expandTildeInURL(cfg.StorageURL)
	for i, dir := range cfg.SourceDirs {
		cfg.SourceDirs[i] = expandTilde(dir)
	}

	// Check for environment variable override for IndexPath
	if envIndexPath := os.Getenv("VAULTIK_INDEX_PATH"); envIndexPath != "" {
		cfg.IndexPath = envIndexPath
		cfg.IndexPath = expandTilde(envIndexPath)
	}

	// Get hostname if not set
@@ -97,7 +160,7 @@ func Load(path string) (*Config, error) {
		cfg.S3.Region = "us-east-1"
	}
	if cfg.S3.PartSize == 0 {
		cfg.S3.PartSize = 5 * 1024 * 1024 // 5MB
		cfg.S3.PartSize = Size(5 * 1024 * 1024) // 5MB
	}

	if err := cfg.Validate(); err != nil {
@@ -107,37 +170,34 @@ func Load(path string) (*Config, error) {
	return cfg, nil
}

// Validate checks if the configuration is valid
// Validate checks if the configuration is valid and complete.
// It ensures all required fields are present and have valid values:
//   - At least one age recipient must be specified
//   - At least one source directory must be configured
//   - Storage must be configured (either storage_url or s3.* fields)
//   - Chunk size must be at least 1MB
//   - Blob size limit must be at least the chunk size
//   - Compression level must be between 1 and 19
// Returns an error describing the first validation failure encountered.
func (c *Config) Validate() error {
	if c.AgeRecipient == "" {
		return fmt.Errorf("age_recipient is required")
	if len(c.AgeRecipients) == 0 {
		return fmt.Errorf("at least one age_recipient is required")
	}

	if len(c.SourceDirs) == 0 {
		return fmt.Errorf("at least one source directory is required")
	}

	if c.S3.Endpoint == "" {
		return fmt.Errorf("s3.endpoint is required")
	// Validate storage configuration
	if err := c.validateStorage(); err != nil {
		return err
	}

	if c.S3.Bucket == "" {
		return fmt.Errorf("s3.bucket is required")
	}

	if c.S3.AccessKeyID == "" {
		return fmt.Errorf("s3.access_key_id is required")
	}

	if c.S3.SecretAccessKey == "" {
		return fmt.Errorf("s3.secret_access_key is required")
	}

	if c.ChunkSize < 1024*1024 { // 1MB minimum
	if c.ChunkSize.Int64() < 1024*1024 { // 1MB minimum
		return fmt.Errorf("chunk_size must be at least 1MB")
	}

	if c.BlobSizeLimit < c.ChunkSize {
	if c.BlobSizeLimit.Int64() < c.ChunkSize.Int64() {
		return fmt.Errorf("blob_size_limit must be at least chunk_size")
	}

@@ -148,7 +208,52 @@ func (c *Config) Validate() error {
	return nil
}

// Module exports the config module for fx
// validateStorage validates storage configuration.
// If StorageURL is set, it takes precedence. S3 URLs require credentials.
// File URLs don't require any S3 configuration.
// If StorageURL is not set, legacy S3 configuration is required.
func (c *Config) validateStorage() error {
	if c.StorageURL != "" {
		// URL-based configuration
		if strings.HasPrefix(c.StorageURL, "file://") {
			// File storage doesn't need S3 credentials
			return nil
		}
		if strings.HasPrefix(c.StorageURL, "s3://") {
			// S3 storage needs credentials
			if c.S3.AccessKeyID == "" {
				return fmt.Errorf("s3.access_key_id is required for s3:// URLs")
			}
			if c.S3.SecretAccessKey == "" {
				return fmt.Errorf("s3.secret_access_key is required for s3:// URLs")
			}
			return nil
		}
		return fmt.Errorf("storage_url must start with s3:// or file://")
	}

	// Legacy S3 configuration
	if c.S3.Endpoint == "" {
		return fmt.Errorf("s3.endpoint is required (or set storage_url)")
	}

	if c.S3.Bucket == "" {
		return fmt.Errorf("s3.bucket is required (or set storage_url)")
	}

	if c.S3.AccessKeyID == "" {
		return fmt.Errorf("s3.access_key_id is required")
	}

	if c.S3.SecretAccessKey == "" {
		return fmt.Errorf("s3.secret_access_key is required")
	}

	return nil
}

// Module exports the config module for fx dependency injection.
// It provides the Config type to other modules in the application.
var Module = fx.Module("config",
	fx.Provide(New),
)

@@ -6,6 +6,12 @@ import (
	"testing"
)

const (
	TEST_SNEAK_AGE_PUBLIC_KEY        = "age1278m9q7dp3chsh2dcy82qk27v047zywyvtxwnj4cvt0z65jw6a7q5dqhfj"
	TEST_INTEGRATION_AGE_PUBLIC_KEY  = "age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"
	TEST_INTEGRATION_AGE_PRIVATE_KEY = "AGE-SECRET-KEY-19CR5YSFW59HM4TLD6GXVEDMZFTVVF7PPHKUT68TXSFPK7APHXA2QS2NJA5"
)

func TestMain(m *testing.M) {
	// Set up test environment
	testConfigPath := filepath.Join("..", "..", "test", "config.yaml")
@@ -32,8 +38,11 @@ func TestConfigLoad(t *testing.T) {
	}

	// Basic validation
	if cfg.AgeRecipient != "age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" {
		t.Errorf("Expected age recipient to be set, got '%s'", cfg.AgeRecipient)
	if len(cfg.AgeRecipients) != 2 {
		t.Errorf("Expected 2 age recipients, got %d", len(cfg.AgeRecipients))
	}
	if cfg.AgeRecipients[0] != TEST_SNEAK_AGE_PUBLIC_KEY {
		t.Errorf("Expected first age recipient to be %s, got '%s'", TEST_SNEAK_AGE_PUBLIC_KEY, cfg.AgeRecipients[0])
	}

	if len(cfg.SourceDirs) != 2 {
62 internal/config/size.go Normal file
@@ -0,0 +1,62 @@
package config

import (
	"fmt"

	"github.com/dustin/go-humanize"
)

// Size represents a byte size that can be specified in configuration files.
// It can unmarshal from both numeric values (interpreted as bytes) and
// human-readable strings like "10MB", "2.5GB", or "1TB".
type Size int64

// UnmarshalYAML implements yaml.Unmarshaler for Size, allowing it to be
// parsed from YAML configuration files. It accepts both numeric values
// (interpreted as bytes) and string values with units (e.g., "10MB").
func (s *Size) UnmarshalYAML(unmarshal func(interface{}) error) error {
	// Try to unmarshal as int64 first
	var intVal int64
	if err := unmarshal(&intVal); err == nil {
		*s = Size(intVal)
		return nil
	}

	// Try to unmarshal as string
	var strVal string
	if err := unmarshal(&strVal); err != nil {
		return fmt.Errorf("size must be a number or string")
	}

	// Parse the string using go-humanize
	bytes, err := humanize.ParseBytes(strVal)
	if err != nil {
		return fmt.Errorf("invalid size format: %w", err)
	}

	*s = Size(bytes)
	return nil
}

// Int64 returns the size as int64 bytes.
// This is useful when the size needs to be passed to APIs that expect
// a numeric byte count.
func (s Size) Int64() int64 {
	return int64(s)
}

// String returns the size as a human-readable string.
// For example, 1048576 bytes would be formatted as "1.0 MB".
// This implements the fmt.Stringer interface.
func (s Size) String() string {
	return humanize.Bytes(uint64(s))
}

// ParseSize parses a size string into a Size value
func ParseSize(s string) (Size, error) {
	bytes, err := humanize.ParseBytes(s)
	if err != nil {
		return 0, fmt.Errorf("invalid size format: %w", err)
	}
	return Size(bytes), nil
}
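To make the accepted size formats concrete, here is a stdlib-only sketch of the kind of parsing `humanize.ParseBytes` performs for the Size type. `parseSize` is a simplification written for this page, not the real go-humanize implementation: it handles decimal (SI) units only, while go-humanize also understands binary units such as "MiB".

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts a human-readable size string ("10MB", "2.5GB") or a
// raw byte count ("1024") into bytes. Longer unit suffixes are checked
// first so "KB" is not mistaken for a bare "B".
func parseSize(s string) (int64, error) {
	units := []struct {
		suffix string
		mult   float64
	}{
		{"TB", 1e12}, {"GB", 1e9}, {"MB", 1e6}, {"KB", 1e3}, {"B", 1},
	}
	t := strings.TrimSpace(strings.ToUpper(s))
	for _, u := range units {
		if strings.HasSuffix(t, u.suffix) {
			num := strings.TrimSpace(strings.TrimSuffix(t, u.suffix))
			f, err := strconv.ParseFloat(num, 64)
			if err != nil {
				return 0, fmt.Errorf("invalid size %q: %w", s, err)
			}
			return int64(f * u.mult), nil
		}
	}
	// No unit suffix: interpret the whole string as raw bytes.
	n, err := strconv.ParseInt(t, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid size %q: %w", s, err)
	}
	return n, nil
}

func main() {
	for _, s := range []string{"10MB", "2.5GB", "1024"} {
		n, _ := parseSize(s)
		fmt.Println(s, "=", n)
	}
}
```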
209 internal/crypto/encryption.go Normal file
@@ -0,0 +1,209 @@
package crypto

import (
	"bytes"
	"fmt"
	"io"
	"sync"

	"filippo.io/age"
	"go.uber.org/fx"
)

// Encryptor provides thread-safe encryption using the age encryption library.
// It supports encrypting data for multiple recipients simultaneously, allowing
// any of the corresponding private keys to decrypt the data. This is useful
// for backup scenarios where multiple parties should be able to decrypt the data.
type Encryptor struct {
	recipients []age.Recipient
	mu         sync.RWMutex
}

// NewEncryptor creates a new encryptor with the given age public keys.
// Each public key should be a valid age X25519 recipient string (e.g., "age1...").
// At least one recipient must be provided. Returns an error if any of the
// public keys are invalid or if no recipients are specified.
func NewEncryptor(publicKeys []string) (*Encryptor, error) {
	if len(publicKeys) == 0 {
		return nil, fmt.Errorf("at least one recipient is required")
	}

	recipients := make([]age.Recipient, 0, len(publicKeys))
	for _, key := range publicKeys {
		recipient, err := age.ParseX25519Recipient(key)
		if err != nil {
			return nil, fmt.Errorf("parsing age recipient %s: %w", key, err)
		}
		recipients = append(recipients, recipient)
	}

	return &Encryptor{
		recipients: recipients,
	}, nil
}

// Encrypt encrypts data using age encryption for all configured recipients.
// The encrypted data can be decrypted by any of the corresponding private keys.
// This method is suitable for small to medium amounts of data that fit in memory.
// For large data streams, use EncryptStream or EncryptWriter instead.
func (e *Encryptor) Encrypt(data []byte) ([]byte, error) {
	e.mu.RLock()
	recipients := e.recipients
	e.mu.RUnlock()

	var buf bytes.Buffer

	// Create encrypted writer for all recipients
	w, err := age.Encrypt(&buf, recipients...)
	if err != nil {
		return nil, fmt.Errorf("creating encrypted writer: %w", err)
	}

	// Write data
	if _, err := w.Write(data); err != nil {
		return nil, fmt.Errorf("writing encrypted data: %w", err)
	}

	// Close to flush
	if err := w.Close(); err != nil {
		return nil, fmt.Errorf("closing encrypted writer: %w", err)
	}

	return buf.Bytes(), nil
}

// EncryptStream encrypts data from reader to writer using age encryption.
// This method is suitable for encrypting large files or streams as it processes
// data in a streaming fashion without loading everything into memory.
// The encrypted data is written directly to the destination writer.
func (e *Encryptor) EncryptStream(dst io.Writer, src io.Reader) error {
	e.mu.RLock()
	recipients := e.recipients
	e.mu.RUnlock()

	// Create encrypted writer for all recipients
	w, err := age.Encrypt(dst, recipients...)
	if err != nil {
		return fmt.Errorf("creating encrypted writer: %w", err)
	}

	// Copy data
	if _, err := io.Copy(w, src); err != nil {
		return fmt.Errorf("copying encrypted data: %w", err)
	}

	// Close to flush
	if err := w.Close(); err != nil {
		return fmt.Errorf("closing encrypted writer: %w", err)
	}

	return nil
}

// EncryptWriter creates a writer that encrypts data written to it.
// All data written to the returned WriteCloser will be encrypted and written
// to the destination writer. The caller must call Close() on the returned
// writer to ensure all encrypted data is properly flushed and finalized.
// This is useful for integrating encryption into existing writer-based pipelines.
func (e *Encryptor) EncryptWriter(dst io.Writer) (io.WriteCloser, error) {
	e.mu.RLock()
	recipients := e.recipients
	e.mu.RUnlock()

	// Create encrypted writer for all recipients
	w, err := age.Encrypt(dst, recipients...)
	if err != nil {
		return nil, fmt.Errorf("creating encrypted writer: %w", err)
	}

	return w, nil
}

// UpdateRecipients updates the recipients for future encryption operations.
// This method is thread-safe and can be called while other encryption operations
// are in progress. Existing encryption operations will continue with the old
// recipients. At least one recipient must be provided. Returns an error if any
// of the public keys are invalid or if no recipients are specified.
func (e *Encryptor) UpdateRecipients(publicKeys []string) error {
	if len(publicKeys) == 0 {
		return fmt.Errorf("at least one recipient is required")
	}

	recipients := make([]age.Recipient, 0, len(publicKeys))
	for _, key := range publicKeys {
		recipient, err := age.ParseX25519Recipient(key)
		if err != nil {
			return fmt.Errorf("parsing age recipient %s: %w", key, err)
		}
		recipients = append(recipients, recipient)
	}

	e.mu.Lock()
	e.recipients = recipients
	e.mu.Unlock()

	return nil
}

// Decryptor provides thread-safe decryption using the age encryption library.
// It uses a private key to decrypt data that was encrypted for the corresponding
// public key.
type Decryptor struct {
	identity age.Identity
	mu       sync.RWMutex
}

// NewDecryptor creates a new decryptor with the given age private key.
// The private key should be a valid age X25519 identity string.
// Returns an error if the private key is invalid.
func NewDecryptor(privateKey string) (*Decryptor, error) {
	identity, err := age.ParseX25519Identity(privateKey)
	if err != nil {
		return nil, fmt.Errorf("parsing age identity: %w", err)
	}

	return &Decryptor{
		identity: identity,
	}, nil
}

// Decrypt decrypts data using age decryption.
// This method is suitable for small to medium amounts of data that fit in memory.
// For large data streams, use DecryptStream instead.
func (d *Decryptor) Decrypt(data []byte) ([]byte, error) {
	d.mu.RLock()
	identity := d.identity
	d.mu.RUnlock()

	r, err := age.Decrypt(bytes.NewReader(data), identity)
	if err != nil {
		return nil, fmt.Errorf("creating decrypted reader: %w", err)
	}

	decrypted, err := io.ReadAll(r)
	if err != nil {
		return nil, fmt.Errorf("reading decrypted data: %w", err)
	}

	return decrypted, nil
}

// DecryptStream returns a reader that decrypts data from the provided reader.
// This method is suitable for decrypting large files or streams as it processes
// data in a streaming fashion without loading everything into memory.
// The caller should close the input reader when done.
func (d *Decryptor) DecryptStream(src io.Reader) (io.Reader, error) {
	d.mu.RLock()
	identity := d.identity
	d.mu.RUnlock()

	r, err := age.Decrypt(src, identity)
	if err != nil {
		return nil, fmt.Errorf("creating decrypted reader: %w", err)
	}

	return r, nil
}

// Module exports the crypto module for fx dependency injection.
var Module = fx.Module("crypto")
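The locking discipline in Encryptor (snapshot the recipient slice under RLock, swap the whole slice under Lock) can be illustrated with a minimal stdlib-only sketch. `holder` and its plain-string "recipients" are hypothetical stand-ins for the `[]age.Recipient` field; the point is that in-flight operations keep the snapshot they took, so UpdateRecipients never affects them:

```go
package main

import (
	"fmt"
	"sync"
)

// holder mirrors the pattern Encryptor uses: readers take a short RLock
// to snapshot the slice, writers replace the whole slice under a full
// Lock. Because the slice is swapped rather than mutated in place, a
// snapshot taken before an update remains valid and unchanged.
type holder struct {
	mu         sync.RWMutex
	recipients []string
}

func (h *holder) snapshot() []string {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return h.recipients
}

func (h *holder) update(r []string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.recipients = r
}

func main() {
	h := &holder{recipients: []string{"age1aaa"}}
	old := h.snapshot()           // an in-flight encryption keeps this view
	h.update([]string{"age1bbb"}) // a concurrent UpdateRecipients-style swap
	fmt.Println(old[0], h.snapshot()[0])
}
```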
157 internal/crypto/encryption_test.go Normal file
@@ -0,0 +1,157 @@
package crypto

import (
	"bytes"
	"testing"

	"filippo.io/age"
)

func TestEncryptor(t *testing.T) {
	// Generate a test key pair
	identity, err := age.GenerateX25519Identity()
	if err != nil {
		t.Fatalf("failed to generate identity: %v", err)
	}

	publicKey := identity.Recipient().String()

	// Create encryptor
	enc, err := NewEncryptor([]string{publicKey})
	if err != nil {
		t.Fatalf("failed to create encryptor: %v", err)
	}

	// Test data
	plaintext := []byte("Hello, World! This is a test message.")

	// Encrypt
	ciphertext, err := enc.Encrypt(plaintext)
	if err != nil {
		t.Fatalf("failed to encrypt: %v", err)
	}

	// Verify it's actually encrypted (should be larger and different)
	if bytes.Equal(plaintext, ciphertext) {
		t.Error("ciphertext equals plaintext")
	}

	// Decrypt to verify
	r, err := age.Decrypt(bytes.NewReader(ciphertext), identity)
	if err != nil {
		t.Fatalf("failed to decrypt: %v", err)
	}

	var decrypted bytes.Buffer
	if _, err := decrypted.ReadFrom(r); err != nil {
		t.Fatalf("failed to read decrypted data: %v", err)
	}

	if !bytes.Equal(plaintext, decrypted.Bytes()) {
		t.Error("decrypted data doesn't match original")
	}
}

func TestEncryptorMultipleRecipients(t *testing.T) {
	// Generate three test key pairs
	identity1, err := age.GenerateX25519Identity()
	if err != nil {
		t.Fatalf("failed to generate identity1: %v", err)
	}
	identity2, err := age.GenerateX25519Identity()
	if err != nil {
		t.Fatalf("failed to generate identity2: %v", err)
	}
	identity3, err := age.GenerateX25519Identity()
	if err != nil {
		t.Fatalf("failed to generate identity3: %v", err)
	}

	publicKeys := []string{
		identity1.Recipient().String(),
		identity2.Recipient().String(),
		identity3.Recipient().String(),
	}

	// Create encryptor with multiple recipients
	enc, err := NewEncryptor(publicKeys)
	if err != nil {
		t.Fatalf("failed to create encryptor: %v", err)
	}

	// Test data
	plaintext := []byte("Secret message for multiple recipients")

	// Encrypt
	ciphertext, err := enc.Encrypt(plaintext)
	if err != nil {
		t.Fatalf("failed to encrypt: %v", err)
	}

	// Verify each recipient can decrypt
	identities := []age.Identity{identity1, identity2, identity3}
	for i, identity := range identities {
		r, err := age.Decrypt(bytes.NewReader(ciphertext), identity)
		if err != nil {
			t.Fatalf("recipient %d failed to decrypt: %v", i+1, err)
		}

		var decrypted bytes.Buffer
		if _, err := decrypted.ReadFrom(r); err != nil {
			t.Fatalf("recipient %d failed to read decrypted data: %v", i+1, err)
		}

		if !bytes.Equal(plaintext, decrypted.Bytes()) {
			t.Errorf("recipient %d: decrypted data doesn't match original", i+1)
		}
	}
}

func TestEncryptorUpdateRecipients(t *testing.T) {
	// Generate two identities
	identity1, _ := age.GenerateX25519Identity()
	identity2, _ := age.GenerateX25519Identity()

	publicKey1 := identity1.Recipient().String()
	publicKey2 := identity2.Recipient().String()

	// Create encryptor with first key
	enc, err := NewEncryptor([]string{publicKey1})
	if err != nil {
		t.Fatalf("failed to create encryptor: %v", err)
	}

	// Encrypt with first key
	plaintext := []byte("test data")
	ciphertext1, err := enc.Encrypt(plaintext)
	if err != nil {
		t.Fatalf("failed to encrypt: %v", err)
	}

	// Update to second key
	if err := enc.UpdateRecipients([]string{publicKey2}); err != nil {
		t.Fatalf("failed to update recipients: %v", err)
	}

	// Encrypt with second key
	ciphertext2, err := enc.Encrypt(plaintext)
	if err != nil {
		t.Fatalf("failed to encrypt: %v", err)
	}

	// First ciphertext should only decrypt with first identity
	if _, err := age.Decrypt(bytes.NewReader(ciphertext1), identity1); err != nil {
		t.Error("failed to decrypt with identity1")
	}
	if _, err := age.Decrypt(bytes.NewReader(ciphertext1), identity2); err == nil {
		t.Error("should not decrypt with identity2")
	}

	// Second ciphertext should only decrypt with second identity
	if _, err := age.Decrypt(bytes.NewReader(ciphertext2), identity2); err != nil {
		t.Error("failed to decrypt with identity2")
	}
	if _, err := age.Decrypt(bytes.NewReader(ciphertext2), identity1); err == nil {
		t.Error("should not decrypt with identity1")
	}
}
@@ -16,15 +16,15 @@ func NewBlobChunkRepository(db *DB) *BlobChunkRepository {

func (r *BlobChunkRepository) Create(ctx context.Context, tx *sql.Tx, bc *BlobChunk) error {
	query := `
		INSERT INTO blob_chunks (blob_hash, chunk_hash, offset, length)
		INSERT INTO blob_chunks (blob_id, chunk_hash, offset, length)
		VALUES (?, ?, ?, ?)
	`

	var err error
	if tx != nil {
		_, err = tx.ExecContext(ctx, query, bc.BlobHash, bc.ChunkHash, bc.Offset, bc.Length)
		_, err = tx.ExecContext(ctx, query, bc.BlobID, bc.ChunkHash, bc.Offset, bc.Length)
	} else {
		_, err = r.db.ExecWithLock(ctx, query, bc.BlobHash, bc.ChunkHash, bc.Offset, bc.Length)
		_, err = r.db.ExecWithLog(ctx, query, bc.BlobID, bc.ChunkHash, bc.Offset, bc.Length)
	}

	if err != nil {
@@ -34,15 +34,15 @@ func (r *BlobChunkRepository) Create(ctx context.Context, tx *sql.Tx, bc *BlobCh
	return nil
}

func (r *BlobChunkRepository) GetByBlobHash(ctx context.Context, blobHash string) ([]*BlobChunk, error) {
func (r *BlobChunkRepository) GetByBlobID(ctx context.Context, blobID string) ([]*BlobChunk, error) {
	query := `
		SELECT blob_hash, chunk_hash, offset, length
		SELECT blob_id, chunk_hash, offset, length
		FROM blob_chunks
		WHERE blob_hash = ?
		WHERE blob_id = ?
		ORDER BY offset
	`

	rows, err := r.db.conn.QueryContext(ctx, query, blobHash)
	rows, err := r.db.conn.QueryContext(ctx, query, blobID)
	if err != nil {
		return nil, fmt.Errorf("querying blob chunks: %w", err)
	}
@@ -51,7 +51,7 @@ func (r *BlobChunkRepository) GetByBlobHash(ctx context.Context, blobHash string
	var blobChunks []*BlobChunk
	for rows.Next() {
		var bc BlobChunk
		err := rows.Scan(&bc.BlobHash, &bc.ChunkHash, &bc.Offset, &bc.Length)
		err := rows.Scan(&bc.BlobID, &bc.ChunkHash, &bc.Offset, &bc.Length)
		if err != nil {
			return nil, fmt.Errorf("scanning blob chunk: %w", err)
		}
@@ -63,26 +63,90 @@ func (r *BlobChunkRepository) GetByBlobHash(ctx context.Context, blobHash string

func (r *BlobChunkRepository) GetByChunkHash(ctx context.Context, chunkHash string) (*BlobChunk, error) {
	query := `
		SELECT blob_hash, chunk_hash, offset, length
		SELECT blob_id, chunk_hash, offset, length
		FROM blob_chunks
		WHERE chunk_hash = ?
		LIMIT 1
	`

	LogSQL("GetByChunkHash", query, chunkHash)
	var bc BlobChunk
	err := r.db.conn.QueryRowContext(ctx, query, chunkHash).Scan(
		&bc.BlobHash,
		&bc.BlobID,
		&bc.ChunkHash,
		&bc.Offset,
		&bc.Length,
	)

	if err == sql.ErrNoRows {
		LogSQL("GetByChunkHash", "No rows found", chunkHash)
		return nil, nil
	}
	if err != nil {
		LogSQL("GetByChunkHash", "Error", chunkHash, err)
		return nil, fmt.Errorf("querying blob chunk: %w", err)
	}

	LogSQL("GetByChunkHash", "Found blob", chunkHash, "blob", bc.BlobID)
	return &bc, nil
}

// GetByChunkHashTx retrieves a blob chunk within a transaction
func (r *BlobChunkRepository) GetByChunkHashTx(ctx context.Context, tx *sql.Tx, chunkHash string) (*BlobChunk, error) {
	query := `
		SELECT blob_id, chunk_hash, offset, length
		FROM blob_chunks
		WHERE chunk_hash = ?
		LIMIT 1
	`

	LogSQL("GetByChunkHashTx", query, chunkHash)
	var bc BlobChunk
	err := tx.QueryRowContext(ctx, query, chunkHash).Scan(
		&bc.BlobID,
		&bc.ChunkHash,
		&bc.Offset,
		&bc.Length,
	)

	if err == sql.ErrNoRows {
		LogSQL("GetByChunkHashTx", "No rows found", chunkHash)
		return nil, nil
	}
	if err != nil {
		LogSQL("GetByChunkHashTx", "Error", chunkHash, err)
		return nil, fmt.Errorf("querying blob chunk: %w", err)
	}

	LogSQL("GetByChunkHashTx", "Found blob", chunkHash, "blob", bc.BlobID)
	return &bc, nil
}

// DeleteOrphaned deletes blob_chunks entries where either the blob or chunk no longer exists
func (r *BlobChunkRepository) DeleteOrphaned(ctx context.Context) error {
	// Delete blob_chunks where the blob doesn't exist
	query1 := `
		DELETE FROM blob_chunks
		WHERE NOT EXISTS (
			SELECT 1 FROM blobs
			WHERE blobs.id = blob_chunks.blob_id
		)
	`
	if _, err := r.db.ExecWithLog(ctx, query1); err != nil {
		return fmt.Errorf("deleting blob_chunks with missing blobs: %w", err)
	}

	// Delete blob_chunks where the chunk doesn't exist
	query2 := `
		DELETE FROM blob_chunks
		WHERE NOT EXISTS (
			SELECT 1 FROM chunks
			WHERE chunks.chunk_hash = blob_chunks.chunk_hash
		)
	`
	if _, err := r.db.ExecWithLog(ctx, query2); err != nil {
		return fmt.Errorf("deleting blob_chunks with missing chunks: %w", err)
	}

	return nil
}

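The two NOT EXISTS deletes in DeleteOrphaned express a single predicate: a blob_chunks row survives only if both its blob and its chunk still exist. A minimal in-memory sketch of that predicate follows; `BlobChunkRow` and `pruneOrphans` are illustrative names invented for this example, not part of the repository package:

```go
package main

import "fmt"

// BlobChunkRow mirrors one blob_chunks row for illustration purposes.
type BlobChunkRow struct {
	BlobID    string
	ChunkHash string
}

// pruneOrphans keeps only rows whose blob and chunk both still exist.
// This is the same condition the SQL implements as two NOT EXISTS
// deletes against the blobs and chunks tables.
func pruneOrphans(rows []BlobChunkRow, blobs, chunks map[string]bool) []BlobChunkRow {
	var kept []BlobChunkRow
	for _, r := range rows {
		if blobs[r.BlobID] && chunks[r.ChunkHash] {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	blobs := map[string]bool{"blob-1": true}
	chunks := map[string]bool{"chunk-a": true}
	rows := []BlobChunkRow{
		{"blob-1", "chunk-a"},    // both referents exist: kept
		{"blob-gone", "chunk-a"}, // blob was deleted: pruned
		{"blob-1", "chunk-gone"}, // chunk was deleted: pruned
	}
	fmt.Println(len(pruneOrphans(rows, blobs, chunks))) // prints 1
}
```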
@@ -2,7 +2,9 @@ package database

import (
    "context"
    "strings"
    "testing"
    "time"
)

func TestBlobChunkRepository(t *testing.T) {
@@ -10,78 +12,111 @@ func TestBlobChunkRepository(t *testing.T) {
    defer cleanup()

    ctx := context.Background()
    repo := NewBlobChunkRepository(db)
    repos := NewRepositories(db)

    // Create blob first
    blob := &Blob{
        ID: "blob1-uuid",
        Hash: "blob1-hash",
        CreatedTS: time.Now(),
    }
    err := repos.Blobs.Create(ctx, nil, blob)
    if err != nil {
        t.Fatalf("failed to create blob: %v", err)
    }

    // Create chunks
    chunks := []string{"chunk1", "chunk2", "chunk3"}
    for _, chunkHash := range chunks {
        chunk := &Chunk{
            ChunkHash: chunkHash,
            Size: 1024,
        }
        err = repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatalf("failed to create chunk %s: %v", chunkHash, err)
        }
    }

    // Test Create
    bc1 := &BlobChunk{
        BlobHash: "blob1",
        BlobID: blob.ID,
        ChunkHash: "chunk1",
        Offset: 0,
        Length: 1024,
    }

    err := repo.Create(ctx, nil, bc1)
    err = repos.BlobChunks.Create(ctx, nil, bc1)
    if err != nil {
        t.Fatalf("failed to create blob chunk: %v", err)
    }

    // Add more chunks to the same blob
    bc2 := &BlobChunk{
        BlobHash: "blob1",
        BlobID: blob.ID,
        ChunkHash: "chunk2",
        Offset: 1024,
        Length: 2048,
    }
    err = repo.Create(ctx, nil, bc2)
    err = repos.BlobChunks.Create(ctx, nil, bc2)
    if err != nil {
        t.Fatalf("failed to create second blob chunk: %v", err)
    }

    bc3 := &BlobChunk{
        BlobHash: "blob1",
        BlobID: blob.ID,
        ChunkHash: "chunk3",
        Offset: 3072,
        Length: 512,
    }
    err = repo.Create(ctx, nil, bc3)
    err = repos.BlobChunks.Create(ctx, nil, bc3)
    if err != nil {
        t.Fatalf("failed to create third blob chunk: %v", err)
    }

    // Test GetByBlobHash
    chunks, err := repo.GetByBlobHash(ctx, "blob1")
    // Test GetByBlobID
    blobChunks, err := repos.BlobChunks.GetByBlobID(ctx, blob.ID)
    if err != nil {
        t.Fatalf("failed to get blob chunks: %v", err)
    }
    if len(chunks) != 3 {
        t.Errorf("expected 3 chunks, got %d", len(chunks))
    if len(blobChunks) != 3 {
        t.Errorf("expected 3 chunks, got %d", len(blobChunks))
    }

    // Verify order by offset
    expectedOffsets := []int64{0, 1024, 3072}
    for i, chunk := range chunks {
        if chunk.Offset != expectedOffsets[i] {
            t.Errorf("wrong chunk order: expected offset %d, got %d", expectedOffsets[i], chunk.Offset)
    for i, bc := range blobChunks {
        if bc.Offset != expectedOffsets[i] {
            t.Errorf("wrong chunk order: expected offset %d, got %d", expectedOffsets[i], bc.Offset)
        }
    }

    // Test GetByChunkHash
    bc, err := repo.GetByChunkHash(ctx, "chunk2")
    bc, err := repos.BlobChunks.GetByChunkHash(ctx, "chunk2")
    if err != nil {
        t.Fatalf("failed to get blob chunk by chunk hash: %v", err)
    }
    if bc == nil {
        t.Fatal("expected blob chunk, got nil")
    }
    if bc.BlobHash != "blob1" {
        t.Errorf("wrong blob hash: expected blob1, got %s", bc.BlobHash)
    if bc.BlobID != blob.ID {
        t.Errorf("wrong blob ID: expected %s, got %s", blob.ID, bc.BlobID)
    }
    if bc.Offset != 1024 {
        t.Errorf("wrong offset: expected 1024, got %d", bc.Offset)
    }

    // Test duplicate insert (should fail due to primary key constraint)
    err = repos.BlobChunks.Create(ctx, nil, bc1)
    if err == nil {
        t.Fatal("duplicate blob_chunk insert should fail due to primary key constraint")
    }
    if !strings.Contains(err.Error(), "UNIQUE") && !strings.Contains(err.Error(), "constraint") {
        t.Fatalf("expected constraint error, got: %v", err)
    }

    // Test non-existent chunk
    bc, err = repo.GetByChunkHash(ctx, "nonexistent")
    bc, err = repos.BlobChunks.GetByChunkHash(ctx, "nonexistent")
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
@@ -95,26 +130,60 @@ func TestBlobChunkRepositoryMultipleBlobs(t *testing.T) {
    defer cleanup()

    ctx := context.Background()
    repo := NewBlobChunkRepository(db)
    repos := NewRepositories(db)

    // Create blobs
    blob1 := &Blob{
        ID: "blob1-uuid",
        Hash: "blob1-hash",
        CreatedTS: time.Now(),
    }
    blob2 := &Blob{
        ID: "blob2-uuid",
        Hash: "blob2-hash",
        CreatedTS: time.Now(),
    }

    err := repos.Blobs.Create(ctx, nil, blob1)
    if err != nil {
        t.Fatalf("failed to create blob1: %v", err)
    }
    err = repos.Blobs.Create(ctx, nil, blob2)
    if err != nil {
        t.Fatalf("failed to create blob2: %v", err)
    }

    // Create chunks
    chunkHashes := []string{"chunk1", "chunk2", "chunk3"}
    for _, chunkHash := range chunkHashes {
        chunk := &Chunk{
            ChunkHash: chunkHash,
            Size: 1024,
        }
        err = repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatalf("failed to create chunk %s: %v", chunkHash, err)
        }
    }

    // Create chunks across multiple blobs
    // Some chunks are shared between blobs (deduplication scenario)
    blobChunks := []BlobChunk{
        {BlobHash: "blob1", ChunkHash: "chunk1", Offset: 0, Length: 1024},
        {BlobHash: "blob1", ChunkHash: "chunk2", Offset: 1024, Length: 1024},
        {BlobHash: "blob2", ChunkHash: "chunk2", Offset: 0, Length: 1024}, // chunk2 is shared
        {BlobHash: "blob2", ChunkHash: "chunk3", Offset: 1024, Length: 1024},
        {BlobID: blob1.ID, ChunkHash: "chunk1", Offset: 0, Length: 1024},
        {BlobID: blob1.ID, ChunkHash: "chunk2", Offset: 1024, Length: 1024},
        {BlobID: blob2.ID, ChunkHash: "chunk2", Offset: 0, Length: 1024}, // chunk2 is shared
        {BlobID: blob2.ID, ChunkHash: "chunk3", Offset: 1024, Length: 1024},
    }

    for _, bc := range blobChunks {
        err := repo.Create(ctx, nil, &bc)
        err := repos.BlobChunks.Create(ctx, nil, &bc)
        if err != nil {
            t.Fatalf("failed to create blob chunk: %v", err)
        }
    }

    // Verify blob1 chunks
    chunks, err := repo.GetByBlobHash(ctx, "blob1")
    chunks, err := repos.BlobChunks.GetByBlobID(ctx, blob1.ID)
    if err != nil {
        t.Fatalf("failed to get blob1 chunks: %v", err)
    }
@@ -123,7 +192,7 @@ func TestBlobChunkRepositoryMultipleBlobs(t *testing.T) {
    }

    // Verify blob2 chunks
    chunks, err = repo.GetByBlobHash(ctx, "blob2")
    chunks, err = repos.BlobChunks.GetByBlobID(ctx, blob2.ID)
    if err != nil {
        t.Fatalf("failed to get blob2 chunks: %v", err)
    }
@@ -132,7 +201,7 @@ func TestBlobChunkRepositoryMultipleBlobs(t *testing.T) {
    }

    // Verify shared chunk
    bc, err := repo.GetByChunkHash(ctx, "chunk2")
    bc, err := repos.BlobChunks.GetByChunkHash(ctx, "chunk2")
    if err != nil {
        t.Fatalf("failed to get shared chunk: %v", err)
    }
@@ -140,7 +209,7 @@ func TestBlobChunkRepositoryMultipleBlobs(t *testing.T) {
        t.Fatal("expected shared chunk, got nil")
    }
    // GetByChunkHash returns first match, should be blob1
    if bc.BlobHash != "blob1" {
        t.Errorf("expected blob1 for shared chunk, got %s", bc.BlobHash)
    if bc.BlobID != blob1.ID {
        t.Errorf("expected %s for shared chunk, got %s", blob1.ID, bc.BlobID)
    }
}

@@ -5,6 +5,8 @@ import (
    "database/sql"
    "fmt"
    "time"

    "git.eeqj.de/sneak/vaultik/internal/log"
)

type BlobRepository struct {
@@ -17,15 +19,27 @@ func NewBlobRepository(db *DB) *BlobRepository {

func (r *BlobRepository) Create(ctx context.Context, tx *sql.Tx, blob *Blob) error {
    query := `
        INSERT INTO blobs (blob_hash, created_ts)
        VALUES (?, ?)
        INSERT INTO blobs (id, blob_hash, created_ts, finished_ts, uncompressed_size, compressed_size, uploaded_ts)
        VALUES (?, ?, ?, ?, ?, ?, ?)
    `

    var finishedTS, uploadedTS *int64
    if blob.FinishedTS != nil {
        ts := blob.FinishedTS.Unix()
        finishedTS = &ts
    }
    if blob.UploadedTS != nil {
        ts := blob.UploadedTS.Unix()
        uploadedTS = &ts
    }

    var err error
    if tx != nil {
        _, err = tx.ExecContext(ctx, query, blob.BlobHash, blob.CreatedTS.Unix())
        _, err = tx.ExecContext(ctx, query, blob.ID, blob.Hash, blob.CreatedTS.Unix(),
            finishedTS, blob.UncompressedSize, blob.CompressedSize, uploadedTS)
    } else {
        _, err = r.db.ExecWithLock(ctx, query, blob.BlobHash, blob.CreatedTS.Unix())
        _, err = r.db.ExecWithLog(ctx, query, blob.ID, blob.Hash, blob.CreatedTS.Unix(),
            finishedTS, blob.UncompressedSize, blob.CompressedSize, uploadedTS)
    }

    if err != nil {
@@ -37,17 +51,23 @@ func (r *BlobRepository) Create(ctx context.Context, tx *sql.Tx, blob *Blob) err

func (r *BlobRepository) GetByHash(ctx context.Context, hash string) (*Blob, error) {
    query := `
        SELECT blob_hash, created_ts
        SELECT id, blob_hash, created_ts, finished_ts, uncompressed_size, compressed_size, uploaded_ts
        FROM blobs
        WHERE blob_hash = ?
    `

    var blob Blob
    var createdTSUnix int64
    var finishedTSUnix, uploadedTSUnix sql.NullInt64

    err := r.db.conn.QueryRowContext(ctx, query, hash).Scan(
        &blob.BlobHash,
        &blob.ID,
        &blob.Hash,
        &createdTSUnix,
        &finishedTSUnix,
        &blob.UncompressedSize,
        &blob.CompressedSize,
        &uploadedTSUnix,
    )

    if err == sql.ErrNoRows {
@@ -57,40 +77,124 @@ func (r *BlobRepository) GetByHash(ctx context.Context, hash string) (*Blob, err
        return nil, fmt.Errorf("querying blob: %w", err)
    }

    blob.CreatedTS = time.Unix(createdTSUnix, 0)
    blob.CreatedTS = time.Unix(createdTSUnix, 0).UTC()
    if finishedTSUnix.Valid {
        ts := time.Unix(finishedTSUnix.Int64, 0).UTC()
        blob.FinishedTS = &ts
    }
    if uploadedTSUnix.Valid {
        ts := time.Unix(uploadedTSUnix.Int64, 0).UTC()
        blob.UploadedTS = &ts
    }
    return &blob, nil
}

func (r *BlobRepository) List(ctx context.Context, limit, offset int) ([]*Blob, error) {
// GetByID retrieves a blob by its ID
func (r *BlobRepository) GetByID(ctx context.Context, id string) (*Blob, error) {
    query := `
        SELECT blob_hash, created_ts
        SELECT id, blob_hash, created_ts, finished_ts, uncompressed_size, compressed_size, uploaded_ts
        FROM blobs
        ORDER BY blob_hash
        LIMIT ? OFFSET ?
        WHERE id = ?
    `

    rows, err := r.db.conn.QueryContext(ctx, query, limit, offset)
    var blob Blob
    var createdTSUnix int64
    var finishedTSUnix, uploadedTSUnix sql.NullInt64

    err := r.db.conn.QueryRowContext(ctx, query, id).Scan(
        &blob.ID,
        &blob.Hash,
        &createdTSUnix,
        &finishedTSUnix,
        &blob.UncompressedSize,
        &blob.CompressedSize,
        &uploadedTSUnix,
    )

    if err == sql.ErrNoRows {
        return nil, nil
    }
    if err != nil {
        return nil, fmt.Errorf("querying blobs: %w", err)
    }
    defer CloseRows(rows)

    var blobs []*Blob
    for rows.Next() {
        var blob Blob
        var createdTSUnix int64

        err := rows.Scan(
            &blob.BlobHash,
            &createdTSUnix,
        )
        if err != nil {
            return nil, fmt.Errorf("scanning blob: %w", err)
        }

        blob.CreatedTS = time.Unix(createdTSUnix, 0)
        blobs = append(blobs, &blob)
        return nil, fmt.Errorf("querying blob: %w", err)
    }

    return blobs, rows.Err()
    blob.CreatedTS = time.Unix(createdTSUnix, 0).UTC()
    if finishedTSUnix.Valid {
        ts := time.Unix(finishedTSUnix.Int64, 0).UTC()
        blob.FinishedTS = &ts
    }
    if uploadedTSUnix.Valid {
        ts := time.Unix(uploadedTSUnix.Int64, 0).UTC()
        blob.UploadedTS = &ts
    }
    return &blob, nil
}

// UpdateFinished updates a blob when it's finalized
func (r *BlobRepository) UpdateFinished(ctx context.Context, tx *sql.Tx, id string, hash string, uncompressedSize, compressedSize int64) error {
    query := `
        UPDATE blobs
        SET blob_hash = ?, finished_ts = ?, uncompressed_size = ?, compressed_size = ?
        WHERE id = ?
    `

    now := time.Now().UTC().Unix()
    var err error
    if tx != nil {
        _, err = tx.ExecContext(ctx, query, hash, now, uncompressedSize, compressedSize, id)
    } else {
        _, err = r.db.ExecWithLog(ctx, query, hash, now, uncompressedSize, compressedSize, id)
    }

    if err != nil {
        return fmt.Errorf("updating blob: %w", err)
    }

    return nil
}

// UpdateUploaded marks a blob as uploaded
func (r *BlobRepository) UpdateUploaded(ctx context.Context, tx *sql.Tx, id string) error {
    query := `
        UPDATE blobs
        SET uploaded_ts = ?
        WHERE id = ?
    `

    now := time.Now().UTC().Unix()
    var err error
    if tx != nil {
        _, err = tx.ExecContext(ctx, query, now, id)
    } else {
        _, err = r.db.ExecWithLog(ctx, query, now, id)
    }

    if err != nil {
        return fmt.Errorf("marking blob as uploaded: %w", err)
    }

    return nil
}

// DeleteOrphaned deletes blobs that are not referenced by any snapshot
func (r *BlobRepository) DeleteOrphaned(ctx context.Context) error {
    query := `
        DELETE FROM blobs
        WHERE NOT EXISTS (
            SELECT 1 FROM snapshot_blobs
            WHERE snapshot_blobs.blob_id = blobs.id
        )
    `

    result, err := r.db.ExecWithLog(ctx, query)
    if err != nil {
        return fmt.Errorf("deleting orphaned blobs: %w", err)
    }

    rowsAffected, _ := result.RowsAffected()
    if rowsAffected > 0 {
        log.Debug("Deleted orphaned blobs", "count", rowsAffected)
    }

    return nil
}

@@ -15,7 +15,8 @@ func TestBlobRepository(t *testing.T) {

    // Test Create
    blob := &Blob{
        BlobHash: "blobhash123",
        ID: "test-blob-id-123",
        Hash: "blobhash123",
        CreatedTS: time.Now().Truncate(time.Second),
    }

@@ -25,23 +26,36 @@ func TestBlobRepository(t *testing.T) {
    }

    // Test GetByHash
    retrieved, err := repo.GetByHash(ctx, blob.BlobHash)
    retrieved, err := repo.GetByHash(ctx, blob.Hash)
    if err != nil {
        t.Fatalf("failed to get blob: %v", err)
    }
    if retrieved == nil {
        t.Fatal("expected blob, got nil")
    }
    if retrieved.BlobHash != blob.BlobHash {
        t.Errorf("blob hash mismatch: got %s, want %s", retrieved.BlobHash, blob.BlobHash)
    if retrieved.Hash != blob.Hash {
        t.Errorf("blob hash mismatch: got %s, want %s", retrieved.Hash, blob.Hash)
    }
    if !retrieved.CreatedTS.Equal(blob.CreatedTS) {
        t.Errorf("created timestamp mismatch: got %v, want %v", retrieved.CreatedTS, blob.CreatedTS)
    }

    // Test List
    // Test GetByID
    retrievedByID, err := repo.GetByID(ctx, blob.ID)
    if err != nil {
        t.Fatalf("failed to get blob by ID: %v", err)
    }
    if retrievedByID == nil {
        t.Fatal("expected blob, got nil")
    }
    if retrievedByID.ID != blob.ID {
        t.Errorf("blob ID mismatch: got %s, want %s", retrievedByID.ID, blob.ID)
    }

    // Test with second blob
    blob2 := &Blob{
        BlobHash: "blobhash456",
        ID: "test-blob-id-456",
        Hash: "blobhash456",
        CreatedTS: time.Now().Truncate(time.Second),
    }
    err = repo.Create(ctx, nil, blob2)
@@ -49,29 +63,45 @@ func TestBlobRepository(t *testing.T) {
        t.Fatalf("failed to create second blob: %v", err)
    }

    blobs, err := repo.List(ctx, 10, 0)
    // Test UpdateFinished
    now := time.Now()
    err = repo.UpdateFinished(ctx, nil, blob.ID, blob.Hash, 1000, 500)
    if err != nil {
        t.Fatalf("failed to list blobs: %v", err)
    }
    if len(blobs) != 2 {
        t.Errorf("expected 2 blobs, got %d", len(blobs))
        t.Fatalf("failed to update blob as finished: %v", err)
    }

    // Test pagination
    blobs, err = repo.List(ctx, 1, 0)
    // Verify update
    updated, err := repo.GetByID(ctx, blob.ID)
    if err != nil {
        t.Fatalf("failed to list blobs with limit: %v", err)
        t.Fatalf("failed to get updated blob: %v", err)
    }
    if len(blobs) != 1 {
        t.Errorf("expected 1 blob with limit, got %d", len(blobs))
    if updated.FinishedTS == nil {
        t.Fatal("expected finished timestamp to be set")
    }
    if updated.UncompressedSize != 1000 {
        t.Errorf("expected uncompressed size 1000, got %d", updated.UncompressedSize)
    }
    if updated.CompressedSize != 500 {
        t.Errorf("expected compressed size 500, got %d", updated.CompressedSize)
    }

    blobs, err = repo.List(ctx, 1, 1)
    // Test UpdateUploaded
    err = repo.UpdateUploaded(ctx, nil, blob.ID)
    if err != nil {
        t.Fatalf("failed to list blobs with offset: %v", err)
        t.Fatalf("failed to update blob as uploaded: %v", err)
    }
    if len(blobs) != 1 {
        t.Errorf("expected 1 blob with offset, got %d", len(blobs))

    // Verify upload update
    uploaded, err := repo.GetByID(ctx, blob.ID)
    if err != nil {
        t.Fatalf("failed to get uploaded blob: %v", err)
    }
    if uploaded.UploadedTS == nil {
        t.Fatal("expected uploaded timestamp to be set")
    }
    // Allow 1 second tolerance for timestamp comparison
    if uploaded.UploadedTS.Before(now.Add(-1 * time.Second)) {
        t.Error("uploaded timestamp should be around test time")
    }
}

@@ -83,7 +113,8 @@ func TestBlobRepositoryDuplicate(t *testing.T) {
    repo := NewBlobRepository(db)

    blob := &Blob{
        BlobHash: "duplicate_blob",
        ID: "duplicate-test-id",
        Hash: "duplicate_blob",
        CreatedTS: time.Now().Truncate(time.Second),
    }

123
internal/database/cascade_debug_test.go
Normal file
@@ -0,0 +1,123 @@
package database

import (
    "context"
    "fmt"
    "testing"
    "time"
)

// TestCascadeDeleteDebug tests cascade delete with debug output
func TestCascadeDeleteDebug(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Check if foreign keys are enabled
    var fkEnabled int
    err := db.conn.QueryRow("PRAGMA foreign_keys").Scan(&fkEnabled)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("Foreign keys enabled: %d", fkEnabled)

    // Create a file
    file := &File{
        Path: "/cascade-test.txt",
        MTime: time.Now().Truncate(time.Second),
        CTime: time.Now().Truncate(time.Second),
        Size: 1024,
        Mode: 0644,
        UID: 1000,
        GID: 1000,
    }
    err = repos.Files.Create(ctx, nil, file)
    if err != nil {
        t.Fatalf("failed to create file: %v", err)
    }
    t.Logf("Created file with ID: %s", file.ID)

    // Create chunks and file-chunk mappings
    for i := 0; i < 3; i++ {
        chunk := &Chunk{
            ChunkHash: fmt.Sprintf("cascade-chunk-%d", i),
            Size: 1024,
        }
        err = repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatalf("failed to create chunk: %v", err)
        }

        fc := &FileChunk{
            FileID: file.ID,
            Idx: i,
            ChunkHash: chunk.ChunkHash,
        }
        err = repos.FileChunks.Create(ctx, nil, fc)
        if err != nil {
            t.Fatalf("failed to create file chunk: %v", err)
        }
        t.Logf("Created file chunk mapping: file_id=%s, idx=%d, chunk=%s", fc.FileID, fc.Idx, fc.ChunkHash)
    }

    // Verify file chunks exist
    fileChunks, err := repos.FileChunks.GetByFileID(ctx, file.ID)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("File chunks before delete: %d", len(fileChunks))

    // Check the foreign key constraint
    var fkInfo string
    err = db.conn.QueryRow(`
        SELECT sql FROM sqlite_master
        WHERE type='table' AND name='file_chunks'
    `).Scan(&fkInfo)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("file_chunks table definition:\n%s", fkInfo)

    // Delete the file
    t.Log("Deleting file...")
    err = repos.Files.DeleteByID(ctx, nil, file.ID)
    if err != nil {
        t.Fatalf("failed to delete file: %v", err)
    }

    // Verify file is gone
    deletedFile, err := repos.Files.GetByID(ctx, file.ID)
    if err != nil {
        t.Fatal(err)
    }
    if deletedFile != nil {
        t.Error("file should have been deleted")
    } else {
        t.Log("File was successfully deleted")
    }

    // Check file chunks after delete
    fileChunks, err = repos.FileChunks.GetByFileID(ctx, file.ID)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("File chunks after delete: %d", len(fileChunks))

    // Manually check the database
    var count int
    err = db.conn.QueryRow("SELECT COUNT(*) FROM file_chunks WHERE file_id = ?", file.ID).Scan(&count)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("Manual count of file_chunks for deleted file: %d", count)

    if len(fileChunks) != 0 {
        t.Errorf("expected 0 file chunks after cascade delete, got %d", len(fileChunks))
        // List the remaining chunks
        for _, fc := range fileChunks {
            t.Logf("Remaining chunk: file_id=%s, idx=%d, chunk=%s", fc.FileID, fc.Idx, fc.ChunkHash)
        }
    }
}
@@ -16,16 +16,16 @@ func NewChunkFileRepository(db *DB) *ChunkFileRepository {

func (r *ChunkFileRepository) Create(ctx context.Context, tx *sql.Tx, cf *ChunkFile) error {
    query := `
        INSERT INTO chunk_files (chunk_hash, file_path, file_offset, length)
        INSERT INTO chunk_files (chunk_hash, file_id, file_offset, length)
        VALUES (?, ?, ?, ?)
        ON CONFLICT(chunk_hash, file_path) DO NOTHING
        ON CONFLICT(chunk_hash, file_id) DO NOTHING
    `

    var err error
    if tx != nil {
        _, err = tx.ExecContext(ctx, query, cf.ChunkHash, cf.FilePath, cf.FileOffset, cf.Length)
        _, err = tx.ExecContext(ctx, query, cf.ChunkHash, cf.FileID, cf.FileOffset, cf.Length)
    } else {
        _, err = r.db.ExecWithLock(ctx, query, cf.ChunkHash, cf.FilePath, cf.FileOffset, cf.Length)
        _, err = r.db.ExecWithLog(ctx, query, cf.ChunkHash, cf.FileID, cf.FileOffset, cf.Length)
    }

    if err != nil {
@@ -37,7 +37,7 @@ func (r *ChunkFileRepository) Create(ctx context.Context, tx *sql.Tx, cf *ChunkF

func (r *ChunkFileRepository) GetByChunkHash(ctx context.Context, chunkHash string) ([]*ChunkFile, error) {
    query := `
        SELECT chunk_hash, file_path, file_offset, length
        SELECT chunk_hash, file_id, file_offset, length
        FROM chunk_files
        WHERE chunk_hash = ?
    `
@@ -51,7 +51,7 @@ func (r *ChunkFileRepository) GetByChunkHash(ctx context.Context, chunkHash stri
    var chunkFiles []*ChunkFile
    for rows.Next() {
        var cf ChunkFile
        err := rows.Scan(&cf.ChunkHash, &cf.FilePath, &cf.FileOffset, &cf.Length)
        err := rows.Scan(&cf.ChunkHash, &cf.FileID, &cf.FileOffset, &cf.Length)
        if err != nil {
            return nil, fmt.Errorf("scanning chunk file: %w", err)
        }
@@ -63,9 +63,10 @@ func (r *ChunkFileRepository) GetByChunkHash(ctx context.Context, chunkHash stri

func (r *ChunkFileRepository) GetByFilePath(ctx context.Context, filePath string) ([]*ChunkFile, error) {
    query := `
        SELECT chunk_hash, file_path, file_offset, length
        FROM chunk_files
        WHERE file_path = ?
        SELECT cf.chunk_hash, cf.file_id, cf.file_offset, cf.length
        FROM chunk_files cf
        JOIN files f ON cf.file_id = f.id
        WHERE f.path = ?
    `

    rows, err := r.db.conn.QueryContext(ctx, query, filePath)
@@ -77,7 +78,7 @@ func (r *ChunkFileRepository) GetByFilePath(ctx context.Context, filePath string
    var chunkFiles []*ChunkFile
    for rows.Next() {
        var cf ChunkFile
        err := rows.Scan(&cf.ChunkHash, &cf.FilePath, &cf.FileOffset, &cf.Length)
        err := rows.Scan(&cf.ChunkHash, &cf.FileID, &cf.FileOffset, &cf.Length)
        if err != nil {
            return nil, fmt.Errorf("scanning chunk file: %w", err)
        }
@@ -86,3 +87,48 @@ func (r *ChunkFileRepository) GetByFilePath(ctx context.Context, filePath string

    return chunkFiles, rows.Err()
}

// GetByFileID retrieves chunk files by file ID
func (r *ChunkFileRepository) GetByFileID(ctx context.Context, fileID string) ([]*ChunkFile, error) {
    query := `
        SELECT chunk_hash, file_id, file_offset, length
        FROM chunk_files
        WHERE file_id = ?
    `

    rows, err := r.db.conn.QueryContext(ctx, query, fileID)
    if err != nil {
        return nil, fmt.Errorf("querying chunk files: %w", err)
    }
    defer CloseRows(rows)

    var chunkFiles []*ChunkFile
    for rows.Next() {
        var cf ChunkFile
        err := rows.Scan(&cf.ChunkHash, &cf.FileID, &cf.FileOffset, &cf.Length)
        if err != nil {
            return nil, fmt.Errorf("scanning chunk file: %w", err)
        }
        chunkFiles = append(chunkFiles, &cf)
    }

    return chunkFiles, rows.Err()
}

// DeleteByFileID deletes all chunk_files entries for a given file ID
func (r *ChunkFileRepository) DeleteByFileID(ctx context.Context, tx *sql.Tx, fileID string) error {
    query := `DELETE FROM chunk_files WHERE file_id = ?`

    var err error
    if tx != nil {
        _, err = tx.ExecContext(ctx, query, fileID)
    } else {
        _, err = r.db.ExecWithLog(ctx, query, fileID)
    }

    if err != nil {
        return fmt.Errorf("deleting chunk files: %w", err)
    }

    return nil
}

@@ -3,6 +3,7 @@ package database
import (
    "context"
    "testing"
    "time"
)

func TestChunkFileRepository(t *testing.T) {
@@ -11,16 +12,60 @@ func TestChunkFileRepository(t *testing.T) {

    ctx := context.Background()
    repo := NewChunkFileRepository(db)
    fileRepo := NewFileRepository(db)
    chunksRepo := NewChunkRepository(db)

    // Create test files first
    testTime := time.Now().Truncate(time.Second)
    file1 := &File{
        Path: "/file1.txt",
        MTime: testTime,
        CTime: testTime,
        Size: 1024,
        Mode: 0644,
        UID: 1000,
        GID: 1000,
        LinkTarget: "",
    }
    err := fileRepo.Create(ctx, nil, file1)
    if err != nil {
        t.Fatalf("failed to create file1: %v", err)
    }

    file2 := &File{
        Path: "/file2.txt",
        MTime: testTime,
        CTime: testTime,
        Size: 1024,
        Mode: 0644,
        UID: 1000,
        GID: 1000,
        LinkTarget: "",
    }
    err = fileRepo.Create(ctx, nil, file2)
    if err != nil {
        t.Fatalf("failed to create file2: %v", err)
    }

    // Create chunk first
    chunk := &Chunk{
        ChunkHash: "chunk1",
        Size: 1024,
    }
    err = chunksRepo.Create(ctx, nil, chunk)
    if err != nil {
        t.Fatalf("failed to create chunk: %v", err)
    }

    // Test Create
    cf1 := &ChunkFile{
        ChunkHash: "chunk1",
        FilePath: "/file1.txt",
        FileID: file1.ID,
        FileOffset: 0,
        Length: 1024,
    }

    err := repo.Create(ctx, nil, cf1)
    err = repo.Create(ctx, nil, cf1)
    if err != nil {
        t.Fatalf("failed to create chunk file: %v", err)
    }
@@ -28,7 +73,7 @@ func TestChunkFileRepository(t *testing.T) {
    // Add same chunk in different file (deduplication scenario)
    cf2 := &ChunkFile{
        ChunkHash: "chunk1",
        FilePath: "/file2.txt",
        FileID: file2.ID,
        FileOffset: 2048,
        Length: 1024,
    }
@@ -50,10 +95,10 @@ func TestChunkFileRepository(t *testing.T) {
    foundFile1 := false
    foundFile2 := false
    for _, cf := range chunkFiles {
        if cf.FilePath == "/file1.txt" && cf.FileOffset == 0 {
        if cf.FileID == file1.ID && cf.FileOffset == 0 {
            foundFile1 = true
        }
        if cf.FilePath == "/file2.txt" && cf.FileOffset == 2048 {
        if cf.FileID == file2.ID && cf.FileOffset == 2048 {
            foundFile2 = true
        }
    }
@@ -61,10 +106,10 @@ func TestChunkFileRepository(t *testing.T) {
        t.Error("not all expected files found")
    }

    // Test GetByFilePath
    chunkFiles, err = repo.GetByFilePath(ctx, "/file1.txt")
    // Test GetByFileID
    chunkFiles, err = repo.GetByFileID(ctx, file1.ID)
    if err != nil {
        t.Fatalf("failed to get chunks by file path: %v", err)
        t.Fatalf("failed to get chunks by file ID: %v", err)
    }
    if len(chunkFiles) != 1 {
        t.Errorf("expected 1 chunk for file, got %d", len(chunkFiles))
@@ -86,6 +131,37 @@ func TestChunkFileRepositoryComplexDeduplication(t *testing.T) {

    ctx := context.Background()
    repo := NewChunkFileRepository(db)
    fileRepo := NewFileRepository(db)
    chunksRepo := NewChunkRepository(db)

    // Create test files
    testTime := time.Now().Truncate(time.Second)
    file1 := &File{Path: "/file1.txt", MTime: testTime, CTime: testTime, Size: 3072, Mode: 0644, UID: 1000, GID: 1000}
    file2 := &File{Path: "/file2.txt", MTime: testTime, CTime: testTime, Size: 3072, Mode: 0644, UID: 1000, GID: 1000}
    file3 := &File{Path: "/file3.txt", MTime: testTime, CTime: testTime, Size: 2048, Mode: 0644, UID: 1000, GID: 1000}

    if err := fileRepo.Create(ctx, nil, file1); err != nil {
        t.Fatalf("failed to create file1: %v", err)
    }
    if err := fileRepo.Create(ctx, nil, file2); err != nil {
        t.Fatalf("failed to create file2: %v", err)
    }
    if err := fileRepo.Create(ctx, nil, file3); err != nil {
        t.Fatalf("failed to create file3: %v", err)
    }

    // Create chunks first
    chunks := []string{"chunk1", "chunk2", "chunk3", "chunk4"}
    for _, chunkHash := range chunks {
        chunk := &Chunk{
            ChunkHash: chunkHash,
            Size: 1024,
        }
        err := chunksRepo.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatalf("failed to create chunk %s: %v", chunkHash, err)
        }
    }

    // Simulate a scenario where multiple files share chunks
    // File1: chunk1, chunk2, chunk3
@@ -94,16 +170,16 @@ func TestChunkFileRepositoryComplexDeduplication(t *testing.T) {

    chunkFiles := []ChunkFile{
        // File1
        {ChunkHash: "chunk1", FilePath: "/file1.txt", FileOffset: 0, Length: 1024},
        {ChunkHash: "chunk2", FilePath: "/file1.txt", FileOffset: 1024, Length: 1024},
        {ChunkHash: "chunk3", FilePath: "/file1.txt", FileOffset: 2048, Length: 1024},
        {ChunkHash: "chunk1", FileID: file1.ID, FileOffset: 0, Length: 1024},
        {ChunkHash: "chunk2", FileID: file1.ID, FileOffset: 1024, Length: 1024},
        {ChunkHash: "chunk3", FileID: file1.ID, FileOffset: 2048, Length: 1024},
        // File2
        {ChunkHash: "chunk2", FilePath: "/file2.txt", FileOffset: 0, Length: 1024},
        {ChunkHash: "chunk3", FilePath: "/file2.txt", FileOffset: 1024, Length: 1024},
        {ChunkHash: "chunk4", FilePath: "/file2.txt", FileOffset: 2048, Length: 1024},
        {ChunkHash: "chunk2", FileID: file2.ID, FileOffset: 0, Length: 1024},
        {ChunkHash: "chunk3", FileID: file2.ID, FileOffset: 1024, Length: 1024},
        {ChunkHash: "chunk4", FileID: file2.ID, FileOffset: 2048, Length: 1024},
        // File3
        {ChunkHash: "chunk1", FilePath: "/file3.txt", FileOffset: 0, Length: 1024},
        {ChunkHash: "chunk4", FilePath: "/file3.txt", FileOffset: 1024, Length: 1024},
        {ChunkHash: "chunk1", FileID: file3.ID, FileOffset: 0, Length: 1024},
        {ChunkHash: "chunk4", FileID: file3.ID, FileOffset: 1024, Length: 1024},
    }

    for _, cf := range chunkFiles {
@@ -132,11 +208,11 @@ func TestChunkFileRepositoryComplexDeduplication(t *testing.T) {
    }

    // Test file2 chunks
    chunks, err := repo.GetByFilePath(ctx, "/file2.txt")
    file2Chunks, err := repo.GetByFileID(ctx, file2.ID)
    if err != nil {
        t.Fatalf("failed to get chunks for file2: %v", err)
    }
    if len(chunks) != 3 {
        t.Errorf("expected 3 chunks for file2, got %d", len(chunks))
    if len(file2Chunks) != 3 {
        t.Errorf("expected 3 chunks for file2, got %d", len(file2Chunks))
    }
}

@@ -4,6 +4,8 @@ import (
	"context"
	"database/sql"
	"fmt"
+
+	"git.eeqj.de/sneak/vaultik/internal/log"
)

type ChunkRepository struct {
@@ -16,16 +18,16 @@ func NewChunkRepository(db *DB) *ChunkRepository {

func (r *ChunkRepository) Create(ctx context.Context, tx *sql.Tx, chunk *Chunk) error {
	query := `
-		INSERT INTO chunks (chunk_hash, sha256, size)
-		VALUES (?, ?, ?)
+		INSERT INTO chunks (chunk_hash, size)
+		VALUES (?, ?)
		ON CONFLICT(chunk_hash) DO NOTHING
	`

	var err error
	if tx != nil {
-		_, err = tx.ExecContext(ctx, query, chunk.ChunkHash, chunk.SHA256, chunk.Size)
+		_, err = tx.ExecContext(ctx, query, chunk.ChunkHash, chunk.Size)
	} else {
-		_, err = r.db.ExecWithLock(ctx, query, chunk.ChunkHash, chunk.SHA256, chunk.Size)
+		_, err = r.db.ExecWithLog(ctx, query, chunk.ChunkHash, chunk.Size)
	}

	if err != nil {
@@ -37,7 +39,7 @@ func (r *ChunkRepository) Create(ctx context.Context, tx *sql.Tx, chunk *Chunk)

func (r *ChunkRepository) GetByHash(ctx context.Context, hash string) (*Chunk, error) {
	query := `
-		SELECT chunk_hash, sha256, size
+		SELECT chunk_hash, size
		FROM chunks
		WHERE chunk_hash = ?
	`
@@ -46,7 +48,6 @@ func (r *ChunkRepository) GetByHash(ctx context.Context, hash string) (*Chunk, e

	err := r.db.conn.QueryRowContext(ctx, query, hash).Scan(
		&chunk.ChunkHash,
-		&chunk.SHA256,
		&chunk.Size,
	)

@@ -66,7 +67,7 @@ func (r *ChunkRepository) GetByHashes(ctx context.Context, hashes []string) ([]*
	}

	query := `
-		SELECT chunk_hash, sha256, size
+		SELECT chunk_hash, size
		FROM chunks
		WHERE chunk_hash IN (`

@@ -92,7 +93,6 @@ func (r *ChunkRepository) GetByHashes(ctx context.Context, hashes []string) ([]*

		err := rows.Scan(
			&chunk.ChunkHash,
-			&chunk.SHA256,
			&chunk.Size,
		)
		if err != nil {
@@ -107,7 +107,7 @@ func (r *ChunkRepository) GetByHashes(ctx context.Context, hashes []string) ([]*

func (r *ChunkRepository) ListUnpacked(ctx context.Context, limit int) ([]*Chunk, error) {
	query := `
-		SELECT c.chunk_hash, c.sha256, c.size
+		SELECT c.chunk_hash, c.size
		FROM chunks c
		LEFT JOIN blob_chunks bc ON c.chunk_hash = bc.chunk_hash
		WHERE bc.chunk_hash IS NULL
@@ -127,7 +127,6 @@ func (r *ChunkRepository) ListUnpacked(ctx context.Context, limit int) ([]*Chunk

		err := rows.Scan(
			&chunk.ChunkHash,
-			&chunk.SHA256,
			&chunk.Size,
		)
		if err != nil {
@@ -139,3 +138,30 @@ func (r *ChunkRepository) ListUnpacked(ctx context.Context, limit int) ([]*Chunk

	return chunks, rows.Err()
}

// DeleteOrphaned deletes chunks that are not referenced by any file or blob
func (r *ChunkRepository) DeleteOrphaned(ctx context.Context) error {
	query := `
		DELETE FROM chunks
		WHERE NOT EXISTS (
			SELECT 1 FROM file_chunks
			WHERE file_chunks.chunk_hash = chunks.chunk_hash
		)
		AND NOT EXISTS (
			SELECT 1 FROM blob_chunks
			WHERE blob_chunks.chunk_hash = chunks.chunk_hash
		)
	`

	result, err := r.db.ExecWithLog(ctx, query)
	if err != nil {
		return fmt.Errorf("deleting orphaned chunks: %w", err)
	}

	rowsAffected, _ := result.RowsAffected()
	if rowsAffected > 0 {
		log.Debug("Deleted orphaned chunks", "count", rowsAffected)
	}

	return nil
}

37	internal/database/chunks_ext.go	Normal file
@@ -0,0 +1,37 @@
package database

import (
	"context"
	"fmt"
)

func (r *ChunkRepository) List(ctx context.Context) ([]*Chunk, error) {
	query := `
		SELECT chunk_hash, size
		FROM chunks
		ORDER BY chunk_hash
	`

	rows, err := r.db.conn.QueryContext(ctx, query)
	if err != nil {
		return nil, fmt.Errorf("querying chunks: %w", err)
	}
	defer CloseRows(rows)

	var chunks []*Chunk
	for rows.Next() {
		var chunk Chunk

		err := rows.Scan(
			&chunk.ChunkHash,
			&chunk.Size,
		)
		if err != nil {
			return nil, fmt.Errorf("scanning chunk: %w", err)
		}

		chunks = append(chunks, &chunk)
	}

	return chunks, rows.Err()
}

@@ -15,7 +15,6 @@ func TestChunkRepository(t *testing.T) {
	// Test Create
	chunk := &Chunk{
		ChunkHash: "chunkhash123",
-		SHA256:    "sha256hash123",
		Size:      4096,
	}

@@ -35,9 +34,6 @@ func TestChunkRepository(t *testing.T) {
	if retrieved.ChunkHash != chunk.ChunkHash {
		t.Errorf("chunk hash mismatch: got %s, want %s", retrieved.ChunkHash, chunk.ChunkHash)
	}
-	if retrieved.SHA256 != chunk.SHA256 {
-		t.Errorf("sha256 mismatch: got %s, want %s", retrieved.SHA256, chunk.SHA256)
-	}
	if retrieved.Size != chunk.Size {
		t.Errorf("size mismatch: got %d, want %d", retrieved.Size, chunk.Size)
	}
@@ -51,7 +47,6 @@ func TestChunkRepository(t *testing.T) {
	// Test GetByHashes
	chunk2 := &Chunk{
		ChunkHash: "chunkhash456",
-		SHA256:    "sha256hash456",
		Size:      8192,
	}
	err = repo.Create(ctx, nil, chunk2)

@@ -1,143 +1,290 @@
// Package database provides the local SQLite index for Vaultik backup operations.
// The database tracks files, chunks, and their associations with blobs.
//
// Blobs in Vaultik are the final storage units uploaded to S3. Each blob is a
// large (up to 10GB) file containing many compressed and encrypted chunks from
// multiple source files. Blobs are content-addressed, meaning their filename
// is derived from their SHA256 hash after compression and encryption.
//
// The database does not support migrations. If the schema changes, delete
// the local database and perform a full backup to recreate it.
package database

import (
	"context"
	"database/sql"
	_ "embed"
	"fmt"
-	"sync"
+	"os"
+	"strings"

	"git.eeqj.de/sneak/vaultik/internal/log"
	_ "modernc.org/sqlite"
)

//go:embed schema.sql
var schemaSQL string

// DB represents the Vaultik local index database connection.
// It uses SQLite to track file metadata, content-defined chunks, and blob associations.
// The database enables incremental backups by detecting changed files and
// supports deduplication by tracking which chunks are already stored in blobs.
-// Write operations are synchronized through a mutex to ensure thread safety.
type DB struct {
-	conn      *sql.DB
-	writeLock sync.Mutex
+	conn *sql.DB
+	path string
}

-// New creates a new database connection at the specified path.
-// It automatically handles database recovery, creates the schema if needed,
-// and configures SQLite with appropriate settings for performance and reliability.
-// The database uses WAL mode for better concurrency and sets a busy timeout
-// to handle concurrent access gracefully.
-//
-// If the database appears locked, it will attempt recovery by removing stale
-// lock files and switching temporarily to TRUNCATE journal mode.
-//
+// New creates a new database connection at the specified path.
+// It automatically handles recovery from stale locks, creates the schema if needed,
+// and configures SQLite with WAL mode for better concurrency.
+// The path parameter can be a file path for persistent storage or ":memory:"
+// for an in-memory database (useful for testing).
func New(ctx context.Context, path string) (*DB, error) {
-	conn, err := sql.Open("sqlite", path+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
-	if err != nil {
-		return nil, fmt.Errorf("opening database: %w", err)
+	log.Debug("Opening database connection", "path", path)
+
+	// First, try to recover from any stale locks
+	if err := recoverDatabase(ctx, path); err != nil {
+		log.Warn("Failed to recover database", "error", err)
	}

	// First attempt with standard WAL mode
	log.Debug("Attempting to open database with WAL mode", "path", path)
	conn, err := sql.Open(
		"sqlite",
		path+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=10000&_locking_mode=NORMAL&_foreign_keys=ON",
	)
	if err == nil {
		// Set connection pool settings
		// SQLite can handle multiple readers but only one writer at a time.
		// Setting MaxOpenConns to 1 ensures all writes are serialized through
		// a single connection, preventing SQLITE_BUSY errors.
		conn.SetMaxOpenConns(1)
		conn.SetMaxIdleConns(1)

		if err := conn.PingContext(ctx); err == nil {
			// Success on first try
			log.Debug("Database opened successfully with WAL mode", "path", path)

			// Enable foreign keys explicitly
			if _, err := conn.ExecContext(ctx, "PRAGMA foreign_keys = ON"); err != nil {
				log.Warn("Failed to enable foreign keys", "error", err)
			}

			db := &DB{conn: conn, path: path}
			if err := db.createSchema(ctx); err != nil {
				_ = conn.Close()
				return nil, fmt.Errorf("creating schema: %w", err)
			}
			return db, nil
		}
		log.Debug("Failed to ping database, closing connection", "path", path, "error", err)
		_ = conn.Close()
	}

	// If first attempt failed, try with TRUNCATE mode to clear any locks
	log.Info(
		"Database appears locked, attempting recovery with TRUNCATE mode",
		"path", path,
	)
	conn, err = sql.Open(
		"sqlite",
		path+"?_journal_mode=TRUNCATE&_synchronous=NORMAL&_busy_timeout=10000&_foreign_keys=ON",
	)
	if err != nil {
		return nil, fmt.Errorf("opening database in recovery mode: %w", err)
	}

	// Set connection pool settings
	// SQLite can handle multiple readers but only one writer at a time.
	// Setting MaxOpenConns to 1 ensures all writes are serialized through
	// a single connection, preventing SQLITE_BUSY errors.
	conn.SetMaxOpenConns(1)
	conn.SetMaxIdleConns(1)

	if err := conn.PingContext(ctx); err != nil {
-		if closeErr := conn.Close(); closeErr != nil {
-			Fatal("failed to close database connection: %v", closeErr)
-		}
-		return nil, fmt.Errorf("pinging database: %w", err)
+		log.Debug("Failed to ping database in recovery mode, closing", "path", path, "error", err)
+		_ = conn.Close()
+		return nil, fmt.Errorf(
+			"database still locked after recovery attempt: %w",
+			err,
+		)
	}

-	db := &DB{conn: conn}
+	log.Debug("Database opened in TRUNCATE mode", "path", path)

	// Switch back to WAL mode
	log.Debug("Switching database back to WAL mode", "path", path)
	if _, err := conn.ExecContext(ctx, "PRAGMA journal_mode=WAL"); err != nil {
		log.Warn("Failed to switch back to WAL mode", "path", path, "error", err)
	}

	// Ensure foreign keys are enabled
	if _, err := conn.ExecContext(ctx, "PRAGMA foreign_keys=ON"); err != nil {
		log.Warn("Failed to enable foreign keys", "path", path, "error", err)
	}

	db := &DB{conn: conn, path: path}
	if err := db.createSchema(ctx); err != nil {
-		if closeErr := conn.Close(); closeErr != nil {
-			Fatal("failed to close database connection: %v", closeErr)
-		}
+		_ = conn.Close()
		return nil, fmt.Errorf("creating schema: %w", err)
	}

	log.Debug("Database connection established successfully", "path", path)
	return db, nil
}

// Close closes the database connection.
// It ensures all pending operations are completed before closing.
// Returns an error if the database connection cannot be closed properly.
func (db *DB) Close() error {
	log.Debug("Closing database connection", "path", db.path)
	if err := db.conn.Close(); err != nil {
-		Fatal("failed to close database: %v", err)
+		log.Error("Failed to close database", "path", db.path, "error", err)
+		return fmt.Errorf("failed to close database: %w", err)
	}
	log.Debug("Database connection closed successfully", "path", db.path)
	return nil
}

// recoverDatabase attempts to recover a locked database
func recoverDatabase(ctx context.Context, path string) error {
	// Check if database file exists
	if _, err := os.Stat(path); os.IsNotExist(err) {
		// No database file, nothing to recover
		return nil
	}

	// Remove stale lock files
	// SQLite creates -wal and -shm files for WAL mode
	walPath := path + "-wal"
	shmPath := path + "-shm"
	journalPath := path + "-journal"

	log.Info("Attempting database recovery", "path", path)

	// Always remove lock files on startup to ensure clean state
	removed := false

	// Check for and remove journal file (from non-WAL mode)
	if _, err := os.Stat(journalPath); err == nil {
		log.Info("Found journal file, removing", "path", journalPath)
		if err := os.Remove(journalPath); err != nil {
			log.Warn("Failed to remove journal file", "error", err)
		} else {
			removed = true
		}
	}

	// Remove WAL file
	if _, err := os.Stat(walPath); err == nil {
		log.Info("Found WAL file, removing", "path", walPath)
		if err := os.Remove(walPath); err != nil {
			log.Warn("Failed to remove WAL file", "error", err)
		} else {
			removed = true
		}
	}

	// Remove SHM file
	if _, err := os.Stat(shmPath); err == nil {
		log.Info("Found shared memory file, removing", "path", shmPath)
		if err := os.Remove(shmPath); err != nil {
			log.Warn("Failed to remove shared memory file", "error", err)
		} else {
			removed = true
		}
	}

	if removed {
		log.Info("Database lock files removed")
	}

	return nil
}

// Conn returns the underlying *sql.DB connection.
// This should be used sparingly and primarily for read operations.
// For write operations, prefer using the ExecWithLog method.
func (db *DB) Conn() *sql.DB {
	return db.conn
}

-func (db *DB) BeginTx(ctx context.Context, opts *sql.TxOptions) (*sql.Tx, error) {
+// BeginTx starts a new database transaction with the given options.
+// The caller is responsible for committing or rolling back the transaction.
+// For write transactions, consider using the Repositories.WithTx method instead,
+// which handles locking and rollback automatically.
+func (db *DB) BeginTx(
+	ctx context.Context,
+	opts *sql.TxOptions,
+) (*sql.Tx, error) {
	return db.conn.BeginTx(ctx, opts)
}

-// LockForWrite acquires the write lock
-func (db *DB) LockForWrite() {
-	db.writeLock.Lock()
-}
+// Note: LockForWrite and UnlockWrite methods have been removed.
+// SQLite handles its own locking internally, so explicit locking is not needed.

-// UnlockWrite releases the write lock
-func (db *DB) UnlockWrite() {
-	db.writeLock.Unlock()
-}

-// ExecWithLock executes a write query with the write lock held
-func (db *DB) ExecWithLock(ctx context.Context, query string, args ...interface{}) (sql.Result, error) {
-	db.writeLock.Lock()
-	defer db.writeLock.Unlock()
+// ExecWithLog executes a write query with SQL logging.
+// SQLite handles its own locking internally, so we just pass through to ExecContext.
+// The query and args parameters follow the same format as sql.DB.ExecContext.
+func (db *DB) ExecWithLog(
+	ctx context.Context,
+	query string,
+	args ...interface{},
+) (sql.Result, error) {
+	LogSQL("Execute", query, args...)
	return db.conn.ExecContext(ctx, query, args...)
}

-// QueryRowWithLock executes a write query that returns a row with the write lock held
-func (db *DB) QueryRowWithLock(ctx context.Context, query string, args ...interface{}) *sql.Row {
-	db.writeLock.Lock()
-	defer db.writeLock.Unlock()
+// QueryRowWithLog executes a query that returns at most one row with SQL logging.
+// This is useful for queries that modify data and return values (e.g., INSERT ... RETURNING).
+// SQLite handles its own locking internally.
+// The query and args parameters follow the same format as sql.DB.QueryRowContext.
+func (db *DB) QueryRowWithLog(
+	ctx context.Context,
+	query string,
+	args ...interface{},
+) *sql.Row {
+	LogSQL("QueryRow", query, args...)
	return db.conn.QueryRowContext(ctx, query, args...)
}

func (db *DB) createSchema(ctx context.Context) error {
-	schema := `
-	CREATE TABLE IF NOT EXISTS files (
-		path TEXT PRIMARY KEY,
-		mtime INTEGER NOT NULL,
-		ctime INTEGER NOT NULL,
-		size INTEGER NOT NULL,
-		mode INTEGER NOT NULL,
-		uid INTEGER NOT NULL,
-		gid INTEGER NOT NULL,
-		link_target TEXT
-	);
-
-	CREATE TABLE IF NOT EXISTS file_chunks (
-		path TEXT NOT NULL,
-		idx INTEGER NOT NULL,
-		chunk_hash TEXT NOT NULL,
-		PRIMARY KEY (path, idx)
-	);
-
-	CREATE TABLE IF NOT EXISTS chunks (
-		chunk_hash TEXT PRIMARY KEY,
-		sha256 TEXT NOT NULL,
-		size INTEGER NOT NULL
-	);
-
-	CREATE TABLE IF NOT EXISTS blobs (
-		blob_hash TEXT PRIMARY KEY,
-		created_ts INTEGER NOT NULL
-	);
-
-	CREATE TABLE IF NOT EXISTS blob_chunks (
-		blob_hash TEXT NOT NULL,
-		chunk_hash TEXT NOT NULL,
-		offset INTEGER NOT NULL,
-		length INTEGER NOT NULL,
-		PRIMARY KEY (blob_hash, chunk_hash)
-	);
-
-	CREATE TABLE IF NOT EXISTS chunk_files (
-		chunk_hash TEXT NOT NULL,
-		file_path TEXT NOT NULL,
-		file_offset INTEGER NOT NULL,
-		length INTEGER NOT NULL,
-		PRIMARY KEY (chunk_hash, file_path)
-	);
-
-	CREATE TABLE IF NOT EXISTS snapshots (
-		id TEXT PRIMARY KEY,
-		hostname TEXT NOT NULL,
-		vaultik_version TEXT NOT NULL,
-		created_ts INTEGER NOT NULL,
-		file_count INTEGER NOT NULL,
-		chunk_count INTEGER NOT NULL,
-		blob_count INTEGER NOT NULL,
-		total_size INTEGER NOT NULL,
-		blob_size INTEGER NOT NULL,
-		compression_ratio REAL NOT NULL
-	);
-	`
-
-	_, err := db.conn.ExecContext(ctx, schema)
+	_, err := db.conn.ExecContext(ctx, schemaSQL)
	return err
}

// NewTestDB creates an in-memory SQLite database for testing purposes.
// The database is automatically initialized with the schema and is ready for use.
// Each call creates a new independent database instance.
func NewTestDB() (*DB, error) {
	return New(context.Background(), ":memory:")
}

// LogSQL logs SQL queries and their arguments when debug mode is enabled.
// Debug mode is activated by setting the GODEBUG environment variable to include "vaultik".
// This is useful for troubleshooting database operations and understanding query patterns.
//
// The operation parameter describes the type of SQL operation (e.g., "Execute", "Query").
// The query parameter is the SQL statement being executed.
// The args parameter contains the query arguments that will be interpolated.
func LogSQL(operation, query string, args ...interface{}) {
	if strings.Contains(os.Getenv("GODEBUG"), "vaultik") {
		log.Debug(
			"SQL "+operation,
			"query",
			strings.TrimSpace(query),
			"args",
			fmt.Sprintf("%v", args),
		)
	}
}

@@ -67,21 +67,26 @@ func TestDatabaseConcurrentAccess(t *testing.T) {
	}()

	// Test concurrent writes
-	done := make(chan bool, 10)
+	type result struct {
+		index int
+		err   error
+	}
+	results := make(chan result, 10)

	for i := 0; i < 10; i++ {
		go func(i int) {
-			_, err := db.ExecWithLock(ctx, "INSERT INTO chunks (chunk_hash, sha256, size) VALUES (?, ?, ?)",
-				fmt.Sprintf("hash%d", i), fmt.Sprintf("sha%d", i), i*1024)
-			if err != nil {
-				t.Errorf("concurrent insert failed: %v", err)
-			}
-			done <- true
+			_, err := db.ExecWithLog(ctx, "INSERT INTO chunks (chunk_hash, size) VALUES (?, ?)",
+				fmt.Sprintf("hash%d", i), i*1024)
+			results <- result{index: i, err: err}
		}(i)
	}

-	// Wait for all goroutines
+	// Wait for all goroutines and check results
	for i := 0; i < 10; i++ {
-		<-done
+		r := <-results
+		if r.err != nil {
+			t.Fatalf("concurrent insert %d failed: %v", r.index, r.err)
+		}
	}

	// Verify all inserts succeeded

@@ -16,16 +16,16 @@ func NewFileChunkRepository(db *DB) *FileChunkRepository {

func (r *FileChunkRepository) Create(ctx context.Context, tx *sql.Tx, fc *FileChunk) error {
	query := `
-		INSERT INTO file_chunks (path, idx, chunk_hash)
+		INSERT INTO file_chunks (file_id, idx, chunk_hash)
		VALUES (?, ?, ?)
-		ON CONFLICT(path, idx) DO NOTHING
+		ON CONFLICT(file_id, idx) DO NOTHING
	`

	var err error
	if tx != nil {
-		_, err = tx.ExecContext(ctx, query, fc.Path, fc.Idx, fc.ChunkHash)
+		_, err = tx.ExecContext(ctx, query, fc.FileID, fc.Idx, fc.ChunkHash)
	} else {
-		_, err = r.db.ExecWithLock(ctx, query, fc.Path, fc.Idx, fc.ChunkHash)
+		_, err = r.db.ExecWithLog(ctx, query, fc.FileID, fc.Idx, fc.ChunkHash)
	}

	if err != nil {
@@ -37,10 +37,11 @@ func (r *FileChunkRepository) Create(ctx context.Context, tx *sql.Tx, fc *FileCh

func (r *FileChunkRepository) GetByPath(ctx context.Context, path string) ([]*FileChunk, error) {
	query := `
-		SELECT path, idx, chunk_hash
-		FROM file_chunks
-		WHERE path = ?
-		ORDER BY idx
+		SELECT fc.file_id, fc.idx, fc.chunk_hash
+		FROM file_chunks fc
+		JOIN files f ON fc.file_id = f.id
+		WHERE f.path = ?
+		ORDER BY fc.idx
	`

	rows, err := r.db.conn.QueryContext(ctx, query, path)
@@ -52,7 +53,7 @@ func (r *FileChunkRepository) GetByPath(ctx context.Context, path string) ([]*Fi
	var fileChunks []*FileChunk
	for rows.Next() {
		var fc FileChunk
-		err := rows.Scan(&fc.Path, &fc.Idx, &fc.ChunkHash)
+		err := rows.Scan(&fc.FileID, &fc.Idx, &fc.ChunkHash)
		if err != nil {
			return nil, fmt.Errorf("scanning file chunk: %w", err)
		}
@@ -62,14 +63,73 @@ func (r *FileChunkRepository) GetByPath(ctx context.Context, path string) ([]*Fi
	return fileChunks, rows.Err()
}

// GetByFileID retrieves file chunks by file ID
func (r *FileChunkRepository) GetByFileID(ctx context.Context, fileID string) ([]*FileChunk, error) {
	query := `
		SELECT file_id, idx, chunk_hash
		FROM file_chunks
		WHERE file_id = ?
		ORDER BY idx
	`

	rows, err := r.db.conn.QueryContext(ctx, query, fileID)
	if err != nil {
		return nil, fmt.Errorf("querying file chunks: %w", err)
	}
	defer CloseRows(rows)

	var fileChunks []*FileChunk
	for rows.Next() {
		var fc FileChunk
		err := rows.Scan(&fc.FileID, &fc.Idx, &fc.ChunkHash)
		if err != nil {
			return nil, fmt.Errorf("scanning file chunk: %w", err)
		}
		fileChunks = append(fileChunks, &fc)
	}

	return fileChunks, rows.Err()
}

// GetByPathTx retrieves file chunks within a transaction
func (r *FileChunkRepository) GetByPathTx(ctx context.Context, tx *sql.Tx, path string) ([]*FileChunk, error) {
	query := `
		SELECT fc.file_id, fc.idx, fc.chunk_hash
		FROM file_chunks fc
		JOIN files f ON fc.file_id = f.id
		WHERE f.path = ?
		ORDER BY fc.idx
	`

	LogSQL("GetByPathTx", query, path)
	rows, err := tx.QueryContext(ctx, query, path)
	if err != nil {
		return nil, fmt.Errorf("querying file chunks: %w", err)
	}
	defer CloseRows(rows)

	var fileChunks []*FileChunk
	for rows.Next() {
		var fc FileChunk
		err := rows.Scan(&fc.FileID, &fc.Idx, &fc.ChunkHash)
		if err != nil {
			return nil, fmt.Errorf("scanning file chunk: %w", err)
		}
		fileChunks = append(fileChunks, &fc)
	}
	LogSQL("GetByPathTx", "Complete", path, "count", len(fileChunks))

	return fileChunks, rows.Err()
}

func (r *FileChunkRepository) DeleteByPath(ctx context.Context, tx *sql.Tx, path string) error {
-	query := `DELETE FROM file_chunks WHERE path = ?`
+	query := `DELETE FROM file_chunks WHERE file_id = (SELECT id FROM files WHERE path = ?)`

	var err error
	if tx != nil {
		_, err = tx.ExecContext(ctx, query, path)
	} else {
-		_, err = r.db.ExecWithLock(ctx, query, path)
+		_, err = r.db.ExecWithLog(ctx, query, path)
	}

	if err != nil {
@@ -78,3 +138,37 @@ func (r *FileChunkRepository) DeleteByPath(ctx context.Context, tx *sql.Tx, path

	return nil
}

// DeleteByFileID deletes all chunks for a file by its UUID
func (r *FileChunkRepository) DeleteByFileID(ctx context.Context, tx *sql.Tx, fileID string) error {
	query := `DELETE FROM file_chunks WHERE file_id = ?`

	var err error
	if tx != nil {
		_, err = tx.ExecContext(ctx, query, fileID)
	} else {
		_, err = r.db.ExecWithLog(ctx, query, fileID)
	}

	if err != nil {
		return fmt.Errorf("deleting file chunks: %w", err)
	}

	return nil
}

// GetByFile is an alias for GetByPath for compatibility
func (r *FileChunkRepository) GetByFile(ctx context.Context, path string) ([]*FileChunk, error) {
	LogSQL("GetByFile", "Starting", path)
	result, err := r.GetByPath(ctx, path)
	LogSQL("GetByFile", "Complete", path, "count", len(result))
	return result, err
}

// GetByFileTx retrieves file chunks within a transaction
func (r *FileChunkRepository) GetByFileTx(ctx context.Context, tx *sql.Tx, path string) ([]*FileChunk, error) {
	LogSQL("GetByFileTx", "Starting", path)
	result, err := r.GetByPathTx(ctx, tx, path)
	LogSQL("GetByFileTx", "Complete", path, "count", len(result))
	return result, err
}

@@ -4,6 +4,7 @@ import (
	"context"
	"fmt"
	"testing"
+	"time"
)

func TestFileChunkRepository(t *testing.T) {
@@ -12,22 +13,54 @@ func TestFileChunkRepository(t *testing.T) {

	ctx := context.Background()
	repo := NewFileChunkRepository(db)
+	fileRepo := NewFileRepository(db)
+
+	// Create test file first
+	testTime := time.Now().Truncate(time.Second)
+	file := &File{
+		Path:       "/test/file.txt",
+		MTime:      testTime,
+		CTime:      testTime,
+		Size:       3072,
+		Mode:       0644,
+		UID:        1000,
+		GID:        1000,
+		LinkTarget: "",
+	}
+	err := fileRepo.Create(ctx, nil, file)
+	if err != nil {
+		t.Fatalf("failed to create file: %v", err)
+	}
+
+	// Create chunks first
+	chunks := []string{"chunk1", "chunk2", "chunk3"}
+	chunkRepo := NewChunkRepository(db)
+	for _, chunkHash := range chunks {
+		chunk := &Chunk{
+			ChunkHash: chunkHash,
+			Size:      1024,
+		}
+		err = chunkRepo.Create(ctx, nil, chunk)
+		if err != nil {
+			t.Fatalf("failed to create chunk %s: %v", chunkHash, err)
+		}
+	}

	// Test Create
	fc1 := &FileChunk{
-		Path:      "/test/file.txt",
+		FileID:    file.ID,
		Idx:       0,
		ChunkHash: "chunk1",
	}

-	err := repo.Create(ctx, nil, fc1)
+	err = repo.Create(ctx, nil, fc1)
	if err != nil {
		t.Fatalf("failed to create file chunk: %v", err)
	}

	// Add more chunks for the same file
	fc2 := &FileChunk{
-		Path:      "/test/file.txt",
+		FileID:    file.ID,
		Idx:       1,
		ChunkHash: "chunk2",
	}
@@ -37,7 +70,7 @@ func TestFileChunkRepository(t *testing.T) {
	}

	fc3 := &FileChunk{
-		Path:      "/test/file.txt",
+		FileID:    file.ID,
		Idx:       2,
		ChunkHash: "chunk3",
	}
@@ -46,17 +79,17 @@ func TestFileChunkRepository(t *testing.T) {
		t.Fatalf("failed to create third file chunk: %v", err)
	}

-	// Test GetByPath
-	chunks, err := repo.GetByPath(ctx, "/test/file.txt")
+	// Test GetByFile
+	fileChunks, err := repo.GetByFile(ctx, "/test/file.txt")
	if err != nil {
		t.Fatalf("failed to get file chunks: %v", err)
	}
-	if len(chunks) != 3 {
-		t.Errorf("expected 3 chunks, got %d", len(chunks))
+	if len(fileChunks) != 3 {
+		t.Errorf("expected 3 chunks, got %d", len(fileChunks))
	}

	// Verify order
-	for i, chunk := range chunks {
+	for i, chunk := range fileChunks {
		if chunk.Idx != i {
			t.Errorf("wrong chunk order: expected idx %d, got %d", i, chunk.Idx)
		}
@@ -68,18 +101,18 @@ func TestFileChunkRepository(t *testing.T) {
		t.Fatalf("failed to create duplicate file chunk: %v", err)
	}

-	// Test DeleteByPath
-	err = repo.DeleteByPath(ctx, nil, "/test/file.txt")
+	// Test DeleteByFileID
+	err = repo.DeleteByFileID(ctx, nil, file.ID)
	if err != nil {
		t.Fatalf("failed to delete file chunks: %v", err)
	}

-	chunks, err = repo.GetByPath(ctx, "/test/file.txt")
+	fileChunks, err = repo.GetByFileID(ctx, file.ID)
	if err != nil {
		t.Fatalf("failed to get deleted file chunks: %v", err)
	}
-	if len(chunks) != 0 {
-		t.Errorf("expected 0 chunks after delete, got %d", len(chunks))
+	if len(fileChunks) != 0 {
+		t.Errorf("expected 0 chunks after delete, got %d", len(fileChunks))
	}
}

@@ -89,15 +122,54 @@ func TestFileChunkRepositoryMultipleFiles(t *testing.T) {
|
||||
|
||||
ctx := context.Background()
|
||||
repo := NewFileChunkRepository(db)
|
||||
fileRepo := NewFileRepository(db)
|
||||
|
||||
// Create test files
|
||||
testTime := time.Now().Truncate(time.Second)
|
||||
filePaths := []string{"/file1.txt", "/file2.txt", "/file3.txt"}
|
||||
files := make([]*File, len(filePaths))
|
||||
|
||||
for i, path := range filePaths {
|
||||
file := &File{
|
||||
Path: path,
|
||||
MTime: testTime,
|
||||
CTime: testTime,
|
||||
Size: 2048,
|
||||
Mode: 0644,
|
||||
UID: 1000,
|
||||
GID: 1000,
|
||||
LinkTarget: "",
|
||||
}
|
||||
err := fileRepo.Create(ctx, nil, file)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file %s: %v", path, err)
|
||||
}
|
||||
files[i] = file
|
||||
}
|
||||
|
||||
// Create all chunks first
|
||||
chunkRepo := NewChunkRepository(db)
|
||||
for i := range files {
|
||||
for j := 0; j < 2; j++ {
|
||||
chunkHash := fmt.Sprintf("file%d_chunk%d", i, j)
|
||||
chunk := &Chunk{
|
||||
ChunkHash: chunkHash,
|
||||
Size: 1024,
|
||||
}
|
||||
err := chunkRepo.Create(ctx, nil, chunk)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create chunk %s: %v", chunkHash, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create chunks for multiple files
|
||||
files := []string{"/file1.txt", "/file2.txt", "/file3.txt"}
|
||||
for _, path := range files {
|
||||
for i := 0; i < 2; i++ {
|
||||
for i, file := range files {
|
||||
for j := 0; j < 2; j++ {
|
||||
fc := &FileChunk{
|
||||
Path: path,
|
||||
Idx: i,
|
||||
ChunkHash: fmt.Sprintf("%s_chunk%d", path, i),
|
||||
FileID: file.ID,
|
||||
Idx: j,
|
||||
ChunkHash: fmt.Sprintf("file%d_chunk%d", i, j),
|
||||
}
|
||||
err := repo.Create(ctx, nil, fc)
|
||||
if err != nil {
|
||||
@@ -107,13 +179,13 @@ func TestFileChunkRepositoryMultipleFiles(t *testing.T) {
|
||||
}
|
||||
|
||||
// Verify each file has correct chunks
|
||||
for _, path := range files {
|
||||
chunks, err := repo.GetByPath(ctx, path)
|
||||
for i, file := range files {
|
||||
chunks, err := repo.GetByFileID(ctx, file.ID)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to get chunks for %s: %v", path, err)
|
||||
t.Fatalf("failed to get chunks for file %d: %v", i, err)
|
||||
}
|
||||
if len(chunks) != 2 {
|
||||
t.Errorf("expected 2 chunks for %s, got %d", path, len(chunks))
|
||||
t.Errorf("expected 2 chunks for file %d, got %d", i, len(chunks))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
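The tests above assert that chunks come back ordered by `Idx`. A minimal stand-alone sketch (hypothetical names, not Vaultik's actual API) of why that ordering matters when reassembling a file from its chunk mapping:

```go
package main

import (
	"fmt"
	"sort"
)

// fileChunk mirrors the role of the FileChunk mapping: Idx records a
// chunk's position within its file.
type fileChunk struct {
	Idx  int
	Data string
}

// reassemble sorts chunks by Idx and concatenates their data; without a
// stable ordering the restored file contents would be scrambled.
func reassemble(chunks []fileChunk) string {
	sort.Slice(chunks, func(a, b int) bool { return chunks[a].Idx < chunks[b].Idx })
	out := ""
	for _, c := range chunks {
		out += c.Data
	}
	return out
}

func main() {
	// Chunks may arrive out of order from a deduplicated store.
	chunks := []fileChunk{{2, "baz"}, {0, "foo"}, {1, "bar"}}
	fmt.Println(reassemble(chunks)) // foobarbaz
}
```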
@@ -5,6 +5,9 @@ import (
 	"database/sql"
 	"fmt"
 	"time"
+
+	"git.eeqj.de/sneak/vaultik/internal/log"
+	"github.com/google/uuid"
 )

 type FileRepository struct {
@@ -16,9 +19,14 @@ func NewFileRepository(db *DB) *FileRepository {
 }

 func (r *FileRepository) Create(ctx context.Context, tx *sql.Tx, file *File) error {
+	// Generate UUID if not provided
+	if file.ID == "" {
+		file.ID = uuid.New().String()
+	}
+
 	query := `
-		INSERT INTO files (path, mtime, ctime, size, mode, uid, gid, link_target)
-		VALUES (?, ?, ?, ?, ?, ?, ?, ?)
+		INSERT INTO files (id, path, mtime, ctime, size, mode, uid, gid, link_target)
+		VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
 		ON CONFLICT(path) DO UPDATE SET
 			mtime = excluded.mtime,
 			ctime = excluded.ctime,
@@ -27,13 +35,15 @@ func (r *FileRepository) Create(ctx context.Context, tx *sql.Tx, file *File) err
 			uid = excluded.uid,
 			gid = excluded.gid,
 			link_target = excluded.link_target
+		RETURNING id
 	`

 	var err error
 	if tx != nil {
-		_, err = tx.ExecContext(ctx, query, file.Path, file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget)
+		LogSQL("Execute", query, file.ID, file.Path, file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget)
+		err = tx.QueryRowContext(ctx, query, file.ID, file.Path, file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget).Scan(&file.ID)
 	} else {
-		_, err = r.db.ExecWithLock(ctx, query, file.Path, file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget)
+		err = r.db.QueryRowWithLog(ctx, query, file.ID, file.Path, file.MTime.Unix(), file.CTime.Unix(), file.Size, file.Mode, file.UID, file.GID, file.LinkTarget).Scan(&file.ID)
 	}

 	if err != nil {
@@ -45,7 +55,7 @@ func (r *FileRepository) Create(ctx context.Context, tx *sql.Tx, file *File) err

 func (r *FileRepository) GetByPath(ctx context.Context, path string) (*File, error) {
 	query := `
-		SELECT path, mtime, ctime, size, mode, uid, gid, link_target
+		SELECT id, path, mtime, ctime, size, mode, uid, gid, link_target
 		FROM files
 		WHERE path = ?
 	`
@@ -55,6 +65,7 @@ func (r *FileRepository) GetByPath(ctx context.Context, path string) (*File, err
 	var linkTarget sql.NullString

 	err := r.db.conn.QueryRowContext(ctx, query, path).Scan(
+		&file.ID,
 		&file.Path,
 		&mtimeUnix,
 		&ctimeUnix,
@@ -72,8 +83,89 @@ func (r *FileRepository) GetByPath(ctx context.Context, path string) (*File, err
 		return nil, fmt.Errorf("querying file: %w", err)
 	}

-	file.MTime = time.Unix(mtimeUnix, 0)
-	file.CTime = time.Unix(ctimeUnix, 0)
+	file.MTime = time.Unix(mtimeUnix, 0).UTC()
+	file.CTime = time.Unix(ctimeUnix, 0).UTC()
 	if linkTarget.Valid {
 		file.LinkTarget = linkTarget.String
 	}

 	return &file, nil
 }

+// GetByID retrieves a file by its UUID
+func (r *FileRepository) GetByID(ctx context.Context, id string) (*File, error) {
+	query := `
+		SELECT id, path, mtime, ctime, size, mode, uid, gid, link_target
+		FROM files
+		WHERE id = ?
+	`
+
+	var file File
+	var mtimeUnix, ctimeUnix int64
+	var linkTarget sql.NullString
+
+	err := r.db.conn.QueryRowContext(ctx, query, id).Scan(
+		&file.ID,
+		&file.Path,
+		&mtimeUnix,
+		&ctimeUnix,
+		&file.Size,
+		&file.Mode,
+		&file.UID,
+		&file.GID,
+		&linkTarget,
+	)
+
+	if err == sql.ErrNoRows {
+		return nil, nil
+	}
+	if err != nil {
+		return nil, fmt.Errorf("querying file: %w", err)
+	}
+
+	file.MTime = time.Unix(mtimeUnix, 0).UTC()
+	file.CTime = time.Unix(ctimeUnix, 0).UTC()
+	if linkTarget.Valid {
+		file.LinkTarget = linkTarget.String
+	}
+
+	return &file, nil
+}
+
+func (r *FileRepository) GetByPathTx(ctx context.Context, tx *sql.Tx, path string) (*File, error) {
+	query := `
+		SELECT id, path, mtime, ctime, size, mode, uid, gid, link_target
+		FROM files
+		WHERE path = ?
+	`
+
+	var file File
+	var mtimeUnix, ctimeUnix int64
+	var linkTarget sql.NullString
+
+	LogSQL("GetByPathTx QueryRowContext", query, path)
+	err := tx.QueryRowContext(ctx, query, path).Scan(
+		&file.ID,
+		&file.Path,
+		&mtimeUnix,
+		&ctimeUnix,
+		&file.Size,
+		&file.Mode,
+		&file.UID,
+		&file.GID,
+		&linkTarget,
+	)
+	LogSQL("GetByPathTx Scan complete", query, path)
+
+	if err == sql.ErrNoRows {
+		return nil, nil
+	}
+	if err != nil {
+		return nil, fmt.Errorf("querying file: %w", err)
+	}
+
+	file.MTime = time.Unix(mtimeUnix, 0).UTC()
+	file.CTime = time.Unix(ctimeUnix, 0).UTC()
+	if linkTarget.Valid {
+		file.LinkTarget = linkTarget.String
+	}
@@ -83,7 +175,7 @@ func (r *FileRepository) GetByPath(ctx context.Context, path string) (*File, err

 func (r *FileRepository) ListModifiedSince(ctx context.Context, since time.Time) ([]*File, error) {
 	query := `
-		SELECT path, mtime, ctime, size, mode, uid, gid, link_target
+		SELECT id, path, mtime, ctime, size, mode, uid, gid, link_target
 		FROM files
 		WHERE mtime >= ?
 		ORDER BY path
@@ -102,6 +194,7 @@ func (r *FileRepository) ListModifiedSince(ctx context.Context, since time.Time)
 		var linkTarget sql.NullString

 		err := rows.Scan(
+			&file.ID,
 			&file.Path,
 			&mtimeUnix,
 			&ctimeUnix,
@@ -134,7 +227,7 @@ func (r *FileRepository) Delete(ctx context.Context, tx *sql.Tx, path string) er
 	if tx != nil {
 		_, err = tx.ExecContext(ctx, query, path)
 	} else {
-		_, err = r.db.ExecWithLock(ctx, query, path)
+		_, err = r.db.ExecWithLog(ctx, query, path)
 	}

 	if err != nil {
@@ -143,3 +236,91 @@ func (r *FileRepository) Delete(ctx context.Context, tx *sql.Tx, path string) er

 	return nil
 }
+
+// DeleteByID deletes a file by its UUID
+func (r *FileRepository) DeleteByID(ctx context.Context, tx *sql.Tx, id string) error {
+	query := `DELETE FROM files WHERE id = ?`
+
+	var err error
+	if tx != nil {
+		_, err = tx.ExecContext(ctx, query, id)
+	} else {
+		_, err = r.db.ExecWithLog(ctx, query, id)
+	}
+
+	if err != nil {
+		return fmt.Errorf("deleting file: %w", err)
+	}
+
+	return nil
+}
+
+func (r *FileRepository) ListByPrefix(ctx context.Context, prefix string) ([]*File, error) {
+	query := `
+		SELECT id, path, mtime, ctime, size, mode, uid, gid, link_target
+		FROM files
+		WHERE path LIKE ? || '%'
+		ORDER BY path
+	`
+
+	rows, err := r.db.conn.QueryContext(ctx, query, prefix)
+	if err != nil {
+		return nil, fmt.Errorf("querying files: %w", err)
+	}
+	defer CloseRows(rows)
+
+	var files []*File
+	for rows.Next() {
+		var file File
+		var mtimeUnix, ctimeUnix int64
+		var linkTarget sql.NullString
+
+		err := rows.Scan(
+			&file.ID,
+			&file.Path,
+			&mtimeUnix,
+			&ctimeUnix,
+			&file.Size,
+			&file.Mode,
+			&file.UID,
+			&file.GID,
+			&linkTarget,
+		)
+		if err != nil {
+			return nil, fmt.Errorf("scanning file: %w", err)
+		}
+
+		file.MTime = time.Unix(mtimeUnix, 0)
+		file.CTime = time.Unix(ctimeUnix, 0)
+		if linkTarget.Valid {
+			file.LinkTarget = linkTarget.String
+		}
+
+		files = append(files, &file)
+	}
+
+	return files, rows.Err()
+}
+
+// DeleteOrphaned deletes files that are not referenced by any snapshot
+func (r *FileRepository) DeleteOrphaned(ctx context.Context) error {
+	query := `
+		DELETE FROM files
+		WHERE NOT EXISTS (
+			SELECT 1 FROM snapshot_files
+			WHERE snapshot_files.file_id = files.id
+		)
+	`
+
+	result, err := r.db.ExecWithLog(ctx, query)
+	if err != nil {
+		return fmt.Errorf("deleting orphaned files: %w", err)
+	}
+
+	rowsAffected, _ := result.RowsAffected()
+	if rowsAffected > 0 {
+		log.Debug("Deleted orphaned files", "count", rowsAffected)
+	}
+
+	return nil
+}

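The `Create` method above generates a UUID with `uuid.New().String()` from `github.com/google/uuid` when the caller did not supply one. A stdlib-only sketch of the same "fill the ID if empty" guard, generating an RFC 4122 version-4 UUID by hand (this stand-in is illustrative, not the library the repository actually uses):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newUUIDv4 is a stdlib-only stand-in for uuid.New().String(): 16 random
// bytes with the version (4) and variant (10) bits set, formatted as
// 8-4-4-4-12 hex groups.
func newUUIDv4() string {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		panic(err)
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // variant 10
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	// Mirrors Create's "generate UUID if not provided" guard.
	id := ""
	if id == "" {
		id = newUUIDv4()
	}
	fmt.Println(len(id)) // 36: 32 hex digits plus 4 hyphens
}
```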
@@ -1,9 +1,15 @@
+// Package database provides data models and repository interfaces for the Vaultik backup system.
+// It includes types for files, chunks, blobs, snapshots, and their relationships.
 package database

 import "time"

-// File represents a file record in the database
+// File represents a file or directory in the backup system.
+// It stores metadata about files including timestamps, permissions, ownership,
+// and symlink targets. This information is used to restore files with their
+// original attributes.
 type File struct {
+	ID    string // UUID primary key
 	Path  string
 	MTime time.Time
 	CTime time.Time
@@ -14,57 +20,101 @@ type File struct {
 	LinkTarget string // empty for regular files, target path for symlinks
 }

-// IsSymlink returns true if this file is a symbolic link
+// IsSymlink returns true if this file is a symbolic link.
+// A file is considered a symlink if it has a non-empty LinkTarget.
 func (f *File) IsSymlink() bool {
 	return f.LinkTarget != ""
 }

-// FileChunk represents the mapping between files and chunks
+// FileChunk represents the mapping between files and their constituent chunks.
+// Large files are split into multiple chunks for efficient deduplication and storage.
+// The Idx field maintains the order of chunks within a file.
 type FileChunk struct {
-	Path      string
+	FileID    string
 	Idx       int
 	ChunkHash string
 }

-// Chunk represents a chunk record in the database
+// Chunk represents a data chunk in the deduplication system.
+// Files are split into chunks which are content-addressed by their hash.
+// The ChunkHash is the SHA256 hash of the chunk content, used for deduplication.
 type Chunk struct {
 	ChunkHash string
-	SHA256    string
 	Size      int64
 }

-// Blob represents a blob record in the database
+// Blob represents a blob record in the database.
+// A blob is Vaultik's final storage unit - a large file (up to 10GB) containing
+// many compressed and encrypted chunks from multiple source files.
+// Blobs are content-addressed, meaning their filename in S3 is derived from
+// the SHA256 hash of their compressed and encrypted content.
+// The blob creation process is: chunks are accumulated -> compressed with zstd
+// -> encrypted with age -> hashed -> uploaded to S3 with the hash as filename.
 type Blob struct {
-	BlobHash  string
-	CreatedTS time.Time
+	ID               string     // UUID assigned when blob creation starts
+	Hash             string     // SHA256 of final compressed+encrypted content (empty until finalized)
+	CreatedTS        time.Time  // When blob creation started
+	FinishedTS       *time.Time // When blob was finalized (nil if still packing)
+	UncompressedSize int64      // Total size of raw chunks before compression
+	CompressedSize   int64      // Size after compression and encryption
+	UploadedTS       *time.Time // When blob was uploaded to S3 (nil if not uploaded)
 }

-// BlobChunk represents the mapping between blobs and chunks
+// BlobChunk represents the mapping between blobs and the chunks they contain.
+// This allows tracking which chunks are stored in which blobs, along with
+// their position and size within the blob. The offset and length fields
+// enable extracting specific chunks from a blob without processing the entire blob.
 type BlobChunk struct {
-	BlobHash  string
+	BlobID    string
 	ChunkHash string
 	Offset    int64
 	Length    int64
 }

-// ChunkFile represents the reverse mapping of chunks to files
+// ChunkFile represents the reverse mapping showing which files contain a specific chunk.
+// This is used during deduplication to identify all files that share a chunk,
+// which is important for garbage collection and integrity verification.
 type ChunkFile struct {
 	ChunkHash  string
-	FilePath   string
+	FileID     string
 	FileOffset int64
 	Length     int64
 }

 // Snapshot represents a snapshot record in the database
 type Snapshot struct {
-	ID               string
-	Hostname         string
-	VaultikVersion   string
-	CreatedTS        time.Time
-	FileCount        int64
-	ChunkCount       int64
-	BlobCount        int64
-	TotalSize        int64   // Total size of all referenced files
-	BlobSize         int64   // Total size of all referenced blobs (compressed and encrypted)
-	CompressionRatio float64 // Compression ratio (BlobSize / TotalSize)
+	ID                   string
+	Hostname             string
+	VaultikVersion       string
+	VaultikGitRevision   string
+	StartedAt            time.Time
+	CompletedAt          *time.Time // nil if still in progress
+	FileCount            int64
+	ChunkCount           int64
+	BlobCount            int64
+	TotalSize            int64   // Total size of all referenced files
+	BlobSize             int64   // Total size of all referenced blobs (compressed and encrypted)
+	BlobUncompressedSize int64   // Total uncompressed size of all referenced blobs
+	CompressionRatio     float64 // Compression ratio (BlobSize / BlobUncompressedSize)
+	CompressionLevel     int     // Compression level used for this snapshot
+	UploadBytes          int64   // Total bytes uploaded during this snapshot
+	UploadDurationMs     int64   // Total milliseconds spent uploading to S3
 }

+// IsComplete returns true if the snapshot has completed
+func (s *Snapshot) IsComplete() bool {
+	return s.CompletedAt != nil
+}
+
 // SnapshotFile represents the mapping between snapshots and files
 type SnapshotFile struct {
 	SnapshotID string
 	FileID     string
 }

 // SnapshotBlob represents the mapping between snapshots and blobs
 type SnapshotBlob struct {
 	SnapshotID string
 	BlobID     string
 	BlobHash   string // Denormalized for easier manifest generation
 }

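The `Blob` doc comment above describes the pipeline: accumulate chunks, compress with zstd, encrypt with age, then name the blob after the SHA256 of the final bytes. A minimal stdlib-only sketch of that content addressing, using gzip in place of zstd and omitting the age encryption step (both are assumptions for the sake of a dependency-free example):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// blobName accumulates raw chunks into a compressor and returns the hex
// SHA256 of the processed bytes - the name the blob would get in S3 -
// along with the blob contents. The real system compresses with zstd and
// encrypts with age before hashing; this sketch only compresses.
func blobName(chunks [][]byte) (string, []byte) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	for _, c := range chunks {
		zw.Write(c) // accumulate raw chunks into the compressor
	}
	zw.Close() // flush so buf holds the complete compressed stream
	sum := sha256.Sum256(buf.Bytes())
	return hex.EncodeToString(sum[:]), buf.Bytes()
}

func main() {
	name, blob := blobName([][]byte{[]byte("chunk-a"), []byte("chunk-b")})
	fmt.Println(len(name), len(blob) > 0) // 64-char hex name, non-empty blob
}
```

Content addressing makes uploads idempotent: the same processed bytes always map to the same S3 key.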
@@ -7,6 +7,7 @@ import (
 	"path/filepath"

 	"git.eeqj.de/sneak/vaultik/internal/config"
+	"git.eeqj.de/sneak/vaultik/internal/log"
 	"go.uber.org/fx"
 )

@@ -32,7 +33,13 @@ func provideDatabase(lc fx.Lifecycle, cfg *config.Config) (*DB, error) {

 	lc.Append(fx.Hook{
 		OnStop: func(ctx context.Context) error {
-			return db.Close()
+			log.Debug("Database module OnStop hook called")
+			if err := db.Close(); err != nil {
+				log.Error("Failed to close database in OnStop hook", "error", err)
+				return err
+			}
+			log.Debug("Database closed successfully in OnStop hook")
+			return nil
 		},
 	})

@@ -6,6 +6,9 @@ import (
 	"fmt"
 )

+// Repositories provides access to all database repositories.
+// It serves as a centralized access point for all database operations
+// and manages transaction coordination across repositories.
 type Repositories struct {
 	db    *DB
 	Files *FileRepository
@@ -15,8 +18,11 @@ type Repositories struct {
 	BlobChunks *BlobChunkRepository
 	ChunkFiles *ChunkFileRepository
 	Snapshots  *SnapshotRepository
+	Uploads    *UploadRepository
 }

+// NewRepositories creates a new Repositories instance with all repository types.
+// Each repository shares the same database connection for coordinated transactions.
 func NewRepositories(db *DB) *Repositories {
 	return &Repositories{
 		db: db,
@@ -27,20 +33,26 @@ func NewRepositories(db *DB) *Repositories {
 		BlobChunks: NewBlobChunkRepository(db),
 		ChunkFiles: NewChunkFileRepository(db),
 		Snapshots:  NewSnapshotRepository(db),
+		Uploads:    NewUploadRepository(db.conn),
 	}
 }

+// TxFunc is a function that executes within a database transaction.
+// The transaction is automatically committed if the function returns nil,
+// or rolled back if it returns an error.
 type TxFunc func(ctx context.Context, tx *sql.Tx) error

+// WithTx executes a function within a write transaction.
+// SQLite handles its own locking internally, so no explicit locking is needed.
+// The transaction is automatically committed on success or rolled back on error.
+// This method should be used for all write operations to ensure atomicity.
 func (r *Repositories) WithTx(ctx context.Context, fn TxFunc) error {
-	// Acquire write lock for the entire transaction
-	r.db.LockForWrite()
-	defer r.db.UnlockWrite()
-
+	LogSQL("WithTx", "Beginning transaction", "")
 	tx, err := r.db.BeginTx(ctx, nil)
 	if err != nil {
 		return fmt.Errorf("beginning transaction: %w", err)
 	}
+	LogSQL("WithTx", "Transaction started", "")

 	defer func() {
 		if p := recover(); p != nil {
@@ -63,6 +75,10 @@ func (r *Repositories) WithTx(ctx context.Context, fn TxFunc) error {
 	return tx.Commit()
 }

+// WithReadTx executes a function within a read-only transaction.
+// Read transactions can run concurrently with other read transactions
+// but will be blocked by write transactions. The transaction is
+// automatically committed on success or rolled back on error.
 func (r *Repositories) WithReadTx(ctx context.Context, fn TxFunc) error {
 	opts := &sql.TxOptions{
 		ReadOnly: true,

@@ -34,7 +34,6 @@ func TestRepositoriesTransaction(t *testing.T) {
 		// Create chunks
 		chunk1 := &Chunk{
 			ChunkHash: "tx_chunk1",
-			SHA256:    "tx_sha1",
 			Size:      512,
 		}
 		if err := repos.Chunks.Create(ctx, tx, chunk1); err != nil {
@@ -43,7 +42,6 @@ func TestRepositoriesTransaction(t *testing.T) {

 		chunk2 := &Chunk{
 			ChunkHash: "tx_chunk2",
-			SHA256:    "tx_sha2",
 			Size:      512,
 		}
 		if err := repos.Chunks.Create(ctx, tx, chunk2); err != nil {
@@ -52,7 +50,7 @@ func TestRepositoriesTransaction(t *testing.T) {

 		// Map chunks to file
 		fc1 := &FileChunk{
-			Path:      file.Path,
+			FileID:    file.ID,
 			Idx:       0,
 			ChunkHash: chunk1.ChunkHash,
 		}
@@ -61,7 +59,7 @@ func TestRepositoriesTransaction(t *testing.T) {
 		}

 		fc2 := &FileChunk{
-			Path:      file.Path,
+			FileID:    file.ID,
 			Idx:       1,
 			ChunkHash: chunk2.ChunkHash,
 		}
@@ -71,7 +69,8 @@ func TestRepositoriesTransaction(t *testing.T) {

 		// Create blob
 		blob := &Blob{
-			BlobHash:  "tx_blob1",
+			ID:        "tx-blob-id-1",
+			Hash:      "tx_blob1",
 			CreatedTS: time.Now().Truncate(time.Second),
 		}
 		if err := repos.Blobs.Create(ctx, tx, blob); err != nil {
@@ -80,7 +79,7 @@ func TestRepositoriesTransaction(t *testing.T) {

 		// Map chunks to blob
 		bc1 := &BlobChunk{
-			BlobHash:  blob.BlobHash,
+			BlobID:    blob.ID,
 			ChunkHash: chunk1.ChunkHash,
 			Offset:    0,
 			Length:    512,
@@ -90,7 +89,7 @@ func TestRepositoriesTransaction(t *testing.T) {
 		}

 		bc2 := &BlobChunk{
-			BlobHash:  blob.BlobHash,
+			BlobID:    blob.ID,
 			ChunkHash: chunk2.ChunkHash,
 			Offset:    512,
 			Length:    512,
@@ -115,7 +114,7 @@ func TestRepositoriesTransaction(t *testing.T) {
 		t.Error("expected file after transaction")
 	}

-	chunks, err := repos.FileChunks.GetByPath(ctx, "/test/tx_file.txt")
+	chunks, err := repos.FileChunks.GetByFile(ctx, "/test/tx_file.txt")
 	if err != nil {
 		t.Fatalf("failed to get file chunks: %v", err)
 	}
@@ -158,7 +157,6 @@ func TestRepositoriesTransactionRollback(t *testing.T) {
 		// Create a chunk
 		chunk := &Chunk{
 			ChunkHash: "rollback_chunk",
-			SHA256:    "rollback_sha",
 			Size:      1024,
 		}
 		if err := repos.Chunks.Create(ctx, tx, chunk); err != nil {
@@ -217,7 +215,7 @@ func TestRepositoriesReadTransaction(t *testing.T) {
 	var retrievedFile *File
 	err = repos.WithReadTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
 		var err error
-		retrievedFile, err = repos.Files.GetByPath(ctx, "/test/read_file.txt")
+		retrievedFile, err = repos.Files.GetByPathTx(ctx, tx, "/test/read_file.txt")
 		if err != nil {
 			return err
 		}
871
internal/database/repository_comprehensive_test.go
Normal file
@@ -0,0 +1,871 @@
|
||||
package database
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
// TestFileRepositoryUUIDGeneration tests that files get unique UUIDs
|
||||
func TestFileRepositoryUUIDGeneration(t *testing.T) {
|
||||
db, cleanup := setupTestDB(t)
|
||||
defer cleanup()
|
||||
|
||||
ctx := context.Background()
|
||||
repo := NewFileRepository(db)
|
||||
|
||||
// Create multiple files
|
||||
files := []*File{
|
||||
{
|
||||
Path: "/file1.txt",
|
||||
MTime: time.Now().Truncate(time.Second),
|
||||
CTime: time.Now().Truncate(time.Second),
|
||||
Size: 1024,
|
||||
Mode: 0644,
|
||||
UID: 1000,
|
||||
GID: 1000,
|
||||
},
|
||||
{
|
||||
Path: "/file2.txt",
|
||||
MTime: time.Now().Truncate(time.Second),
|
||||
CTime: time.Now().Truncate(time.Second),
|
||||
Size: 2048,
|
||||
Mode: 0644,
|
||||
UID: 1000,
|
||||
GID: 1000,
|
||||
},
|
||||
}
|
||||
|
||||
uuids := make(map[string]bool)
|
||||
for _, file := range files {
|
||||
err := repo.Create(ctx, nil, file)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file: %v", err)
|
||||
}
|
||||
|
||||
// Check UUID was generated
|
||||
if file.ID == "" {
|
||||
t.Error("file ID was not generated")
|
||||
}
|
||||
|
||||
// Check UUID is unique
|
||||
if uuids[file.ID] {
|
||||
t.Errorf("duplicate UUID generated: %s", file.ID)
|
||||
}
|
||||
uuids[file.ID] = true
|
||||
}
|
||||
}
|
||||
|
||||
// TestFileRepositoryGetByID tests retrieving files by UUID
|
||||
func TestFileRepositoryGetByID(t *testing.T) {
|
||||
db, cleanup := setupTestDB(t)
|
||||
defer cleanup()
|
||||
|
||||
ctx := context.Background()
|
||||
repo := NewFileRepository(db)
|
||||
|
||||
// Create a file
|
||||
file := &File{
|
||||
Path: "/test.txt",
|
||||
MTime: time.Now().Truncate(time.Second),
|
||||
CTime: time.Now().Truncate(time.Second),
|
||||
Size: 1024,
|
||||
Mode: 0644,
|
||||
UID: 1000,
|
||||
GID: 1000,
|
||||
}
|
||||
|
||||
err := repo.Create(ctx, nil, file)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file: %v", err)
|
||||
}
|
||||
|
||||
// Retrieve by ID
|
||||
retrieved, err := repo.GetByID(ctx, file.ID)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to get file by ID: %v", err)
|
||||
}
|
||||
|
||||
if retrieved.ID != file.ID {
|
||||
t.Errorf("ID mismatch: expected %s, got %s", file.ID, retrieved.ID)
|
||||
}
|
||||
if retrieved.Path != file.Path {
|
||||
t.Errorf("Path mismatch: expected %s, got %s", file.Path, retrieved.Path)
|
||||
}
|
||||
|
||||
// Test non-existent ID
|
||||
nonExistent, err := repo.GetByID(ctx, "non-existent-uuid")
|
||||
if err != nil {
|
||||
t.Fatalf("GetByID should not return error for non-existent ID: %v", err)
|
||||
}
|
||||
if nonExistent != nil {
|
||||
t.Error("expected nil for non-existent ID")
|
||||
}
|
||||
}
|
||||
|
||||
// TestOrphanedFileCleanup tests the cleanup of orphaned files
|
||||
func TestOrphanedFileCleanup(t *testing.T) {
|
||||
db, cleanup := setupTestDB(t)
|
||||
defer cleanup()
|
||||
|
||||
ctx := context.Background()
|
||||
repos := NewRepositories(db)
|
||||
|
||||
// Create files
|
||||
file1 := &File{
|
||||
Path: "/orphaned.txt",
|
||||
MTime: time.Now().Truncate(time.Second),
|
||||
CTime: time.Now().Truncate(time.Second),
|
||||
Size: 1024,
|
||||
Mode: 0644,
|
||||
UID: 1000,
|
||||
GID: 1000,
|
||||
}
|
||||
file2 := &File{
|
||||
Path: "/referenced.txt",
|
||||
MTime: time.Now().Truncate(time.Second),
|
||||
CTime: time.Now().Truncate(time.Second),
|
||||
Size: 2048,
|
||||
Mode: 0644,
|
||||
UID: 1000,
|
||||
GID: 1000,
|
||||
}
|
||||
|
||||
err := repos.Files.Create(ctx, nil, file1)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file1: %v", err)
|
||||
}
|
||||
err = repos.Files.Create(ctx, nil, file2)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file2: %v", err)
|
||||
}
|
||||
|
||||
// Create a snapshot and reference only file2
|
||||
snapshot := &Snapshot{
|
||||
ID: "test-snapshot",
|
||||
Hostname: "test-host",
|
||||
StartedAt: time.Now(),
|
||||
}
|
||||
err = repos.Snapshots.Create(ctx, nil, snapshot)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create snapshot: %v", err)
|
||||
}
|
||||
|
||||
// Add file2 to snapshot
|
||||
err = repos.Snapshots.AddFileByID(ctx, nil, snapshot.ID, file2.ID)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to add file to snapshot: %v", err)
|
||||
}
|
||||
|
||||
// Run orphaned cleanup
|
||||
err = repos.Files.DeleteOrphaned(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to delete orphaned files: %v", err)
|
||||
}
|
||||
|
||||
// Check that orphaned file is gone
|
||||
orphanedFile, err := repos.Files.GetByID(ctx, file1.ID)
|
||||
if err != nil {
|
||||
t.Fatalf("error getting file: %v", err)
|
||||
}
|
||||
if orphanedFile != nil {
|
||||
t.Error("orphaned file should have been deleted")
|
||||
}
|
||||
|
||||
// Check that referenced file still exists
|
||||
referencedFile, err := repos.Files.GetByID(ctx, file2.ID)
|
||||
if err != nil {
|
||||
t.Fatalf("error getting file: %v", err)
|
||||
}
|
||||
if referencedFile == nil {
|
||||
t.Error("referenced file should not have been deleted")
|
||||
}
|
||||
}
|
||||
|
||||
// TestOrphanedChunkCleanup tests the cleanup of orphaned chunks
|
||||
func TestOrphanedChunkCleanup(t *testing.T) {
|
||||
db, cleanup := setupTestDB(t)
|
||||
defer cleanup()
|
||||
|
||||
ctx := context.Background()
|
||||
repos := NewRepositories(db)
|
||||
|
||||
// Create chunks
|
||||
chunk1 := &Chunk{
|
||||
ChunkHash: "orphaned-chunk",
|
||||
Size: 1024,
|
||||
}
|
||||
chunk2 := &Chunk{
|
||||
ChunkHash: "referenced-chunk",
|
||||
Size: 1024,
|
||||
}
|
||||
|
||||
err := repos.Chunks.Create(ctx, nil, chunk1)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create chunk1: %v", err)
|
||||
}
|
||||
err = repos.Chunks.Create(ctx, nil, chunk2)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create chunk2: %v", err)
|
||||
}
|
||||
|
||||
// Create a file and reference only chunk2
|
||||
file := &File{
|
||||
Path: "/test.txt",
|
||||
MTime: time.Now().Truncate(time.Second),
|
||||
CTime: time.Now().Truncate(time.Second),
|
||||
Size: 1024,
|
||||
Mode: 0644,
|
||||
UID: 1000,
|
||||
GID: 1000,
|
||||
}
|
||||
err = repos.Files.Create(ctx, nil, file)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file: %v", err)
|
||||
}
|
||||
|
||||
// Create file-chunk mapping only for chunk2
|
||||
fc := &FileChunk{
|
||||
FileID: file.ID,
|
||||
Idx: 0,
|
||||
ChunkHash: chunk2.ChunkHash,
|
||||
}
|
||||
err = repos.FileChunks.Create(ctx, nil, fc)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create file chunk: %v", err)
|
||||
}
|
||||
|
||||
// Run orphaned cleanup
|
||||
err = repos.Chunks.DeleteOrphaned(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to delete orphaned chunks: %v", err)
|
||||
}
|
||||
|
||||
// Check that orphaned chunk is gone
|
||||
orphanedChunk, err := repos.Chunks.GetByHash(ctx, chunk1.ChunkHash)
|
||||
if err != nil {
|
||||
t.Fatalf("error getting chunk: %v", err)
|
||||
}
|
||||
if orphanedChunk != nil {
|
||||
t.Error("orphaned chunk should have been deleted")
|
||||
}
|
||||
|
||||
// Check that referenced chunk still exists
|
||||
referencedChunk, err := repos.Chunks.GetByHash(ctx, chunk2.ChunkHash)
|
||||
if err != nil {
|
||||
t.Fatalf("error getting chunk: %v", err)
|
||||
}
|
||||
if referencedChunk == nil {
|
||||
t.Error("referenced chunk should not have been deleted")
|
||||
}
|
||||
}
|
||||

// TestOrphanedBlobCleanup tests the cleanup of orphaned blobs
func TestOrphanedBlobCleanup(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Create blobs
    blob1 := &Blob{
        ID:        "orphaned-blob-id",
        Hash:      "orphaned-blob",
        CreatedTS: time.Now().Truncate(time.Second),
    }
    blob2 := &Blob{
        ID:        "referenced-blob-id",
        Hash:      "referenced-blob",
        CreatedTS: time.Now().Truncate(time.Second),
    }

    err := repos.Blobs.Create(ctx, nil, blob1)
    if err != nil {
        t.Fatalf("failed to create blob1: %v", err)
    }
    err = repos.Blobs.Create(ctx, nil, blob2)
    if err != nil {
        t.Fatalf("failed to create blob2: %v", err)
    }

    // Create a snapshot and reference only blob2
    snapshot := &Snapshot{
        ID:        "test-snapshot",
        Hostname:  "test-host",
        StartedAt: time.Now(),
    }
    err = repos.Snapshots.Create(ctx, nil, snapshot)
    if err != nil {
        t.Fatalf("failed to create snapshot: %v", err)
    }

    // Add blob2 to the snapshot
    err = repos.Snapshots.AddBlob(ctx, nil, snapshot.ID, blob2.ID, blob2.Hash)
    if err != nil {
        t.Fatalf("failed to add blob to snapshot: %v", err)
    }

    // Run orphaned cleanup
    err = repos.Blobs.DeleteOrphaned(ctx)
    if err != nil {
        t.Fatalf("failed to delete orphaned blobs: %v", err)
    }

    // Check that the orphaned blob is gone
    orphanedBlob, err := repos.Blobs.GetByID(ctx, blob1.ID)
    if err != nil {
        t.Fatalf("error getting blob: %v", err)
    }
    if orphanedBlob != nil {
        t.Error("orphaned blob should have been deleted")
    }

    // Check that the referenced blob still exists
    referencedBlob, err := repos.Blobs.GetByID(ctx, blob2.ID)
    if err != nil {
        t.Fatalf("error getting blob: %v", err)
    }
    if referencedBlob == nil {
        t.Error("referenced blob should not have been deleted")
    }
}

// TestFileChunkRepositoryWithUUIDs tests file-chunk relationships with UUIDs
func TestFileChunkRepositoryWithUUIDs(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Create a file
    file := &File{
        Path:  "/test.txt",
        MTime: time.Now().Truncate(time.Second),
        CTime: time.Now().Truncate(time.Second),
        Size:  3072,
        Mode:  0644,
        UID:   1000,
        GID:   1000,
    }
    err := repos.Files.Create(ctx, nil, file)
    if err != nil {
        t.Fatalf("failed to create file: %v", err)
    }

    // Create chunks
    chunks := []string{"chunk1", "chunk2", "chunk3"}
    for i, chunkHash := range chunks {
        chunk := &Chunk{
            ChunkHash: chunkHash,
            Size:      1024,
        }
        err = repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatalf("failed to create chunk: %v", err)
        }

        // Create the file-chunk mapping
        fc := &FileChunk{
            FileID:    file.ID,
            Idx:       i,
            ChunkHash: chunkHash,
        }
        err = repos.FileChunks.Create(ctx, nil, fc)
        if err != nil {
            t.Fatalf("failed to create file chunk: %v", err)
        }
    }

    // Test GetByFileID
    fileChunks, err := repos.FileChunks.GetByFileID(ctx, file.ID)
    if err != nil {
        t.Fatalf("failed to get file chunks: %v", err)
    }
    if len(fileChunks) != 3 {
        t.Errorf("expected 3 chunks, got %d", len(fileChunks))
    }

    // Test DeleteByFileID
    err = repos.FileChunks.DeleteByFileID(ctx, nil, file.ID)
    if err != nil {
        t.Fatalf("failed to delete file chunks: %v", err)
    }

    fileChunks, err = repos.FileChunks.GetByFileID(ctx, file.ID)
    if err != nil {
        t.Fatalf("failed to get file chunks after delete: %v", err)
    }
    if len(fileChunks) != 0 {
        t.Errorf("expected 0 chunks after delete, got %d", len(fileChunks))
    }
}

// TestChunkFileRepositoryWithUUIDs tests chunk-file relationships with UUIDs
func TestChunkFileRepositoryWithUUIDs(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Create files
    file1 := &File{
        Path:  "/file1.txt",
        MTime: time.Now().Truncate(time.Second),
        CTime: time.Now().Truncate(time.Second),
        Size:  1024,
        Mode:  0644,
        UID:   1000,
        GID:   1000,
    }
    file2 := &File{
        Path:  "/file2.txt",
        MTime: time.Now().Truncate(time.Second),
        CTime: time.Now().Truncate(time.Second),
        Size:  1024,
        Mode:  0644,
        UID:   1000,
        GID:   1000,
    }

    err := repos.Files.Create(ctx, nil, file1)
    if err != nil {
        t.Fatalf("failed to create file1: %v", err)
    }
    err = repos.Files.Create(ctx, nil, file2)
    if err != nil {
        t.Fatalf("failed to create file2: %v", err)
    }

    // Create a chunk that appears in both files (deduplication)
    chunk := &Chunk{
        ChunkHash: "shared-chunk",
        Size:      1024,
    }
    err = repos.Chunks.Create(ctx, nil, chunk)
    if err != nil {
        t.Fatalf("failed to create chunk: %v", err)
    }

    // Create chunk-file mappings
    cf1 := &ChunkFile{
        ChunkHash:  chunk.ChunkHash,
        FileID:     file1.ID,
        FileOffset: 0,
        Length:     1024,
    }
    cf2 := &ChunkFile{
        ChunkHash:  chunk.ChunkHash,
        FileID:     file2.ID,
        FileOffset: 512,
        Length:     1024,
    }

    err = repos.ChunkFiles.Create(ctx, nil, cf1)
    if err != nil {
        t.Fatalf("failed to create chunk file 1: %v", err)
    }
    err = repos.ChunkFiles.Create(ctx, nil, cf2)
    if err != nil {
        t.Fatalf("failed to create chunk file 2: %v", err)
    }

    // Test GetByChunkHash
    chunkFiles, err := repos.ChunkFiles.GetByChunkHash(ctx, chunk.ChunkHash)
    if err != nil {
        t.Fatalf("failed to get chunk files: %v", err)
    }
    if len(chunkFiles) != 2 {
        t.Errorf("expected 2 files for chunk, got %d", len(chunkFiles))
    }

    // Test GetByFileID
    chunkFiles, err = repos.ChunkFiles.GetByFileID(ctx, file1.ID)
    if err != nil {
        t.Fatalf("failed to get chunks by file ID: %v", err)
    }
    if len(chunkFiles) != 1 {
        t.Errorf("expected 1 chunk for file, got %d", len(chunkFiles))
    }
}

// TestSnapshotRepositoryExtendedFields tests snapshot with version and git revision
func TestSnapshotRepositoryExtendedFields(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repo := NewSnapshotRepository(db)

    // Create a snapshot with the extended fields populated
    snapshot := &Snapshot{
        ID:                   "test-20250722-120000Z",
        Hostname:             "test-host",
        VaultikVersion:       "0.0.1",
        VaultikGitRevision:   "abc123def456",
        StartedAt:            time.Now(),
        CompletedAt:          nil,
        FileCount:            100,
        ChunkCount:           200,
        BlobCount:            50,
        TotalSize:            1024 * 1024,
        BlobSize:             512 * 1024,
        BlobUncompressedSize: 1024 * 1024,
        CompressionLevel:     6,
        CompressionRatio:     2.0,
        UploadDurationMs:     5000,
    }

    err := repo.Create(ctx, nil, snapshot)
    if err != nil {
        t.Fatalf("failed to create snapshot: %v", err)
    }

    // Retrieve and verify
    retrieved, err := repo.GetByID(ctx, snapshot.ID)
    if err != nil {
        t.Fatalf("failed to get snapshot: %v", err)
    }

    if retrieved.VaultikVersion != snapshot.VaultikVersion {
        t.Errorf("version mismatch: expected %s, got %s", snapshot.VaultikVersion, retrieved.VaultikVersion)
    }
    if retrieved.VaultikGitRevision != snapshot.VaultikGitRevision {
        t.Errorf("git revision mismatch: expected %s, got %s", snapshot.VaultikGitRevision, retrieved.VaultikGitRevision)
    }
    if retrieved.CompressionLevel != snapshot.CompressionLevel {
        t.Errorf("compression level mismatch: expected %d, got %d", snapshot.CompressionLevel, retrieved.CompressionLevel)
    }
    if retrieved.BlobUncompressedSize != snapshot.BlobUncompressedSize {
        t.Errorf("uncompressed size mismatch: expected %d, got %d", snapshot.BlobUncompressedSize, retrieved.BlobUncompressedSize)
    }
    if retrieved.UploadDurationMs != snapshot.UploadDurationMs {
        t.Errorf("upload duration mismatch: expected %d, got %d", snapshot.UploadDurationMs, retrieved.UploadDurationMs)
    }
}

// TestComplexOrphanedDataScenario tests a complex scenario with multiple relationships
func TestComplexOrphanedDataScenario(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Create snapshots
    snapshot1 := &Snapshot{
        ID:        "snapshot1",
        Hostname:  "host1",
        StartedAt: time.Now(),
    }
    snapshot2 := &Snapshot{
        ID:        "snapshot2",
        Hostname:  "host1",
        StartedAt: time.Now(),
    }

    err := repos.Snapshots.Create(ctx, nil, snapshot1)
    if err != nil {
        t.Fatalf("failed to create snapshot1: %v", err)
    }
    err = repos.Snapshots.Create(ctx, nil, snapshot2)
    if err != nil {
        t.Fatalf("failed to create snapshot2: %v", err)
    }

    // Create files
    files := make([]*File, 3)
    for i := range files {
        files[i] = &File{
            Path:  fmt.Sprintf("/file%d.txt", i),
            MTime: time.Now().Truncate(time.Second),
            CTime: time.Now().Truncate(time.Second),
            Size:  1024,
            Mode:  0644,
            UID:   1000,
            GID:   1000,
        }
        err = repos.Files.Create(ctx, nil, files[i])
        if err != nil {
            t.Fatalf("failed to create file%d: %v", i, err)
        }
    }

    // Add files to snapshots:
    //   snapshot1: file0, file1
    //   snapshot2: file1, file2
    // So file0 is only in snapshot1, file1 is in both, and file2 is only in snapshot2.
    err = repos.Snapshots.AddFileByID(ctx, nil, snapshot1.ID, files[0].ID)
    if err != nil {
        t.Fatal(err)
    }
    err = repos.Snapshots.AddFileByID(ctx, nil, snapshot1.ID, files[1].ID)
    if err != nil {
        t.Fatal(err)
    }
    err = repos.Snapshots.AddFileByID(ctx, nil, snapshot2.ID, files[1].ID)
    if err != nil {
        t.Fatal(err)
    }
    err = repos.Snapshots.AddFileByID(ctx, nil, snapshot2.ID, files[2].ID)
    if err != nil {
        t.Fatal(err)
    }

    // Delete snapshot1
    err = repos.Snapshots.DeleteSnapshotFiles(ctx, snapshot1.ID)
    if err != nil {
        t.Fatal(err)
    }
    err = repos.Snapshots.Delete(ctx, snapshot1.ID)
    if err != nil {
        t.Fatal(err)
    }

    // Run orphaned cleanup
    err = repos.Files.DeleteOrphaned(ctx)
    if err != nil {
        t.Fatal(err)
    }

    // Check results.
    // file0 should be deleted (it was only in the deleted snapshot)
    file0, err := repos.Files.GetByID(ctx, files[0].ID)
    if err != nil {
        t.Fatalf("error getting file0: %v", err)
    }
    if file0 != nil {
        t.Error("file0 should have been deleted")
    }

    // file1 should exist (still in snapshot2)
    file1, err := repos.Files.GetByID(ctx, files[1].ID)
    if err != nil {
        t.Fatalf("error getting file1: %v", err)
    }
    if file1 == nil {
        t.Error("file1 should still exist")
    }

    // file2 should exist (still in snapshot2)
    file2, err := repos.Files.GetByID(ctx, files[2].ID)
    if err != nil {
        t.Fatalf("error getting file2: %v", err)
    }
    if file2 == nil {
        t.Error("file2 should still exist")
    }
}

// TestCascadeDelete tests that cascade deletes work properly
func TestCascadeDelete(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Create a file
    file := &File{
        Path:  "/cascade-test.txt",
        MTime: time.Now().Truncate(time.Second),
        CTime: time.Now().Truncate(time.Second),
        Size:  1024,
        Mode:  0644,
        UID:   1000,
        GID:   1000,
    }
    err := repos.Files.Create(ctx, nil, file)
    if err != nil {
        t.Fatalf("failed to create file: %v", err)
    }

    // Create chunks and file-chunk mappings
    for i := 0; i < 3; i++ {
        chunk := &Chunk{
            ChunkHash: fmt.Sprintf("cascade-chunk-%d", i),
            Size:      1024,
        }
        err = repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatalf("failed to create chunk: %v", err)
        }

        fc := &FileChunk{
            FileID:    file.ID,
            Idx:       i,
            ChunkHash: chunk.ChunkHash,
        }
        err = repos.FileChunks.Create(ctx, nil, fc)
        if err != nil {
            t.Fatalf("failed to create file chunk: %v", err)
        }
    }

    // Verify the file chunks exist
    fileChunks, err := repos.FileChunks.GetByFileID(ctx, file.ID)
    if err != nil {
        t.Fatal(err)
    }
    if len(fileChunks) != 3 {
        t.Errorf("expected 3 file chunks, got %d", len(fileChunks))
    }

    // Delete the file
    err = repos.Files.DeleteByID(ctx, nil, file.ID)
    if err != nil {
        t.Fatalf("failed to delete file: %v", err)
    }

    // Verify the file chunks were cascade deleted
    fileChunks, err = repos.FileChunks.GetByFileID(ctx, file.ID)
    if err != nil {
        t.Fatal(err)
    }
    if len(fileChunks) != 0 {
        t.Errorf("expected 0 file chunks after cascade delete, got %d", len(fileChunks))
    }
}

// TestTransactionIsolation tests that transactions properly isolate changes
func TestTransactionIsolation(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Start a transaction
    err := repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
        // Create a file within the transaction
        file := &File{
            Path:  "/tx-test.txt",
            MTime: time.Now().Truncate(time.Second),
            CTime: time.Now().Truncate(time.Second),
            Size:  1024,
            Mode:  0644,
            UID:   1000,
            GID:   1000,
        }
        err := repos.Files.Create(ctx, tx, file)
        if err != nil {
            return err
        }

        // Within the same transaction we should be able to query it.
        // Note: that would require GetByPath to accept a tx parameter;
        // for now we only test that rollback works.

        // Return an error to trigger rollback
        return fmt.Errorf("intentional rollback")
    })

    if err == nil {
        t.Fatal("expected error from transaction")
    }

    // Verify the file was not created (transaction rolled back)
    files, err := repos.Files.ListByPrefix(ctx, "/tx-test")
    if err != nil {
        t.Fatal(err)
    }
    if len(files) != 0 {
        t.Error("file should not exist after rollback")
    }
}

// TestConcurrentOrphanedCleanup tests that concurrent cleanup operations don't interfere
func TestConcurrentOrphanedCleanup(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Set a 5-second busy timeout to handle concurrent operations
    if _, err := db.conn.Exec("PRAGMA busy_timeout = 5000"); err != nil {
        t.Fatalf("failed to set busy timeout: %v", err)
    }

    // Create a snapshot
    snapshot := &Snapshot{
        ID:        "concurrent-test",
        Hostname:  "test-host",
        StartedAt: time.Now(),
    }
    err := repos.Snapshots.Create(ctx, nil, snapshot)
    if err != nil {
        t.Fatal(err)
    }

    // Create many files, some orphaned
    for i := 0; i < 20; i++ {
        file := &File{
            Path:  fmt.Sprintf("/concurrent-%d.txt", i),
            MTime: time.Now().Truncate(time.Second),
            CTime: time.Now().Truncate(time.Second),
            Size:  1024,
            Mode:  0644,
            UID:   1000,
            GID:   1000,
        }
        err = repos.Files.Create(ctx, nil, file)
        if err != nil {
            t.Fatal(err)
        }

        // Add even-numbered files to the snapshot
        if i%2 == 0 {
            err = repos.Snapshots.AddFileByID(ctx, nil, snapshot.ID, file.ID)
            if err != nil {
                t.Fatal(err)
            }
        }
    }

    // Run multiple cleanup operations concurrently.
    // Note: SQLite has limited support for concurrent writes; the busy
    // timeout set above is what lets all three writers succeed.
    done := make(chan error, 3)
    for i := 0; i < 3; i++ {
        go func() {
            done <- repos.Files.DeleteOrphaned(ctx)
        }()
    }

    // Wait for all to complete
    for i := 0; i < 3; i++ {
        err := <-done
        if err != nil {
            t.Errorf("cleanup %d failed: %v", i, err)
        }
    }

    // Verify the correct files were deleted
    files, err := repos.Files.ListByPrefix(ctx, "/concurrent-")
    if err != nil {
        t.Fatal(err)
    }

    // Should have 10 files remaining (the even-numbered ones)
    if len(files) != 10 {
        t.Errorf("expected 10 files remaining, got %d", len(files))
    }

    // Verify all remaining files are even-numbered
    for _, file := range files {
        var num int
        if _, err := fmt.Sscanf(file.Path, "/concurrent-%d.txt", &num); err != nil {
            t.Errorf("failed to parse file number from %s: %v", file.Path, err)
            continue // num would be stale; skip the parity check
        }
        if num%2 != 0 {
            t.Errorf("odd-numbered file %s should have been deleted", file.Path)
        }
    }
}
165
internal/database/repository_debug_test.go
Normal file
@@ -0,0 +1,165 @@
package database

import (
    "context"
    "testing"
    "time"
)

// TestOrphanedFileCleanupDebug tests orphaned file cleanup with debug output
func TestOrphanedFileCleanupDebug(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Create files
    file1 := &File{
        Path:  "/orphaned.txt",
        MTime: time.Now().Truncate(time.Second),
        CTime: time.Now().Truncate(time.Second),
        Size:  1024,
        Mode:  0644,
        UID:   1000,
        GID:   1000,
    }
    file2 := &File{
        Path:  "/referenced.txt",
        MTime: time.Now().Truncate(time.Second),
        CTime: time.Now().Truncate(time.Second),
        Size:  2048,
        Mode:  0644,
        UID:   1000,
        GID:   1000,
    }

    err := repos.Files.Create(ctx, nil, file1)
    if err != nil {
        t.Fatalf("failed to create file1: %v", err)
    }
    t.Logf("Created file1 with ID: %s", file1.ID)

    err = repos.Files.Create(ctx, nil, file2)
    if err != nil {
        t.Fatalf("failed to create file2: %v", err)
    }
    t.Logf("Created file2 with ID: %s", file2.ID)

    // Create a snapshot and reference only file2
    snapshot := &Snapshot{
        ID:        "test-snapshot",
        Hostname:  "test-host",
        StartedAt: time.Now(),
    }
    err = repos.Snapshots.Create(ctx, nil, snapshot)
    if err != nil {
        t.Fatalf("failed to create snapshot: %v", err)
    }
    t.Logf("Created snapshot: %s", snapshot.ID)

    // Check snapshot_files before adding
    var count int
    err = db.conn.QueryRow("SELECT COUNT(*) FROM snapshot_files").Scan(&count)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("snapshot_files count before add: %d", count)

    // Add file2 to the snapshot
    err = repos.Snapshots.AddFileByID(ctx, nil, snapshot.ID, file2.ID)
    if err != nil {
        t.Fatalf("failed to add file to snapshot: %v", err)
    }
    t.Log("Added file2 to snapshot")

    // Check snapshot_files after adding
    err = db.conn.QueryRow("SELECT COUNT(*) FROM snapshot_files").Scan(&count)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("snapshot_files count after add: %d", count)

    // Check which files are referenced
    rows, err := db.conn.Query("SELECT file_id FROM snapshot_files")
    if err != nil {
        t.Fatal(err)
    }
    defer func() {
        if err := rows.Close(); err != nil {
            t.Logf("failed to close rows: %v", err)
        }
    }()
    t.Log("Files in snapshot_files:")
    for rows.Next() {
        var fileID string
        if err := rows.Scan(&fileID); err != nil {
            t.Fatal(err)
        }
        t.Logf("  - %s", fileID)
    }

    // Check files before cleanup
    err = db.conn.QueryRow("SELECT COUNT(*) FROM files").Scan(&count)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("Files count before cleanup: %d", count)

    // Run orphaned cleanup
    err = repos.Files.DeleteOrphaned(ctx)
    if err != nil {
        t.Fatalf("failed to delete orphaned files: %v", err)
    }
    t.Log("Ran orphaned cleanup")

    // Check files after cleanup
    err = db.conn.QueryRow("SELECT COUNT(*) FROM files").Scan(&count)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("Files count after cleanup: %d", count)

    // List remaining files
    files, err := repos.Files.ListByPrefix(ctx, "/")
    if err != nil {
        t.Fatal(err)
    }
    t.Log("Remaining files:")
    for _, f := range files {
        t.Logf("  - ID: %s, Path: %s", f.ID, f.Path)
    }

    // Check that the orphaned file is gone
    orphanedFile, err := repos.Files.GetByID(ctx, file1.ID)
    if err != nil {
        t.Fatalf("error getting file: %v", err)
    }
    if orphanedFile != nil {
        t.Error("orphaned file should have been deleted")
        // Check why it wasn't deleted
        var exists bool
        err = db.conn.QueryRow(`
            SELECT EXISTS(
                SELECT 1 FROM snapshot_files
                WHERE file_id = ?
            )`, file1.ID).Scan(&exists)
        if err != nil {
            t.Fatal(err)
        }
        t.Logf("File1 exists in snapshot_files: %v", exists)
    } else {
        t.Log("Orphaned file was correctly deleted")
    }

    // Check that the referenced file still exists
    referencedFile, err := repos.Files.GetByID(ctx, file2.ID)
    if err != nil {
        t.Fatalf("error getting file: %v", err)
    }
    if referencedFile == nil {
        t.Error("referenced file should not have been deleted")
    } else {
        t.Log("Referenced file correctly remains")
    }
}
541
internal/database/repository_edge_cases_test.go
Normal file
@@ -0,0 +1,541 @@
package database

import (
    "context"
    "fmt"
    "strings"
    "testing"
    "time"
)

// TestFileRepositoryEdgeCases tests edge cases for the file repository
func TestFileRepositoryEdgeCases(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repo := NewFileRepository(db)

    tests := []struct {
        name    string
        file    *File
        wantErr bool
        errMsg  string
    }{
        {
            name: "empty path",
            file: &File{
                Path:  "",
                MTime: time.Now(),
                CTime: time.Now(),
                Size:  1024,
                Mode:  0644,
                UID:   1000,
                GID:   1000,
            },
            wantErr: false, // empty strings are allowed; only NULL is rejected
        },
        {
            name: "very long path",
            file: &File{
                Path:  "/" + strings.Repeat("a", 4096),
                MTime: time.Now(),
                CTime: time.Now(),
                Size:  1024,
                Mode:  0644,
                UID:   1000,
                GID:   1000,
            },
            wantErr: false,
        },
        {
            name: "path with special characters",
            file: &File{
                Path:  "/test/file with spaces and 特殊文字.txt",
                MTime: time.Now(),
                CTime: time.Now(),
                Size:  1024,
                Mode:  0644,
                UID:   1000,
                GID:   1000,
            },
            wantErr: false,
        },
        {
            name: "zero size file",
            file: &File{
                Path:  "/empty.txt",
                MTime: time.Now(),
                CTime: time.Now(),
                Size:  0,
                Mode:  0644,
                UID:   1000,
                GID:   1000,
            },
            wantErr: false,
        },
        {
            name: "symlink with target",
            file: &File{
                Path:       "/link",
                MTime:      time.Now(),
                CTime:      time.Now(),
                Size:       0,
                Mode:       0777 | 0120000, // symlink mode bits
                UID:        1000,
                GID:        1000,
                LinkTarget: "/target",
            },
            wantErr: false,
        },
    }

    for i, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Add a unique suffix to paths to avoid UNIQUE constraint violations
            if tt.file.Path != "" {
                tt.file.Path = fmt.Sprintf("%s_%d_%d", tt.file.Path, i, time.Now().UnixNano())
            }

            err := repo.Create(ctx, nil, tt.file)
            if (err != nil) != tt.wantErr {
                t.Errorf("Create() error = %v, wantErr %v", err, tt.wantErr)
            }
            if err != nil && tt.errMsg != "" && !strings.Contains(err.Error(), tt.errMsg) {
                t.Errorf("Create() error = %v, want error containing %q", err, tt.errMsg)
            }
        })
    }
}
// TestDuplicateHandling tests handling of duplicate entries
func TestDuplicateHandling(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Test duplicate file paths; Create uses UPSERT logic
    t.Run("duplicate file paths", func(t *testing.T) {
        file1 := &File{
            Path:  "/duplicate.txt",
            MTime: time.Now(),
            CTime: time.Now(),
            Size:  1024,
            Mode:  0644,
            UID:   1000,
            GID:   1000,
        }
        file2 := &File{
            Path:  "/duplicate.txt", // same path
            MTime: time.Now().Add(time.Hour),
            CTime: time.Now().Add(time.Hour),
            Size:  2048,
            Mode:  0644,
            UID:   1000,
            GID:   1000,
        }

        err := repos.Files.Create(ctx, nil, file1)
        if err != nil {
            t.Fatalf("failed to create file1: %v", err)
        }
        originalID := file1.ID

        // Create with the same path should update the existing record (UPSERT behavior)
        err = repos.Files.Create(ctx, nil, file2)
        if err != nil {
            t.Fatalf("failed to create file2: %v", err)
        }

        // Verify the file was updated, not duplicated
        retrievedFile, err := repos.Files.GetByPath(ctx, "/duplicate.txt")
        if err != nil {
            t.Fatalf("failed to retrieve file: %v", err)
        }

        // The file should have been updated with file2's data
        if retrievedFile.Size != 2048 {
            t.Errorf("expected size 2048, got %d", retrievedFile.Size)
        }

        // The ID might differ due to the UPSERT
        if retrievedFile.ID != file2.ID {
            t.Logf("File ID changed from %s to %s during upsert", originalID, retrievedFile.ID)
        }
    })

    // Test duplicate chunk hashes
    t.Run("duplicate chunk hashes", func(t *testing.T) {
        chunk := &Chunk{
            ChunkHash: "duplicate-chunk",
            Size:      1024,
        }

        err := repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatalf("failed to create chunk: %v", err)
        }

        // Creating the same chunk again should be idempotent (ON CONFLICT DO NOTHING)
        err = repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Errorf("duplicate chunk creation should be idempotent, got error: %v", err)
        }
    })

    // Test duplicate file-chunk mappings
    t.Run("duplicate file-chunk mappings", func(t *testing.T) {
        file := &File{
            Path:  "/test-dup-fc.txt",
            MTime: time.Now(),
            CTime: time.Now(),
            Size:  1024,
            Mode:  0644,
            UID:   1000,
            GID:   1000,
        }
        err := repos.Files.Create(ctx, nil, file)
        if err != nil {
            t.Fatal(err)
        }

        chunk := &Chunk{
            ChunkHash: "test-chunk-dup",
            Size:      1024,
        }
        err = repos.Chunks.Create(ctx, nil, chunk)
        if err != nil {
            t.Fatal(err)
        }

        fc := &FileChunk{
            FileID:    file.ID,
            Idx:       0,
            ChunkHash: chunk.ChunkHash,
        }

        err = repos.FileChunks.Create(ctx, nil, fc)
        if err != nil {
            t.Fatal(err)
        }

        // Creating the same mapping again should be idempotent
        err = repos.FileChunks.Create(ctx, nil, fc)
        if err != nil {
            t.Errorf("file-chunk creation should be idempotent, got error: %v", err)
        }
    })
}
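The UPSERT behavior the duplicate-handling test relies on is not itself shown in this diff. Based on the comments in the test ("Create uses UPSERT logic", "ON CONFLICT DO NOTHING"), the repository's INSERT statements presumably take a shape like the following sketch; the column list is hypothetical:

```sql
-- Files: inserting an existing path replaces its metadata (UPSERT).
INSERT INTO files (id, path, mtime, ctime, size, mode, uid, gid)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(path) DO UPDATE SET
    mtime = excluded.mtime,
    ctime = excluded.ctime,
    size  = excluded.size,
    mode  = excluded.mode,
    uid   = excluded.uid,
    gid   = excluded.gid;

-- Chunks: content-addressed, so a duplicate insert is a no-op.
INSERT INTO chunks (chunk_hash, size)
VALUES (?, ?)
ON CONFLICT(chunk_hash) DO NOTHING;
```

This split explains the two assertions in the test: a second file insert changes the stored size, while a second chunk insert succeeds without changing anything.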
// TestNullHandling tests handling of NULL values
func TestNullHandling(t *testing.T) {
    db, cleanup := setupTestDB(t)
    defer cleanup()

    ctx := context.Background()
    repos := NewRepositories(db)

    // Test a file with no link target
    t.Run("file without link target", func(t *testing.T) {
        file := &File{
            Path:       "/regular.txt",
            MTime:      time.Now(),
            CTime:      time.Now(),
            Size:       1024,
            Mode:       0644,
            UID:        1000,
            GID:        1000,
            LinkTarget: "", // should be stored as NULL
        }

        err := repos.Files.Create(ctx, nil, file)
        if err != nil {
            t.Fatal(err)
        }

        retrieved, err := repos.Files.GetByID(ctx, file.ID)
        if err != nil {
            t.Fatal(err)
        }

        if retrieved.LinkTarget != "" {
            t.Errorf("expected empty link target, got %q", retrieved.LinkTarget)
        }
    })

    // Test a snapshot with NULL completed_at
    t.Run("incomplete snapshot", func(t *testing.T) {
        snapshot := &Snapshot{
            ID:          "incomplete-test",
            Hostname:    "test-host",
            StartedAt:   time.Now(),
            CompletedAt: nil, // should remain NULL until completed
        }

        err := repos.Snapshots.Create(ctx, nil, snapshot)
        if err != nil {
            t.Fatal(err)
        }

        retrieved, err := repos.Snapshots.GetByID(ctx, snapshot.ID)
        if err != nil {
            t.Fatal(err)
        }

        if retrieved.CompletedAt != nil {
            t.Error("expected nil CompletedAt for incomplete snapshot")
        }
    })

    // Test a blob with NULL uploaded_ts
    t.Run("blob not uploaded", func(t *testing.T) {
        blob := &Blob{
            ID:         "not-uploaded",
            Hash:       "test-hash",
            CreatedTS:  time.Now(),
            UploadedTS: nil, // not uploaded yet
        }

        err := repos.Blobs.Create(ctx, nil, blob)
        if err != nil {
            t.Fatal(err)
        }

        retrieved, err := repos.Blobs.GetByID(ctx, blob.ID)
        if err != nil {
            t.Fatal(err)
        }

        if retrieved.UploadedTS != nil {
            t.Error("expected nil UploadedTS for non-uploaded blob")
        }
    })
}
|
||||
// TestLargeDatasets tests operations with large amounts of data
func TestLargeDatasets(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping large dataset test in short mode")
	}

	db, cleanup := setupTestDB(t)
	defer cleanup()

	ctx := context.Background()
	repos := NewRepositories(db)

	// Create a snapshot
	snapshot := &Snapshot{
		ID:        "large-dataset-test",
		Hostname:  "test-host",
		StartedAt: time.Now(),
	}
	err := repos.Snapshots.Create(ctx, nil, snapshot)
	if err != nil {
		t.Fatal(err)
	}

	// Create many files
	const fileCount = 1000
	fileIDs := make([]string, fileCount)

	t.Run("create many files", func(t *testing.T) {
		start := time.Now()
		for i := 0; i < fileCount; i++ {
			file := &File{
				Path:  fmt.Sprintf("/large/file%05d.txt", i),
				MTime: time.Now(),
				CTime: time.Now(),
				Size:  int64(i * 1024),
				Mode:  0644,
				UID:   uint32(1000 + (i % 10)),
				GID:   uint32(1000 + (i % 10)),
			}
			err := repos.Files.Create(ctx, nil, file)
			if err != nil {
				t.Fatalf("failed to create file %d: %v", i, err)
			}
			fileIDs[i] = file.ID

			// Add half to snapshot
			if i%2 == 0 {
				err = repos.Snapshots.AddFileByID(ctx, nil, snapshot.ID, file.ID)
				if err != nil {
					t.Fatal(err)
				}
			}
		}
		t.Logf("Created %d files in %v", fileCount, time.Since(start))
	})

	// Test ListByPrefix performance
	t.Run("list by prefix performance", func(t *testing.T) {
		start := time.Now()
		files, err := repos.Files.ListByPrefix(ctx, "/large/")
		if err != nil {
			t.Fatal(err)
		}
		if len(files) != fileCount {
			t.Errorf("expected %d files, got %d", fileCount, len(files))
		}
		t.Logf("Listed %d files in %v", len(files), time.Since(start))
	})

	// Test orphaned cleanup performance
	t.Run("orphaned cleanup performance", func(t *testing.T) {
		start := time.Now()
		err := repos.Files.DeleteOrphaned(ctx)
		if err != nil {
			t.Fatal(err)
		}
		t.Logf("Cleaned up orphaned files in %v", time.Since(start))

		// Verify correct number remain
		files, err := repos.Files.ListByPrefix(ctx, "/large/")
		if err != nil {
			t.Fatal(err)
		}
		if len(files) != fileCount/2 {
			t.Errorf("expected %d files after cleanup, got %d", fileCount/2, len(files))
		}
	})
}

// TestErrorPropagation tests that errors are properly propagated
func TestErrorPropagation(t *testing.T) {
	db, cleanup := setupTestDB(t)
	defer cleanup()

	ctx := context.Background()
	repos := NewRepositories(db)

	// Test GetByID with non-existent ID
	t.Run("GetByID non-existent", func(t *testing.T) {
		file, err := repos.Files.GetByID(ctx, "non-existent-uuid")
		if err != nil {
			t.Errorf("GetByID should not return error for non-existent ID, got: %v", err)
		}
		if file != nil {
			t.Error("expected nil file for non-existent ID")
		}
	})

	// Test GetByPath with non-existent path
	t.Run("GetByPath non-existent", func(t *testing.T) {
		file, err := repos.Files.GetByPath(ctx, "/non/existent/path.txt")
		if err != nil {
			t.Errorf("GetByPath should not return error for non-existent path, got: %v", err)
		}
		if file != nil {
			t.Error("expected nil file for non-existent path")
		}
	})

	// Test invalid foreign key reference
	t.Run("invalid foreign key", func(t *testing.T) {
		fc := &FileChunk{
			FileID:    "non-existent-file-id",
			Idx:       0,
			ChunkHash: "some-chunk",
		}
		err := repos.FileChunks.Create(ctx, nil, fc)
		if err == nil {
			// Fatal, not Error: err is dereferenced below
			t.Fatal("expected error for invalid foreign key")
		}
		if !strings.Contains(err.Error(), "FOREIGN KEY") {
			t.Errorf("expected foreign key error, got: %v", err)
		}
	})
}

// TestQueryInjection tests that the system is safe from SQL injection
func TestQueryInjection(t *testing.T) {
	db, cleanup := setupTestDB(t)
	defer cleanup()

	ctx := context.Background()
	repos := NewRepositories(db)

	// Test various injection attempts
	injectionTests := []string{
		"'; DROP TABLE files; --",
		"' OR '1'='1",
		"'; DELETE FROM files WHERE '1'='1'; --",
		`test'); DROP TABLE files; --`,
	}

	for _, injection := range injectionTests {
		t.Run("injection attempt", func(t *testing.T) {
			// Try injection in file path
			file := &File{
				Path:  injection,
				MTime: time.Now(),
				CTime: time.Now(),
				Size:  1024,
				Mode:  0644,
				UID:   1000,
				GID:   1000,
			}
			_ = repos.Files.Create(ctx, nil, file)
			// Should either succeed (treating as normal string) or fail with constraint
			// but should NOT execute the injected SQL

			// Verify tables still exist
			var count int
			err := db.conn.QueryRow("SELECT COUNT(*) FROM files").Scan(&count)
			if err != nil {
				t.Fatal("files table was damaged by injection")
			}
		})
	}
}

// TestTimezoneHandling tests that times are properly handled in UTC
func TestTimezoneHandling(t *testing.T) {
	db, cleanup := setupTestDB(t)
	defer cleanup()

	ctx := context.Background()
	repos := NewRepositories(db)

	// Create file with specific timezone
	loc, err := time.LoadLocation("America/New_York")
	if err != nil {
		t.Skip("timezone not available")
	}

	// Use Truncate to remove sub-second precision since we store as Unix timestamps
	nyTime := time.Now().In(loc).Truncate(time.Second)
	file := &File{
		Path:  "/timezone-test.txt",
		MTime: nyTime,
		CTime: nyTime,
		Size:  1024,
		Mode:  0644,
		UID:   1000,
		GID:   1000,
	}

	err = repos.Files.Create(ctx, nil, file)
	if err != nil {
		t.Fatal(err)
	}

	// Retrieve and verify times are in UTC
	retrieved, err := repos.Files.GetByID(ctx, file.ID)
	if err != nil {
		t.Fatal(err)
	}

	// Check that times are equivalent (same instant)
	if !retrieved.MTime.Equal(nyTime) {
		t.Error("time was not preserved correctly")
	}

	// Check that retrieved time is in UTC
	if retrieved.MTime.Location() != time.UTC {
		t.Error("retrieved time is not in UTC")
	}
}

internal/database/schema.sql (new file, 118 lines)
@@ -0,0 +1,118 @@
-- Vaultik Database Schema
-- Note: This database does not support migrations. If the schema changes,
-- delete the local database and perform a full backup to recreate it.

-- Files table: stores metadata about files in the filesystem
CREATE TABLE IF NOT EXISTS files (
    id TEXT PRIMARY KEY, -- UUID
    path TEXT NOT NULL UNIQUE,
    mtime INTEGER NOT NULL,
    ctime INTEGER NOT NULL,
    size INTEGER NOT NULL,
    mode INTEGER NOT NULL,
    uid INTEGER NOT NULL,
    gid INTEGER NOT NULL,
    link_target TEXT
);

-- Create index on path for efficient lookups
CREATE INDEX IF NOT EXISTS idx_files_path ON files(path);

-- File chunks table: maps files to their constituent chunks
CREATE TABLE IF NOT EXISTS file_chunks (
    file_id TEXT NOT NULL,
    idx INTEGER NOT NULL,
    chunk_hash TEXT NOT NULL,
    PRIMARY KEY (file_id, idx),
    FOREIGN KEY (file_id) REFERENCES files(id) ON DELETE CASCADE,
    FOREIGN KEY (chunk_hash) REFERENCES chunks(chunk_hash)
);

-- Chunks table: stores unique content-defined chunks
CREATE TABLE IF NOT EXISTS chunks (
    chunk_hash TEXT PRIMARY KEY,
    size INTEGER NOT NULL
);

-- Blobs table: stores packed, compressed, and encrypted blob information
CREATE TABLE IF NOT EXISTS blobs (
    id TEXT PRIMARY KEY,
    blob_hash TEXT UNIQUE,
    created_ts INTEGER NOT NULL,
    finished_ts INTEGER,
    uncompressed_size INTEGER NOT NULL DEFAULT 0,
    compressed_size INTEGER NOT NULL DEFAULT 0,
    uploaded_ts INTEGER
);

-- Blob chunks table: maps chunks to the blobs that contain them
CREATE TABLE IF NOT EXISTS blob_chunks (
    blob_id TEXT NOT NULL,
    chunk_hash TEXT NOT NULL,
    offset INTEGER NOT NULL,
    length INTEGER NOT NULL,
    PRIMARY KEY (blob_id, chunk_hash),
    FOREIGN KEY (blob_id) REFERENCES blobs(id) ON DELETE CASCADE,
    FOREIGN KEY (chunk_hash) REFERENCES chunks(chunk_hash)
);

-- Chunk files table: reverse mapping of chunks to files
CREATE TABLE IF NOT EXISTS chunk_files (
    chunk_hash TEXT NOT NULL,
    file_id TEXT NOT NULL,
    file_offset INTEGER NOT NULL,
    length INTEGER NOT NULL,
    PRIMARY KEY (chunk_hash, file_id),
    FOREIGN KEY (chunk_hash) REFERENCES chunks(chunk_hash),
    FOREIGN KEY (file_id) REFERENCES files(id) ON DELETE CASCADE
);

-- Snapshots table: tracks backup snapshots
CREATE TABLE IF NOT EXISTS snapshots (
    id TEXT PRIMARY KEY,
    hostname TEXT NOT NULL,
    vaultik_version TEXT NOT NULL,
    vaultik_git_revision TEXT NOT NULL,
    started_at INTEGER NOT NULL,
    completed_at INTEGER,
    file_count INTEGER NOT NULL DEFAULT 0,
    chunk_count INTEGER NOT NULL DEFAULT 0,
    blob_count INTEGER NOT NULL DEFAULT 0,
    total_size INTEGER NOT NULL DEFAULT 0,
    blob_size INTEGER NOT NULL DEFAULT 0,
    blob_uncompressed_size INTEGER NOT NULL DEFAULT 0,
    compression_ratio REAL NOT NULL DEFAULT 1.0,
    compression_level INTEGER NOT NULL DEFAULT 3,
    upload_bytes INTEGER NOT NULL DEFAULT 0,
    upload_duration_ms INTEGER NOT NULL DEFAULT 0
);

-- Snapshot files table: maps snapshots to files
CREATE TABLE IF NOT EXISTS snapshot_files (
    snapshot_id TEXT NOT NULL,
    file_id TEXT NOT NULL,
    PRIMARY KEY (snapshot_id, file_id),
    FOREIGN KEY (snapshot_id) REFERENCES snapshots(id) ON DELETE CASCADE,
    FOREIGN KEY (file_id) REFERENCES files(id)
);

-- Snapshot blobs table: maps snapshots to blobs
CREATE TABLE IF NOT EXISTS snapshot_blobs (
    snapshot_id TEXT NOT NULL,
    blob_id TEXT NOT NULL,
    blob_hash TEXT NOT NULL,
    PRIMARY KEY (snapshot_id, blob_id),
    FOREIGN KEY (snapshot_id) REFERENCES snapshots(id) ON DELETE CASCADE,
    FOREIGN KEY (blob_id) REFERENCES blobs(id)
);

-- Uploads table: tracks blob upload metrics
CREATE TABLE IF NOT EXISTS uploads (
    blob_hash TEXT PRIMARY KEY,
    snapshot_id TEXT NOT NULL,
    uploaded_at INTEGER NOT NULL,
    size INTEGER NOT NULL,
    duration_ms INTEGER NOT NULL,
    FOREIGN KEY (blob_hash) REFERENCES blobs(blob_hash),
    FOREIGN KEY (snapshot_id) REFERENCES snapshots(id)
);
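One caveat worth noting for the FOREIGN KEY clauses above: SQLite does not enforce foreign keys unless each connection opts in, so the invalid-foreign-key test earlier only fails as expected when the pragma is enabled. A minimal fragment (the pragma is per-connection state, not part of the schema file; where Vaultik sets it is an assumption):

```sql
-- Must be executed on every new connection; it is not persisted in the database file.
PRAGMA foreign_keys = ON;

-- With the pragma on, an insert like this fails with "FOREIGN KEY constraint failed"
-- because no row in files has id 'missing':
-- INSERT INTO file_chunks (file_id, idx, chunk_hash) VALUES ('missing', 0, 'h');
```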
internal/database/schema/008_uploads.sql (new file, 11 lines)
@@ -0,0 +1,11 @@
-- Track blob upload metrics
CREATE TABLE IF NOT EXISTS uploads (
    blob_hash TEXT PRIMARY KEY,
    uploaded_at TIMESTAMP NOT NULL,
    size INTEGER NOT NULL,
    duration_ms INTEGER NOT NULL,
    FOREIGN KEY (blob_hash) REFERENCES blobs(blob_hash)
);

CREATE INDEX idx_uploads_uploaded_at ON uploads(uploaded_at);
CREATE INDEX idx_uploads_duration ON uploads(duration_ms);
@@ -17,17 +17,27 @@ func NewSnapshotRepository(db *DB) *SnapshotRepository {
 
 func (r *SnapshotRepository) Create(ctx context.Context, tx *sql.Tx, snapshot *Snapshot) error {
 	query := `
-		INSERT INTO snapshots (id, hostname, vaultik_version, created_ts, file_count, chunk_count, blob_count, total_size, blob_size, compression_ratio)
-		VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+		INSERT INTO snapshots (id, hostname, vaultik_version, vaultik_git_revision, started_at, completed_at,
+			file_count, chunk_count, blob_count, total_size, blob_size, blob_uncompressed_size,
+			compression_ratio, compression_level, upload_bytes, upload_duration_ms)
+		VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
 	`
+
+	var completedAt *int64
+	if snapshot.CompletedAt != nil {
+		ts := snapshot.CompletedAt.Unix()
+		completedAt = &ts
+	}
 
 	var err error
 	if tx != nil {
-		_, err = tx.ExecContext(ctx, query, snapshot.ID, snapshot.Hostname, snapshot.VaultikVersion, snapshot.CreatedTS.Unix(),
-			snapshot.FileCount, snapshot.ChunkCount, snapshot.BlobCount, snapshot.TotalSize, snapshot.BlobSize, snapshot.CompressionRatio)
+		_, err = tx.ExecContext(ctx, query, snapshot.ID, snapshot.Hostname, snapshot.VaultikVersion, snapshot.VaultikGitRevision, snapshot.StartedAt.Unix(),
+			completedAt, snapshot.FileCount, snapshot.ChunkCount, snapshot.BlobCount, snapshot.TotalSize, snapshot.BlobSize, snapshot.BlobUncompressedSize,
+			snapshot.CompressionRatio, snapshot.CompressionLevel, snapshot.UploadBytes, snapshot.UploadDurationMs)
 	} else {
-		_, err = r.db.ExecWithLock(ctx, query, snapshot.ID, snapshot.Hostname, snapshot.VaultikVersion, snapshot.CreatedTS.Unix(),
-			snapshot.FileCount, snapshot.ChunkCount, snapshot.BlobCount, snapshot.TotalSize, snapshot.BlobSize, snapshot.CompressionRatio)
+		_, err = r.db.ExecWithLog(ctx, query, snapshot.ID, snapshot.Hostname, snapshot.VaultikVersion, snapshot.VaultikGitRevision, snapshot.StartedAt.Unix(),
+			completedAt, snapshot.FileCount, snapshot.ChunkCount, snapshot.BlobCount, snapshot.TotalSize, snapshot.BlobSize, snapshot.BlobUncompressedSize,
+			snapshot.CompressionRatio, snapshot.CompressionLevel, snapshot.UploadBytes, snapshot.UploadDurationMs)
 	}
 
 	if err != nil {
@@ -58,7 +68,7 @@ func (r *SnapshotRepository) UpdateCounts(ctx context.Context, tx *sql.Tx, snaps
 	if tx != nil {
 		_, err = tx.ExecContext(ctx, query, fileCount, chunkCount, blobCount, totalSize, blobSize, compressionRatio, snapshotID)
 	} else {
-		_, err = r.db.ExecWithLock(ctx, query, fileCount, chunkCount, blobCount, totalSize, blobSize, compressionRatio, snapshotID)
+		_, err = r.db.ExecWithLog(ctx, query, fileCount, chunkCount, blobCount, totalSize, blobSize, compressionRatio, snapshotID)
 	}
 
 	if err != nil {
@@ -68,27 +78,83 @@ func (r *SnapshotRepository) UpdateCounts(ctx context.Context, tx *sql.Tx, snaps
 	return nil
 }
 
+// UpdateExtendedStats updates extended statistics for a snapshot
+func (r *SnapshotRepository) UpdateExtendedStats(ctx context.Context, tx *sql.Tx, snapshotID string, blobUncompressedSize int64, compressionLevel int, uploadDurationMs int64) error {
+	// Calculate compression ratio based on uncompressed vs compressed sizes
+	var compressionRatio float64
+	if blobUncompressedSize > 0 {
+		// Get current blob_size from DB to calculate ratio
+		var blobSize int64
+		queryGet := `SELECT blob_size FROM snapshots WHERE id = ?`
+		if tx != nil {
+			err := tx.QueryRowContext(ctx, queryGet, snapshotID).Scan(&blobSize)
+			if err != nil {
+				return fmt.Errorf("getting blob size: %w", err)
+			}
+		} else {
+			err := r.db.conn.QueryRowContext(ctx, queryGet, snapshotID).Scan(&blobSize)
+			if err != nil {
+				return fmt.Errorf("getting blob size: %w", err)
+			}
+		}
+		compressionRatio = float64(blobSize) / float64(blobUncompressedSize)
+	} else {
+		compressionRatio = 1.0
+	}
+
+	query := `
+		UPDATE snapshots
+		SET blob_uncompressed_size = ?,
+		    compression_ratio = ?,
+		    compression_level = ?,
+		    upload_bytes = blob_size,
+		    upload_duration_ms = ?
+		WHERE id = ?
+	`
+
+	var err error
+	if tx != nil {
+		_, err = tx.ExecContext(ctx, query, blobUncompressedSize, compressionRatio, compressionLevel, uploadDurationMs, snapshotID)
+	} else {
+		_, err = r.db.ExecWithLog(ctx, query, blobUncompressedSize, compressionRatio, compressionLevel, uploadDurationMs, snapshotID)
+	}
+
+	if err != nil {
+		return fmt.Errorf("updating extended stats: %w", err)
+	}
+	return nil
+}
+
 func (r *SnapshotRepository) GetByID(ctx context.Context, snapshotID string) (*Snapshot, error) {
 	query := `
-		SELECT id, hostname, vaultik_version, created_ts, file_count, chunk_count, blob_count, total_size, blob_size, compression_ratio
+		SELECT id, hostname, vaultik_version, vaultik_git_revision, started_at, completed_at,
+			file_count, chunk_count, blob_count, total_size, blob_size, blob_uncompressed_size,
+			compression_ratio, compression_level, upload_bytes, upload_duration_ms
 		FROM snapshots
 		WHERE id = ?
 	`
 
 	var snapshot Snapshot
-	var createdTSUnix int64
+	var startedAtUnix int64
+	var completedAtUnix *int64
 
 	err := r.db.conn.QueryRowContext(ctx, query, snapshotID).Scan(
 		&snapshot.ID,
 		&snapshot.Hostname,
 		&snapshot.VaultikVersion,
-		&createdTSUnix,
+		&snapshot.VaultikGitRevision,
+		&startedAtUnix,
+		&completedAtUnix,
 		&snapshot.FileCount,
 		&snapshot.ChunkCount,
 		&snapshot.BlobCount,
 		&snapshot.TotalSize,
 		&snapshot.BlobSize,
+		&snapshot.BlobUncompressedSize,
 		&snapshot.CompressionRatio,
+		&snapshot.CompressionLevel,
+		&snapshot.UploadBytes,
+		&snapshot.UploadDurationMs,
 	)
 
 	if err == sql.ErrNoRows {
@@ -98,16 +164,20 @@ func (r *SnapshotRepository) GetByID(ctx context.Context, snapshotID string) (*S
 		return nil, fmt.Errorf("querying snapshot: %w", err)
 	}
 
-	snapshot.CreatedTS = time.Unix(createdTSUnix, 0)
+	snapshot.StartedAt = time.Unix(startedAtUnix, 0).UTC()
+	if completedAtUnix != nil {
+		t := time.Unix(*completedAtUnix, 0).UTC()
+		snapshot.CompletedAt = &t
+	}
 
 	return &snapshot, nil
 }
 
 func (r *SnapshotRepository) ListRecent(ctx context.Context, limit int) ([]*Snapshot, error) {
 	query := `
-		SELECT id, hostname, vaultik_version, created_ts, file_count, chunk_count, blob_count, total_size, blob_size, compression_ratio
+		SELECT id, hostname, vaultik_version, vaultik_git_revision, started_at, completed_at, file_count, chunk_count, blob_count, total_size, blob_size, compression_ratio
 		FROM snapshots
-		ORDER BY created_ts DESC
+		ORDER BY started_at DESC
 		LIMIT ?
 	`
 
@@ -120,13 +190,16 @@ func (r *SnapshotRepository) ListRecent(ctx context.Context, limit int) ([]*Snap
 	var snapshots []*Snapshot
 	for rows.Next() {
 		var snapshot Snapshot
-		var createdTSUnix int64
+		var startedAtUnix int64
+		var completedAtUnix *int64
 
 		err := rows.Scan(
 			&snapshot.ID,
 			&snapshot.Hostname,
 			&snapshot.VaultikVersion,
-			&createdTSUnix,
+			&snapshot.VaultikGitRevision,
+			&startedAtUnix,
+			&completedAtUnix,
 			&snapshot.FileCount,
 			&snapshot.ChunkCount,
 			&snapshot.BlobCount,
@@ -138,10 +211,296 @@ func (r *SnapshotRepository) ListRecent(ctx context.Context, limit int) ([]*Snap
 			return nil, fmt.Errorf("scanning snapshot: %w", err)
 		}
 
-		snapshot.CreatedTS = time.Unix(createdTSUnix, 0)
+		snapshot.StartedAt = time.Unix(startedAtUnix, 0)
+		if completedAtUnix != nil {
+			t := time.Unix(*completedAtUnix, 0)
+			snapshot.CompletedAt = &t
+		}
 
 		snapshots = append(snapshots, &snapshot)
 	}
 
 	return snapshots, rows.Err()
 }
+
+// MarkComplete marks a snapshot as completed with the current timestamp
+func (r *SnapshotRepository) MarkComplete(ctx context.Context, tx *sql.Tx, snapshotID string) error {
+	query := `
+		UPDATE snapshots
+		SET completed_at = ?
+		WHERE id = ?
+	`
+
+	completedAt := time.Now().UTC().Unix()
+
+	var err error
+	if tx != nil {
+		_, err = tx.ExecContext(ctx, query, completedAt, snapshotID)
+	} else {
+		_, err = r.db.ExecWithLog(ctx, query, completedAt, snapshotID)
+	}
+
+	if err != nil {
+		return fmt.Errorf("marking snapshot complete: %w", err)
+	}
+
+	return nil
+}
+
+// AddFile adds a file to a snapshot
+func (r *SnapshotRepository) AddFile(ctx context.Context, tx *sql.Tx, snapshotID string, filePath string) error {
+	query := `
+		INSERT OR IGNORE INTO snapshot_files (snapshot_id, file_id)
+		SELECT ?, id FROM files WHERE path = ?
+	`
+
+	var err error
+	if tx != nil {
+		_, err = tx.ExecContext(ctx, query, snapshotID, filePath)
+	} else {
+		_, err = r.db.ExecWithLog(ctx, query, snapshotID, filePath)
+	}
+
+	if err != nil {
+		return fmt.Errorf("adding file to snapshot: %w", err)
+	}
+
+	return nil
+}
+
+// AddFileByID adds a file to a snapshot by file ID
+func (r *SnapshotRepository) AddFileByID(ctx context.Context, tx *sql.Tx, snapshotID string, fileID string) error {
+	query := `
+		INSERT OR IGNORE INTO snapshot_files (snapshot_id, file_id)
+		VALUES (?, ?)
+	`
+
+	var err error
+	if tx != nil {
+		_, err = tx.ExecContext(ctx, query, snapshotID, fileID)
+	} else {
+		_, err = r.db.ExecWithLog(ctx, query, snapshotID, fileID)
+	}
+
+	if err != nil {
+		return fmt.Errorf("adding file to snapshot: %w", err)
+	}
+
+	return nil
+}
+
+// AddBlob adds a blob to a snapshot
+func (r *SnapshotRepository) AddBlob(ctx context.Context, tx *sql.Tx, snapshotID string, blobID string, blobHash string) error {
+	query := `
+		INSERT OR IGNORE INTO snapshot_blobs (snapshot_id, blob_id, blob_hash)
+		VALUES (?, ?, ?)
+	`
+
+	var err error
+	if tx != nil {
+		_, err = tx.ExecContext(ctx, query, snapshotID, blobID, blobHash)
+	} else {
+		_, err = r.db.ExecWithLog(ctx, query, snapshotID, blobID, blobHash)
+	}
+
+	if err != nil {
+		return fmt.Errorf("adding blob to snapshot: %w", err)
+	}
+
+	return nil
+}
+
+// GetBlobHashes returns all blob hashes for a snapshot
+func (r *SnapshotRepository) GetBlobHashes(ctx context.Context, snapshotID string) ([]string, error) {
+	query := `
+		SELECT sb.blob_hash
+		FROM snapshot_blobs sb
+		WHERE sb.snapshot_id = ?
+		ORDER BY sb.blob_hash
+	`
+
+	rows, err := r.db.conn.QueryContext(ctx, query, snapshotID)
+	if err != nil {
+		return nil, fmt.Errorf("querying blob hashes: %w", err)
+	}
+	defer CloseRows(rows)
+
+	var blobs []string
+	for rows.Next() {
+		var blobHash string
+		if err := rows.Scan(&blobHash); err != nil {
+			return nil, fmt.Errorf("scanning blob hash: %w", err)
+		}
+		blobs = append(blobs, blobHash)
+	}
+
+	return blobs, rows.Err()
+}
+
+// GetSnapshotTotalCompressedSize returns the total compressed size of all blobs referenced by a snapshot
+func (r *SnapshotRepository) GetSnapshotTotalCompressedSize(ctx context.Context, snapshotID string) (int64, error) {
+	query := `
+		SELECT COALESCE(SUM(b.compressed_size), 0)
+		FROM snapshot_blobs sb
+		JOIN blobs b ON sb.blob_hash = b.blob_hash
+		WHERE sb.snapshot_id = ?
+	`
+
+	var totalSize int64
+	err := r.db.conn.QueryRowContext(ctx, query, snapshotID).Scan(&totalSize)
+	if err != nil {
+		return 0, fmt.Errorf("querying total compressed size: %w", err)
+	}
+
+	return totalSize, nil
+}
+
+// GetIncompleteSnapshots returns all snapshots that haven't been completed
+func (r *SnapshotRepository) GetIncompleteSnapshots(ctx context.Context) ([]*Snapshot, error) {
+	query := `
+		SELECT id, hostname, vaultik_version, vaultik_git_revision, started_at, completed_at, file_count, chunk_count, blob_count, total_size, blob_size, compression_ratio
+		FROM snapshots
+		WHERE completed_at IS NULL
+		ORDER BY started_at DESC
+	`
+
+	rows, err := r.db.conn.QueryContext(ctx, query)
+	if err != nil {
+		return nil, fmt.Errorf("querying incomplete snapshots: %w", err)
+	}
+	defer CloseRows(rows)
+
+	var snapshots []*Snapshot
+	for rows.Next() {
+		var snapshot Snapshot
+		var startedAtUnix int64
+		var completedAtUnix *int64
+
+		err := rows.Scan(
+			&snapshot.ID,
+			&snapshot.Hostname,
+			&snapshot.VaultikVersion,
+			&snapshot.VaultikGitRevision,
+			&startedAtUnix,
+			&completedAtUnix,
+			&snapshot.FileCount,
+			&snapshot.ChunkCount,
+			&snapshot.BlobCount,
+			&snapshot.TotalSize,
+			&snapshot.BlobSize,
+			&snapshot.CompressionRatio,
+		)
+		if err != nil {
+			return nil, fmt.Errorf("scanning snapshot: %w", err)
+		}
+
+		snapshot.StartedAt = time.Unix(startedAtUnix, 0)
+		if completedAtUnix != nil {
+			t := time.Unix(*completedAtUnix, 0)
+			snapshot.CompletedAt = &t
+		}
+
+		snapshots = append(snapshots, &snapshot)
+	}
+
+	return snapshots, rows.Err()
+}
+
+// GetIncompleteByHostname returns all incomplete snapshots for a specific hostname
+func (r *SnapshotRepository) GetIncompleteByHostname(ctx context.Context, hostname string) ([]*Snapshot, error) {
+	query := `
+		SELECT id, hostname, vaultik_version, vaultik_git_revision, started_at, completed_at, file_count, chunk_count, blob_count, total_size, blob_size, compression_ratio
+		FROM snapshots
+		WHERE completed_at IS NULL AND hostname = ?
+		ORDER BY started_at DESC
+	`
+
+	rows, err := r.db.conn.QueryContext(ctx, query, hostname)
+	if err != nil {
+		return nil, fmt.Errorf("querying incomplete snapshots: %w", err)
+	}
+	defer CloseRows(rows)
+
+	var snapshots []*Snapshot
+	for rows.Next() {
+		var snapshot Snapshot
+		var startedAtUnix int64
+		var completedAtUnix *int64
+
+		err := rows.Scan(
+			&snapshot.ID,
+			&snapshot.Hostname,
+			&snapshot.VaultikVersion,
+			&snapshot.VaultikGitRevision,
+			&startedAtUnix,
+			&completedAtUnix,
+			&snapshot.FileCount,
+			&snapshot.ChunkCount,
+			&snapshot.BlobCount,
+			&snapshot.TotalSize,
+			&snapshot.BlobSize,
+			&snapshot.CompressionRatio,
+		)
+		if err != nil {
+			return nil, fmt.Errorf("scanning snapshot: %w", err)
+		}
+
+		snapshot.StartedAt = time.Unix(startedAtUnix, 0).UTC()
+		if completedAtUnix != nil {
+			t := time.Unix(*completedAtUnix, 0).UTC()
+			snapshot.CompletedAt = &t
+		}
+
+		snapshots = append(snapshots, &snapshot)
+	}
+
+	return snapshots, rows.Err()
+}
+
+// Delete removes a snapshot record
+func (r *SnapshotRepository) Delete(ctx context.Context, snapshotID string) error {
+	query := `DELETE FROM snapshots WHERE id = ?`
+
+	_, err := r.db.ExecWithLog(ctx, query, snapshotID)
+	if err != nil {
+		return fmt.Errorf("deleting snapshot: %w", err)
+	}
+
+	return nil
+}
+
+// DeleteSnapshotFiles removes all snapshot_files entries for a snapshot
+func (r *SnapshotRepository) DeleteSnapshotFiles(ctx context.Context, snapshotID string) error {
+	query := `DELETE FROM snapshot_files WHERE snapshot_id = ?`
+
+	_, err := r.db.ExecWithLog(ctx, query, snapshotID)
+	if err != nil {
+		return fmt.Errorf("deleting snapshot files: %w", err)
+	}
+
+	return nil
+}
+
+// DeleteSnapshotBlobs removes all snapshot_blobs entries for a snapshot
+func (r *SnapshotRepository) DeleteSnapshotBlobs(ctx context.Context, snapshotID string) error {
+	query := `DELETE FROM snapshot_blobs WHERE snapshot_id = ?`
+
+	_, err := r.db.ExecWithLog(ctx, query, snapshotID)
+	if err != nil {
+		return fmt.Errorf("deleting snapshot blobs: %w", err)
+	}
+
+	return nil
+}
+
+// DeleteSnapshotUploads removes all uploads entries for a snapshot
+func (r *SnapshotRepository) DeleteSnapshotUploads(ctx context.Context, snapshotID string) error {
+	query := `DELETE FROM uploads WHERE snapshot_id = ?`
+
+	_, err := r.db.ExecWithLog(ctx, query, snapshotID)
+	if err != nil {
+		return fmt.Errorf("deleting snapshot uploads: %w", err)
+	}
+
+	return nil
+}
@@ -30,7 +30,8 @@ func TestSnapshotRepository(t *testing.T) {
 		ID:             "2024-01-01T12:00:00Z",
 		Hostname:       "test-host",
 		VaultikVersion: "1.0.0",
-		CreatedTS:      time.Now().Truncate(time.Second),
+		StartedAt:      time.Now().Truncate(time.Second),
+		CompletedAt:    nil,
 		FileCount:      100,
 		ChunkCount:     500,
 		BlobCount:      10,
@@ -99,7 +100,8 @@ func TestSnapshotRepository(t *testing.T) {
 		ID:             fmt.Sprintf("2024-01-0%dT12:00:00Z", i),
 		Hostname:       "test-host",
 		VaultikVersion: "1.0.0",
-		CreatedTS:      time.Now().Add(time.Duration(i) * time.Hour).Truncate(time.Second),
+		StartedAt:      time.Now().Add(time.Duration(i) * time.Hour).Truncate(time.Second),
+		CompletedAt:    nil,
 		FileCount:      int64(100 * i),
 		ChunkCount:     int64(500 * i),
 		BlobCount:      int64(10 * i),
@@ -121,7 +123,7 @@ func TestSnapshotRepository(t *testing.T) {
 
 	// Verify order (most recent first)
 	for i := 0; i < len(recent)-1; i++ {
-		if recent[i].CreatedTS.Before(recent[i+1].CreatedTS) {
+		if recent[i].StartedAt.Before(recent[i+1].StartedAt) {
 			t.Error("snapshots not in descending order")
 		}
 	}
@@ -162,7 +164,8 @@ func TestSnapshotRepositoryDuplicate(t *testing.T) {
 		ID:             "2024-01-01T12:00:00Z",
 		Hostname:       "test-host",
 		VaultikVersion: "1.0.0",
-		CreatedTS:      time.Now().Truncate(time.Second),
+		StartedAt:      time.Now().Truncate(time.Second),
+		CompletedAt:    nil,
 		FileCount:      100,
 		ChunkCount:     500,
 		BlobCount:      10,

internal/database/uploads.go (new file, 147 lines)
@@ -0,0 +1,147 @@
package database

import (
	"context"
	"database/sql"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/log"
)

// Upload represents a blob upload record
type Upload struct {
	BlobHash   string
	SnapshotID string
	UploadedAt time.Time
	Size       int64
	DurationMs int64
}

// UploadRepository handles upload records
type UploadRepository struct {
	conn *sql.DB
}

// NewUploadRepository creates a new upload repository
func NewUploadRepository(conn *sql.DB) *UploadRepository {
	return &UploadRepository{conn: conn}
}

// Create inserts a new upload record
func (r *UploadRepository) Create(ctx context.Context, tx *sql.Tx, upload *Upload) error {
	query := `
		INSERT INTO uploads (blob_hash, snapshot_id, uploaded_at, size, duration_ms)
		VALUES (?, ?, ?, ?, ?)
	`

	var err error
	if tx != nil {
		_, err = tx.ExecContext(ctx, query, upload.BlobHash, upload.SnapshotID, upload.UploadedAt, upload.Size, upload.DurationMs)
	} else {
		_, err = r.conn.ExecContext(ctx, query, upload.BlobHash, upload.SnapshotID, upload.UploadedAt, upload.Size, upload.DurationMs)
	}

	return err
}

// GetByBlobHash retrieves an upload record by blob hash
func (r *UploadRepository) GetByBlobHash(ctx context.Context, blobHash string) (*Upload, error) {
	query := `
		SELECT blob_hash, uploaded_at, size, duration_ms
		FROM uploads
		WHERE blob_hash = ?
	`

	var upload Upload
	err := r.conn.QueryRowContext(ctx, query, blobHash).Scan(
		&upload.BlobHash,
		&upload.UploadedAt,
		&upload.Size,
		&upload.DurationMs,
	)

	if err == sql.ErrNoRows {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}

	return &upload, nil
}

// GetRecentUploads retrieves recent uploads ordered by upload time
func (r *UploadRepository) GetRecentUploads(ctx context.Context, limit int) ([]*Upload, error) {
	query := `
		SELECT blob_hash, uploaded_at, size, duration_ms
		FROM uploads
		ORDER BY uploaded_at DESC
		LIMIT ?
	`

	rows, err := r.conn.QueryContext(ctx, query, limit)
	if err != nil {
		return nil, err
	}
	defer func() {
		if err := rows.Close(); err != nil {
			log.Error("failed to close rows", "error", err)
		}
	}()

	var uploads []*Upload
	for rows.Next() {
		var upload Upload
		if err := rows.Scan(&upload.BlobHash, &upload.UploadedAt, &upload.Size, &upload.DurationMs); err != nil {
			return nil, err
		}
		uploads = append(uploads, &upload)
	}

	return uploads, rows.Err()
}

// GetUploadStats returns aggregate statistics for uploads
func (r *UploadRepository) GetUploadStats(ctx context.Context, since time.Time) (*UploadStats, error) {
	query := `
		SELECT
			COUNT(*) as count,
			COALESCE(SUM(size), 0) as total_size,
			COALESCE(AVG(duration_ms), 0) as avg_duration_ms,
			COALESCE(MIN(duration_ms), 0) as min_duration_ms,
			COALESCE(MAX(duration_ms), 0) as max_duration_ms
		FROM uploads
		WHERE uploaded_at >= ?
	`

	var stats UploadStats
	err := r.conn.QueryRowContext(ctx, query, since).Scan(
		&stats.Count,
		&stats.TotalSize,
		&stats.AvgDurationMs,
		&stats.MinDurationMs,
		&stats.MaxDurationMs,
	)

	return &stats, err
}

// UploadStats contains aggregate upload statistics
type UploadStats struct {
	Count         int64
	TotalSize     int64
	AvgDurationMs float64
	MinDurationMs int64
	MaxDurationMs int64
}

// GetCountBySnapshot returns the count of uploads for a specific snapshot
func (r *UploadRepository) GetCountBySnapshot(ctx context.Context, snapshotID string) (int64, error) {
	query := `SELECT COUNT(*) FROM uploads WHERE snapshot_id = ?`
	var count int64
	err := r.conn.QueryRowContext(ctx, query, snapshotID).Scan(&count)
	if err != nil {
		return 0, err
	}
	return count, nil
}
@@ -4,13 +4,16 @@ import (
	"time"
)

// these get populated from main() and copied into the Globals object.
var (
	Appname string = "vaultik"
	Version string = "dev"
	Commit  string = "unknown"
)

// Appname is the application name, populated from main().
var Appname string = "vaultik"

// Version is the application version, populated from main().
var Version string = "dev"

// Commit is the git commit hash, populated from main().
var Commit string = "unknown"

// Globals contains application-wide configuration and metadata.
type Globals struct {
	Appname string
	Version string
@@ -18,13 +21,11 @@ type Globals struct {
	StartTime time.Time
}

// New creates and returns a new Globals instance initialized with the package-level variables.
func New() (*Globals, error) {
	n := &Globals{
		Appname:   Appname,
		Version:   Version,
		Commit:    Commit,
		StartTime: time.Now(),
	}

	return n, nil
	return &Globals{
		Appname: Appname,
		Version: Version,
		Commit:  Commit,
	}, nil
}
@@ -2,35 +2,29 @@ package globals

import (
	"testing"

	"go.uber.org/fx"
	"go.uber.org/fx/fxtest"
)

// TestGlobalsNew ensures the globals package initializes correctly
func TestGlobalsNew(t *testing.T) {
	app := fxtest.New(t,
		fx.Provide(New),
		fx.Invoke(func(g *Globals) {
			if g == nil {
				t.Fatal("Globals instance is nil")
			}
	g, err := New()
	if err != nil {
		t.Fatalf("Failed to create Globals: %v", err)
	}

			if g.Appname != "vaultik" {
				t.Errorf("Expected Appname to be 'vaultik', got '%s'", g.Appname)
			}
	if g == nil {
		t.Fatal("Globals instance is nil")
	}

			// Version and Commit will be "dev" and "unknown" by default
			if g.Version == "" {
				t.Error("Version should not be empty")
			}
	if g.Appname != "vaultik" {
		t.Errorf("Expected Appname to be 'vaultik', got '%s'", g.Appname)
	}

			if g.Commit == "" {
				t.Error("Commit should not be empty")
			}
		}),
	)
	// Version and Commit will be "dev" and "unknown" by default
	if g.Version == "" {
		t.Error("Version should not be empty")
	}

	app.RequireStart()
	app.RequireStop()
	if g.Commit == "" {
		t.Error("Commit should not be empty")
	}
}
181 internal/log/log.go Normal file
@@ -0,0 +1,181 @@
package log

import (
	"context"
	"fmt"
	"log/slog"
	"os"
	"path/filepath"
	"runtime"
	"strings"

	"golang.org/x/term"
)

// LogLevel represents the logging level.
type LogLevel int

const (
	// LevelFatal represents a fatal error level that will exit the program.
	LevelFatal LogLevel = iota
	// LevelError represents an error level.
	LevelError
	// LevelWarn represents a warning level.
	LevelWarn
	// LevelNotice represents a notice level (mapped to Info in slog).
	LevelNotice
	// LevelInfo represents an informational level.
	LevelInfo
	// LevelDebug represents a debug level.
	LevelDebug
)

// Config holds logger configuration.
type Config struct {
	Verbose bool
	Debug   bool
	Cron    bool
}

var logger *slog.Logger

// Initialize sets up the global logger based on the provided configuration.
func Initialize(cfg Config) {
	// Determine log level based on configuration
	var level slog.Level

	if cfg.Cron {
		// In cron mode, only show fatal errors (which we'll handle specially)
		level = slog.LevelError
	} else if cfg.Debug || strings.Contains(os.Getenv("GODEBUG"), "vaultik") {
		level = slog.LevelDebug
	} else if cfg.Verbose {
		level = slog.LevelInfo
	} else {
		level = slog.LevelWarn
	}

	// Create handler with appropriate level
	opts := &slog.HandlerOptions{
		Level: level,
	}

	// Check if stdout is a TTY
	if term.IsTerminal(int(os.Stdout.Fd())) {
		// Use colorized TTY handler
		logger = slog.New(NewTTYHandler(os.Stdout, opts))
	} else {
		// Use JSON format for non-TTY output
		logger = slog.New(slog.NewJSONHandler(os.Stdout, opts))
	}

	// Set as default logger
	slog.SetDefault(logger)
}

// getCaller returns the caller information as a string
func getCaller(skip int) string {
	_, file, line, ok := runtime.Caller(skip)
	if !ok {
		return "unknown"
	}
	return fmt.Sprintf("%s:%d", filepath.Base(file), line)
}

// Fatal logs a fatal error message and exits the program with code 1.
func Fatal(msg string, args ...any) {
	if logger != nil {
		// Add caller info to args
		args = append(args, "caller", getCaller(2))
		logger.Error(msg, args...)
	}
	os.Exit(1)
}

// Fatalf logs a formatted fatal error message and exits the program with code 1.
func Fatalf(format string, args ...any) {
	Fatal(fmt.Sprintf(format, args...))
}

// Error logs an error message.
func Error(msg string, args ...any) {
	if logger != nil {
		args = append(args, "caller", getCaller(2))
		logger.Error(msg, args...)
	}
}

// Errorf logs a formatted error message.
func Errorf(format string, args ...any) {
	Error(fmt.Sprintf(format, args...))
}

// Warn logs a warning message.
func Warn(msg string, args ...any) {
	if logger != nil {
		args = append(args, "caller", getCaller(2))
		logger.Warn(msg, args...)
	}
}

// Warnf logs a formatted warning message.
func Warnf(format string, args ...any) {
	Warn(fmt.Sprintf(format, args...))
}

// Notice logs a notice message (mapped to Info level).
func Notice(msg string, args ...any) {
	if logger != nil {
		args = append(args, "caller", getCaller(2))
		logger.Info(msg, args...)
	}
}

// Noticef logs a formatted notice message.
func Noticef(format string, args ...any) {
	Notice(fmt.Sprintf(format, args...))
}

// Info logs an informational message.
func Info(msg string, args ...any) {
	if logger != nil {
		args = append(args, "caller", getCaller(2))
		logger.Info(msg, args...)
	}
}

// Infof logs a formatted informational message.
func Infof(format string, args ...any) {
	Info(fmt.Sprintf(format, args...))
}

// Debug logs a debug message.
func Debug(msg string, args ...any) {
	if logger != nil {
		args = append(args, "caller", getCaller(2))
		logger.Debug(msg, args...)
	}
}

// Debugf logs a formatted debug message.
func Debugf(format string, args ...any) {
	Debug(fmt.Sprintf(format, args...))
}

// With returns a logger with additional context attributes.
func With(args ...any) *slog.Logger {
	if logger != nil {
		return logger.With(args...)
	}
	return slog.Default()
}

// WithContext returns a logger with the provided context.
func WithContext(ctx context.Context) *slog.Logger {
	return logger
}

// Logger returns the underlying slog.Logger instance.
func Logger() *slog.Logger {
	return logger
}
24 internal/log/module.go Normal file
@@ -0,0 +1,24 @@
package log

import (
	"go.uber.org/fx"
)

// Module exports logging functionality for dependency injection.
var Module = fx.Module("log",
	fx.Invoke(func(cfg Config) {
		Initialize(cfg)
	}),
)

// New creates a new logger configuration from provided options.
func New(opts LogOptions) Config {
	return Config(opts)
}

// LogOptions are provided by the CLI.
type LogOptions struct {
	Verbose bool
	Debug   bool
	Cron    bool
}
140 internal/log/tty_handler.go Normal file
@@ -0,0 +1,140 @@
package log

import (
	"context"
	"fmt"
	"io"
	"log/slog"
	"sync"
	"time"
)

// ANSI color codes
const (
	colorReset  = "\033[0m"
	colorRed    = "\033[31m"
	colorYellow = "\033[33m"
	colorBlue   = "\033[34m"
	colorGray   = "\033[90m"
	colorGreen  = "\033[32m"
	colorCyan   = "\033[36m"
	colorBold   = "\033[1m"
)

// TTYHandler is a custom slog handler for TTY output with colors.
type TTYHandler struct {
	opts slog.HandlerOptions
	mu   sync.Mutex
	out  io.Writer
}

// NewTTYHandler creates a new TTY handler with colored output.
func NewTTYHandler(out io.Writer, opts *slog.HandlerOptions) *TTYHandler {
	if opts == nil {
		opts = &slog.HandlerOptions{}
	}
	return &TTYHandler{
		out:  out,
		opts: *opts,
	}
}

// Enabled reports whether the handler handles records at the given level.
func (h *TTYHandler) Enabled(_ context.Context, level slog.Level) bool {
	return level >= h.opts.Level.Level()
}

// Handle writes the log record to the output with color formatting.
func (h *TTYHandler) Handle(_ context.Context, r slog.Record) error {
	h.mu.Lock()
	defer h.mu.Unlock()

	// Format timestamp
	timestamp := r.Time.Format("15:04:05")

	// Level and color
	level := r.Level.String()
	var levelColor string
	switch r.Level {
	case slog.LevelDebug:
		levelColor = colorGray
		level = "DEBUG"
	case slog.LevelInfo:
		levelColor = colorGreen
		level = "INFO "
	case slog.LevelWarn:
		levelColor = colorYellow
		level = "WARN "
	case slog.LevelError:
		levelColor = colorRed
		level = "ERROR"
	default:
		levelColor = colorReset
	}

	// Print main message
	_, _ = fmt.Fprintf(h.out, "%s%s%s %s%s%s %s%s%s",
		colorGray, timestamp, colorReset,
		levelColor, level, colorReset,
		colorBold, r.Message, colorReset)

	// Print attributes
	r.Attrs(func(a slog.Attr) bool {
		value := a.Value.String()
		// Special handling for certain attribute types
		switch a.Value.Kind() {
		case slog.KindDuration:
			if d, ok := a.Value.Any().(time.Duration); ok {
				value = formatDuration(d)
			}
		case slog.KindInt64:
			if a.Key == "bytes" {
				value = formatBytes(a.Value.Int64())
			}
		}

		_, _ = fmt.Fprintf(h.out, " %s%s%s=%s%s%s",
			colorCyan, a.Key, colorReset,
			colorBlue, value, colorReset)
		return true
	})

	_, _ = fmt.Fprintln(h.out)
	return nil
}

// WithAttrs returns a new handler with the given attributes.
func (h *TTYHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return h // Simplified for now
}

// WithGroup returns a new handler with the given group name.
func (h *TTYHandler) WithGroup(name string) slog.Handler {
	return h // Simplified for now
}

// formatDuration formats a duration in a human-readable way
func formatDuration(d time.Duration) string {
	if d < time.Millisecond {
		return fmt.Sprintf("%dµs", d.Microseconds())
	} else if d < time.Second {
		return fmt.Sprintf("%dms", d.Milliseconds())
	} else if d < time.Minute {
		return fmt.Sprintf("%.1fs", d.Seconds())
	}
	return d.String()
}

// formatBytes formats bytes in a human-readable way
func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}
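The `formatBytes` helper above uses 1024-based units: values below 1 KiB print as raw bytes, larger values get one decimal place and a K/M/G/T/P/E suffix. A standalone copy of the same function to show its concrete outputs:

```go
package main

import "fmt"

// formatBytes is the same 1024-based humanizer used by the TTY handler.
// The loop finds the largest unit that keeps the quotient under 1024.
func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(formatBytes(512))              // 512 B
	fmt.Println(formatBytes(1536))             // 1.5 KB
	fmt.Println(formatBytes(10 * 1024 * 1024)) // 10.0 MB
}
```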
108 internal/pidlock/pidlock.go Normal file
@@ -0,0 +1,108 @@
// Package pidlock provides process-level locking using PID files.
// It prevents multiple instances of vaultik from running simultaneously,
// which would cause database locking conflicts.
package pidlock

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"
)

// ErrAlreadyRunning indicates another vaultik instance is running.
var ErrAlreadyRunning = errors.New("another vaultik instance is already running")

// Lock represents an acquired PID lock.
type Lock struct {
	path string
}

// Acquire attempts to acquire a PID lock in the specified directory.
// If the lock file exists and the process is still running, it returns
// ErrAlreadyRunning with details about the existing process.
// On success, it writes the current PID to the lock file and returns
// a Lock that must be released with Release().
func Acquire(lockDir string) (*Lock, error) {
	// Ensure lock directory exists
	if err := os.MkdirAll(lockDir, 0700); err != nil {
		return nil, fmt.Errorf("creating lock directory: %w", err)
	}

	lockPath := filepath.Join(lockDir, "vaultik.pid")

	// Check for existing lock
	existingPID, err := readPIDFile(lockPath)
	if err == nil {
		// Lock file exists, check if process is running
		if isProcessRunning(existingPID) {
			return nil, fmt.Errorf("%w (PID %d)", ErrAlreadyRunning, existingPID)
		}
		// Process is not running, stale lock file - we can take over
	}

	// Write our PID
	pid := os.Getpid()
	if err := os.WriteFile(lockPath, []byte(strconv.Itoa(pid)), 0600); err != nil {
		return nil, fmt.Errorf("writing PID file: %w", err)
	}

	return &Lock{path: lockPath}, nil
}

// Release removes the PID lock file.
// It is safe to call Release multiple times.
func (l *Lock) Release() error {
	if l == nil || l.path == "" {
		return nil
	}

	// Verify we still own the lock (our PID is in the file)
	existingPID, err := readPIDFile(l.path)
	if err != nil {
		// File already gone or unreadable - that's fine
		return nil
	}

	if existingPID != os.Getpid() {
		// Someone else wrote to our lock file - don't remove it
		return nil
	}

	if err := os.Remove(l.path); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("removing PID file: %w", err)
	}

	l.path = "" // Prevent double-release
	return nil
}

// readPIDFile reads and parses the PID from a lock file.
func readPIDFile(path string) (int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}

	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return 0, fmt.Errorf("parsing PID: %w", err)
	}

	return pid, nil
}

// isProcessRunning checks if a process with the given PID is running.
func isProcessRunning(pid int) bool {
	process, err := os.FindProcess(pid)
	if err != nil {
		return false
	}

	// On Unix, FindProcess always succeeds. We need to send signal 0 to check.
	err = process.Signal(syscall.Signal(0))
	return err == nil
}
108 internal/pidlock/pidlock_test.go Normal file
@@ -0,0 +1,108 @@
package pidlock

import (
	"os"
	"path/filepath"
	"strconv"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestAcquireAndRelease(t *testing.T) {
	tmpDir := t.TempDir()

	// Acquire lock
	lock, err := Acquire(tmpDir)
	require.NoError(t, err)
	require.NotNil(t, lock)

	// Verify PID file exists with our PID
	data, err := os.ReadFile(filepath.Join(tmpDir, "vaultik.pid"))
	require.NoError(t, err)
	pid, err := strconv.Atoi(string(data))
	require.NoError(t, err)
	assert.Equal(t, os.Getpid(), pid)

	// Release lock
	err = lock.Release()
	require.NoError(t, err)

	// Verify PID file is gone
	_, err = os.Stat(filepath.Join(tmpDir, "vaultik.pid"))
	assert.True(t, os.IsNotExist(err))
}

func TestAcquireBlocksSecondInstance(t *testing.T) {
	tmpDir := t.TempDir()

	// Acquire first lock
	lock1, err := Acquire(tmpDir)
	require.NoError(t, err)
	require.NotNil(t, lock1)
	defer func() { _ = lock1.Release() }()

	// Try to acquire second lock - should fail
	lock2, err := Acquire(tmpDir)
	assert.ErrorIs(t, err, ErrAlreadyRunning)
	assert.Nil(t, lock2)
}

func TestAcquireWithStaleLock(t *testing.T) {
	tmpDir := t.TempDir()

	// Write a stale PID file (PID that doesn't exist)
	stalePID := 999999999 // Unlikely to be a real process
	pidPath := filepath.Join(tmpDir, "vaultik.pid")
	err := os.WriteFile(pidPath, []byte(strconv.Itoa(stalePID)), 0600)
	require.NoError(t, err)

	// Should be able to acquire lock (stale lock is cleaned up)
	lock, err := Acquire(tmpDir)
	require.NoError(t, err)
	require.NotNil(t, lock)
	defer func() { _ = lock.Release() }()

	// Verify our PID is now in the file
	data, err := os.ReadFile(pidPath)
	require.NoError(t, err)
	pid, err := strconv.Atoi(string(data))
	require.NoError(t, err)
	assert.Equal(t, os.Getpid(), pid)
}

func TestReleaseIsIdempotent(t *testing.T) {
	tmpDir := t.TempDir()

	lock, err := Acquire(tmpDir)
	require.NoError(t, err)

	// Release multiple times - should not error
	err = lock.Release()
	require.NoError(t, err)

	err = lock.Release()
	require.NoError(t, err)
}

func TestReleaseNilLock(t *testing.T) {
	var lock *Lock
	err := lock.Release()
	assert.NoError(t, err)
}

func TestAcquireCreatesDirectory(t *testing.T) {
	tmpDir := t.TempDir()
	nestedDir := filepath.Join(tmpDir, "nested", "dir")

	lock, err := Acquire(nestedDir)
	require.NoError(t, err)
	require.NotNil(t, lock)
	defer func() { _ = lock.Release() }()

	// Verify directory was created
	info, err := os.Stat(nestedDir)
	require.NoError(t, err)
	assert.True(t, info.IsDir())
}
326 internal/s3/client.go Normal file
@@ -0,0 +1,326 @@
package s3

import (
	"context"
	"io"
	"sync/atomic"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// Client wraps the AWS S3 client for vaultik operations.
// It provides a simplified interface for S3 operations with automatic
// prefix handling and connection management. All operations are performed
// within the configured bucket and prefix.
type Client struct {
	s3Client *s3.Client
	bucket   string
	prefix   string
	endpoint string
}

// Config contains S3 client configuration.
// All fields are required except Prefix, which defaults to an empty string.
// The Endpoint field should include the protocol (http:// or https://).
type Config struct {
	Endpoint        string
	Bucket          string
	Prefix          string
	AccessKeyID     string
	SecretAccessKey string
	Region          string
}

// NewClient creates a new S3 client with the provided configuration.
// It establishes a connection to the S3-compatible storage service and
// validates the credentials. The client uses static credentials and
// path-style URLs for compatibility with various S3-compatible services.
func NewClient(ctx context.Context, cfg Config) (*Client, error) {
	// Create AWS config
	awsCfg, err := config.LoadDefaultConfig(ctx,
		config.WithRegion(cfg.Region),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
			cfg.AccessKeyID,
			cfg.SecretAccessKey,
			"",
		)),
	)
	if err != nil {
		return nil, err
	}

	// Configure custom endpoint if provided
	s3Opts := func(o *s3.Options) {
		if cfg.Endpoint != "" {
			o.BaseEndpoint = aws.String(cfg.Endpoint)
			o.UsePathStyle = true
		}
	}

	s3Client := s3.NewFromConfig(awsCfg, s3Opts)

	return &Client{
		s3Client: s3Client,
		bucket:   cfg.Bucket,
		prefix:   cfg.Prefix,
		endpoint: cfg.Endpoint,
	}, nil
}

// PutObject uploads an object to S3 with the specified key.
// The key is automatically prefixed with the configured prefix.
// The data parameter should be a reader containing the object data.
// Returns an error if the upload fails.
func (c *Client) PutObject(ctx context.Context, key string, data io.Reader) error {
	fullKey := c.prefix + key
	_, err := c.s3Client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(c.bucket),
		Key:    aws.String(fullKey),
		Body:   data,
	})
	return err
}

// ProgressCallback is called during upload progress with bytes uploaded so far.
// The callback should return an error to cancel the upload.
type ProgressCallback func(bytesUploaded int64) error

// PutObjectWithProgress uploads an object to S3 with progress tracking.
// The key is automatically prefixed with the configured prefix.
// The size parameter must be the exact size of the data to upload.
// The progress callback is called periodically with the number of bytes uploaded.
// Returns an error if the upload fails.
func (c *Client) PutObjectWithProgress(ctx context.Context, key string, data io.Reader, size int64, progress ProgressCallback) error {
	fullKey := c.prefix + key

	// Create an uploader with the S3 client
	uploader := manager.NewUploader(c.s3Client, func(u *manager.Uploader) {
		// Set part size to 10MB for better progress granularity
		u.PartSize = 10 * 1024 * 1024
	})

	// Create a progress reader that tracks upload progress
	pr := &progressReader{
		reader:   data,
		size:     size,
		callback: progress,
		read:     0,
	}

	// Upload the file
	_, err := uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(c.bucket),
		Key:    aws.String(fullKey),
		Body:   pr,
	})

	return err
}
// GetObject downloads an object from S3 with the specified key.
// The key is automatically prefixed with the configured prefix.
// Returns a ReadCloser containing the object data. The caller must
// close the returned reader when done to avoid resource leaks.
func (c *Client) GetObject(ctx context.Context, key string) (io.ReadCloser, error) {
	fullKey := c.prefix + key
	result, err := c.s3Client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(c.bucket),
		Key:    aws.String(fullKey),
	})
	if err != nil {
		return nil, err
	}
	return result.Body, nil
}

// DeleteObject removes an object from S3 with the specified key.
// The key is automatically prefixed with the configured prefix.
// No error is returned if the object doesn't exist.
func (c *Client) DeleteObject(ctx context.Context, key string) error {
	fullKey := c.prefix + key
	_, err := c.s3Client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(c.bucket),
		Key:    aws.String(fullKey),
	})
	return err
}

// ListObjects lists all objects with the given prefix.
// The prefix is combined with the client's configured prefix.
// Returns a slice of object keys with the base prefix removed.
// This method loads all matching keys into memory, so use
// ListObjectsStream for large result sets.
func (c *Client) ListObjects(ctx context.Context, prefix string) ([]string, error) {
	fullPrefix := c.prefix + prefix

	var keys []string
	paginator := s3.NewListObjectsV2Paginator(c.s3Client, &s3.ListObjectsV2Input{
		Bucket: aws.String(c.bucket),
		Prefix: aws.String(fullPrefix),
	})

	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return nil, err
		}

		for _, obj := range page.Contents {
			if obj.Key != nil {
				// Remove the base prefix from the key
				key := *obj.Key
				if len(key) > len(c.prefix) {
					key = key[len(c.prefix):]
				}
				keys = append(keys, key)
			}
		}
	}

	return keys, nil
}
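Both `ListObjects` and `ListObjectsStream` normalize returned keys the same way: S3 hands back keys with the client's base prefix attached, and the slice `key[len(c.prefix):]` strips it before the key is returned to the caller. That logic in isolation (`stripPrefix` is an illustrative name; the real code inlines the slice):

```go
package main

import "fmt"

// stripPrefix mirrors the key normalization in the listing methods:
// drop the base prefix when the key is strictly longer than it,
// otherwise return the key unchanged.
func stripPrefix(prefix, key string) string {
	if len(key) > len(prefix) {
		return key[len(prefix):]
	}
	return key
}

func main() {
	fmt.Println(stripPrefix("backups/", "backups/blobs/abc")) // blobs/abc
	fmt.Println(stripPrefix("", "blobs/abc"))                 // blobs/abc
}
```

Note the guard is a pure length check, not a `strings.HasPrefix` test, so it assumes every listed key genuinely starts with the configured prefix — which holds here because the listing request itself is scoped to that prefix.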
|
||||
// HeadObject checks if an object exists in S3.
|
||||
// Returns true if the object exists, false otherwise.
|
||||
// The key is automatically prefixed with the configured prefix.
|
||||
// Note: This method returns false for any error, not just "not found".
|
||||
func (c *Client) HeadObject(ctx context.Context, key string) (bool, error) {
|
||||
fullKey := c.prefix + key
|
||||
_, err := c.s3Client.HeadObject(ctx, &s3.HeadObjectInput{
|
||||
Bucket: aws.String(c.bucket),
|
||||
Key: aws.String(fullKey),
|
||||
})
|
||||
if err != nil {
|
||||
// Check if it's a not found error
|
||||
// TODO: Add proper error type checking
|
||||
return false, nil
|
||||
}
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// ObjectInfo contains information about an S3 object.
// It is used by ListObjectsStream to return object metadata
// along with any errors encountered during listing.
type ObjectInfo struct {
	Key  string
	Size int64
	Err  error
}

// ListObjectsStream lists objects with the given prefix and returns a channel.
// This method is preferred for large result sets as it streams results
// instead of loading everything into memory. The channel is closed when
// listing is complete or an error occurs. If an error occurs, it will be
// sent as the last item with the Err field set. The recursive parameter
// is currently unused but reserved for future use.
func (c *Client) ListObjectsStream(ctx context.Context, prefix string, recursive bool) <-chan ObjectInfo {
	ch := make(chan ObjectInfo)

	go func() {
		defer close(ch)

		fullPrefix := c.prefix + prefix

		paginator := s3.NewListObjectsV2Paginator(c.s3Client, &s3.ListObjectsV2Input{
			Bucket: aws.String(c.bucket),
			Prefix: aws.String(fullPrefix),
		})

		for paginator.HasMorePages() {
			page, err := paginator.NextPage(ctx)
			if err != nil {
				ch <- ObjectInfo{Err: err}
				return
			}

			for _, obj := range page.Contents {
				if obj.Key != nil && obj.Size != nil {
					// Remove the base prefix from the key
					key := *obj.Key
					if len(key) > len(c.prefix) {
						key = key[len(c.prefix):]
					}
					ch <- ObjectInfo{
						Key:  key,
						Size: *obj.Size,
					}
				}
			}
		}
	}()

	return ch
}

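The doc comment above fixes the channel contract: results stream until the channel is closed, and a failure arrives as the final item with `Err` set. A minimal consumer sketch of that contract (using a stand-in `ObjectInfo` and a pre-filled channel, since the real client is not constructed here):

```go
package main

import "fmt"

// ObjectInfo mirrors the struct returned by ListObjectsStream.
type ObjectInfo struct {
	Key  string
	Size int64
	Err  error
}

// consume drains a listing channel, stopping at the first error.
// Because the producer sends the error as the last item before
// closing the channel, ranging until close is safe.
func consume(ch <-chan ObjectInfo) (total int64, err error) {
	for info := range ch {
		if info.Err != nil {
			return total, info.Err
		}
		total += info.Size
	}
	return total, nil
}

func main() {
	ch := make(chan ObjectInfo, 2)
	ch <- ObjectInfo{Key: "blobs/aa/x", Size: 10}
	ch <- ObjectInfo{Key: "blobs/aa/y", Size: 32}
	close(ch)
	total, err := consume(ch)
	fmt.Println(total, err) // prints: 42 <nil>
}
```

Note that a caller abandoning the loop early would leak the producer goroutine, since the producer blocks on an unbuffered send; cancelling the passed-in context is the way to unblock it.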
// StatObject returns information about an object without downloading it.
// The key is automatically prefixed with the configured prefix.
// Returns an ObjectInfo struct with the object's metadata.
// Returns an error if the object doesn't exist or if the operation fails.
func (c *Client) StatObject(ctx context.Context, key string) (*ObjectInfo, error) {
	fullKey := c.prefix + key
	result, err := c.s3Client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(c.bucket),
		Key:    aws.String(fullKey),
	})
	if err != nil {
		return nil, err
	}

	size := int64(0)
	if result.ContentLength != nil {
		size = *result.ContentLength
	}

	return &ObjectInfo{
		Key:  key,
		Size: size,
	}, nil
}

// RemoveObject deletes an object from S3 (alias for DeleteObject).
// This method exists for API compatibility and simply calls DeleteObject.
func (c *Client) RemoveObject(ctx context.Context, key string) error {
	return c.DeleteObject(ctx, key)
}

// BucketName returns the configured S3 bucket name.
// This is useful for displaying configuration information.
func (c *Client) BucketName() string {
	return c.bucket
}

// Endpoint returns the S3 endpoint URL.
// If no custom endpoint was configured, returns the default AWS S3 endpoint.
// This is useful for displaying configuration information.
func (c *Client) Endpoint() string {
	if c.endpoint == "" {
		return "s3.amazonaws.com"
	}
	return c.endpoint
}

// progressReader wraps an io.Reader to track reading progress
type progressReader struct {
	reader   io.Reader
	size     int64
	read     int64
	callback ProgressCallback
}

// Read implements io.Reader
func (pr *progressReader) Read(p []byte) (int, error) {
	n, err := pr.reader.Read(p)
	if n > 0 {
		atomic.AddInt64(&pr.read, int64(n))
		if pr.callback != nil {
			if callbackErr := pr.callback(atomic.LoadInt64(&pr.read)); callbackErr != nil {
				return n, callbackErr
			}
		}
	}
	return n, err
}
98
internal/s3/client_test.go
Normal file
@@ -0,0 +1,98 @@
package s3_test

import (
	"bytes"
	"context"
	"io"
	"testing"

	"git.eeqj.de/sneak/vaultik/internal/s3"
)

func TestClient(t *testing.T) {
	ts := NewTestServer(t)
	defer func() {
		if err := ts.Cleanup(); err != nil {
			t.Errorf("cleanup failed: %v", err)
		}
	}()

	ctx := context.Background()

	// Create client
	client, err := s3.NewClient(ctx, s3.Config{
		Endpoint:        testEndpoint,
		Bucket:          testBucket,
		Prefix:          "test-prefix/",
		AccessKeyID:     testAccessKey,
		SecretAccessKey: testSecretKey,
		Region:          testRegion,
	})
	if err != nil {
		t.Fatalf("failed to create client: %v", err)
	}

	// Test PutObject
	testKey := "foo/bar.txt"
	testData := []byte("test data")
	err = client.PutObject(ctx, testKey, bytes.NewReader(testData))
	if err != nil {
		t.Fatalf("failed to put object: %v", err)
	}

	// Test GetObject
	reader, err := client.GetObject(ctx, testKey)
	if err != nil {
		t.Fatalf("failed to get object: %v", err)
	}
	defer func() {
		if err := reader.Close(); err != nil {
			t.Errorf("failed to close reader: %v", err)
		}
	}()

	data, err := io.ReadAll(reader)
	if err != nil {
		t.Fatalf("failed to read data: %v", err)
	}

	if !bytes.Equal(data, testData) {
		t.Errorf("data mismatch: got %q, want %q", data, testData)
	}

	// Test HeadObject
	exists, err := client.HeadObject(ctx, testKey)
	if err != nil {
		t.Fatalf("failed to head object: %v", err)
	}
	if !exists {
		t.Error("expected object to exist")
	}

	// Test ListObjects
	keys, err := client.ListObjects(ctx, "foo/")
	if err != nil {
		t.Fatalf("failed to list objects: %v", err)
	}
	if len(keys) != 1 {
		t.Errorf("expected 1 key, got %d", len(keys))
	}
	if keys[0] != testKey {
		t.Errorf("unexpected key: got %s, want %s", keys[0], testKey)
	}

	// Test DeleteObject
	err = client.DeleteObject(ctx, testKey)
	if err != nil {
		t.Fatalf("failed to delete object: %v", err)
	}

	// Verify deletion
	exists, err = client.HeadObject(ctx, testKey)
	if err != nil {
		t.Fatalf("failed to head object after deletion: %v", err)
	}
	if exists {
		t.Error("expected object to not exist after deletion")
	}
}
42
internal/s3/module.go
Normal file
@@ -0,0 +1,42 @@
package s3

import (
	"context"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"go.uber.org/fx"
)

// Module exports S3 functionality as an fx module.
// It provides automatic dependency injection for the S3 client,
// configuring it based on the application's configuration settings.
var Module = fx.Module("s3",
	fx.Provide(
		provideClient,
	),
)

func provideClient(lc fx.Lifecycle, cfg *config.Config) (*Client, error) {
	ctx := context.Background()

	client, err := NewClient(ctx, Config{
		Endpoint:        cfg.S3.Endpoint,
		Bucket:          cfg.S3.Bucket,
		Prefix:          cfg.S3.Prefix,
		AccessKeyID:     cfg.S3.AccessKeyID,
		SecretAccessKey: cfg.S3.SecretAccessKey,
		Region:          cfg.S3.Region,
	})
	if err != nil {
		return nil, err
	}

	lc.Append(fx.Hook{
		OnStop: func(ctx context.Context) error {
			// S3 client doesn't need explicit cleanup
			return nil
		},
	})

	return client, nil
}
306
internal/s3/s3_test.go
Normal file
@@ -0,0 +1,306 @@
package s3_test

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/smithy-go/logging"
	"github.com/johannesboyne/gofakes3"
	"github.com/johannesboyne/gofakes3/backend/s3mem"
)

const (
	testBucket    = "test-bucket"
	testRegion    = "us-east-1"
	testAccessKey = "test-access-key"
	testSecretKey = "test-secret-key"
	testEndpoint  = "http://localhost:9999"
)

// TestServer represents an in-process S3-compatible test server
type TestServer struct {
	server   *http.Server
	backend  gofakes3.Backend
	s3Client *s3.Client
	tempDir  string
	logBuf   *bytes.Buffer
}

// NewTestServer creates and starts a new test server
func NewTestServer(t *testing.T) *TestServer {
	// Create temp directory for any file operations
	tempDir, err := os.MkdirTemp("", "vaultik-s3-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}

	// Create in-memory backend
	backend := s3mem.New()
	faker := gofakes3.New(backend)

	// Create HTTP server
	server := &http.Server{
		Addr:    "localhost:9999",
		Handler: faker.Server(),
	}

	// Start server in background
	go func() {
		if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			t.Logf("test server error: %v", err)
		}
	}()

	// Wait for server to be ready
	time.Sleep(100 * time.Millisecond)

	// Create a buffer to capture logs
	logBuf := &bytes.Buffer{}

	// Create S3 client with custom logger
	cfg, err := config.LoadDefaultConfig(context.Background(),
		config.WithRegion(testRegion),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
			testAccessKey,
			testSecretKey,
			"",
		)),
		config.WithClientLogMode(aws.LogRetries|aws.LogRequestWithBody|aws.LogResponseWithBody),
		config.WithLogger(logging.LoggerFunc(func(classification logging.Classification, format string, v ...interface{}) {
			// Capture logs to buffer instead of stdout
			fmt.Fprintf(logBuf, "SDK %s %s %s\n",
				time.Now().Format("2006/01/02 15:04:05"),
				string(classification),
				fmt.Sprintf(format, v...))
		})),
	)
	if err != nil {
		t.Fatalf("failed to create AWS config: %v", err)
	}

	s3Client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String(testEndpoint)
		o.UsePathStyle = true
	})

	ts := &TestServer{
		server:   server,
		backend:  backend,
		s3Client: s3Client,
		tempDir:  tempDir,
		logBuf:   logBuf,
	}

	// Register cleanup to show logs on test failure
	t.Cleanup(func() {
		if t.Failed() && logBuf.Len() > 0 {
			t.Logf("S3 SDK Debug Output:\n%s", logBuf.String())
		}
	})

	// Create test bucket
	_, err = s3Client.CreateBucket(context.Background(), &s3.CreateBucketInput{
		Bucket: aws.String(testBucket),
	})
	if err != nil {
		t.Fatalf("failed to create test bucket: %v", err)
	}

	return ts
}

// Cleanup shuts down the server and removes temp directory
func (ts *TestServer) Cleanup() error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if err := ts.server.Shutdown(ctx); err != nil {
		return err
	}

	return os.RemoveAll(ts.tempDir)
}

// Client returns the S3 client configured for the test server
func (ts *TestServer) Client() *s3.Client {
	return ts.s3Client
}

// TestBasicS3Operations tests basic store and retrieve operations
func TestBasicS3Operations(t *testing.T) {
	ts := NewTestServer(t)
	defer func() {
		if err := ts.Cleanup(); err != nil {
			t.Errorf("cleanup failed: %v", err)
		}
	}()

	ctx := context.Background()
	client := ts.Client()

	// Test data
	testKey := "test/file.txt"
	testData := []byte("Hello, S3 test!")

	// Put object
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(testBucket),
		Key:    aws.String(testKey),
		Body:   bytes.NewReader(testData),
	})
	if err != nil {
		t.Fatalf("failed to put object: %v", err)
	}

	// Get object
	result, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(testBucket),
		Key:    aws.String(testKey),
	})
	if err != nil {
		t.Fatalf("failed to get object: %v", err)
	}
	defer func() {
		if err := result.Body.Close(); err != nil {
			t.Errorf("failed to close body: %v", err)
		}
	}()

	// Read and verify data
	data, err := io.ReadAll(result.Body)
	if err != nil {
		t.Fatalf("failed to read object body: %v", err)
	}

	if !bytes.Equal(data, testData) {
		t.Errorf("retrieved data mismatch: got %q, want %q", data, testData)
	}
}

// TestBlobOperations tests blob storage patterns for vaultik
func TestBlobOperations(t *testing.T) {
	ts := NewTestServer(t)
	defer func() {
		if err := ts.Cleanup(); err != nil {
			t.Errorf("cleanup failed: %v", err)
		}
	}()

	ctx := context.Background()
	client := ts.Client()

	// Test blob storage with prefix structure
	blobHash := "aabbccddee112233445566778899aabbccddee11"
	blobKey := filepath.Join("blobs", blobHash[:2], blobHash[2:4], blobHash+".zst.age")
	blobData := []byte("compressed and encrypted blob data")

	// Store blob
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(testBucket),
		Key:    aws.String(blobKey),
		Body:   bytes.NewReader(blobData),
	})
	if err != nil {
		t.Fatalf("failed to store blob: %v", err)
	}

	// List objects with prefix
	listResult, err := client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
		Bucket: aws.String(testBucket),
		Prefix: aws.String("blobs/aa/"),
	})
	if err != nil {
		t.Fatalf("failed to list objects: %v", err)
	}

	if len(listResult.Contents) != 1 {
		t.Errorf("expected 1 object, got %d", len(listResult.Contents))
	}

	if listResult.Contents[0].Key != nil && *listResult.Contents[0].Key != blobKey {
		t.Errorf("unexpected key: got %s, want %s", *listResult.Contents[0].Key, blobKey)
	}

	// Delete blob
	_, err = client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(testBucket),
		Key:    aws.String(blobKey),
	})
	if err != nil {
		t.Fatalf("failed to delete blob: %v", err)
	}

	// Verify deletion
	_, err = client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(testBucket),
		Key:    aws.String(blobKey),
	})
	if err == nil {
		t.Error("expected error getting deleted object, got nil")
	}
}

// TestMetadataOperations tests metadata storage patterns
func TestMetadataOperations(t *testing.T) {
	ts := NewTestServer(t)
	defer func() {
		if err := ts.Cleanup(); err != nil {
			t.Errorf("cleanup failed: %v", err)
		}
	}()

	ctx := context.Background()
	client := ts.Client()

	// Test metadata storage
	snapshotID := "2024-01-01T12:00:00Z"
	metadataKey := filepath.Join("metadata", snapshotID+".sqlite.age")
	metadataData := []byte("encrypted sqlite database")

	// Store metadata
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(testBucket),
		Key:    aws.String(metadataKey),
		Body:   bytes.NewReader(metadataData),
	})
	if err != nil {
		t.Fatalf("failed to store metadata: %v", err)
	}

	// Store manifest
	manifestKey := filepath.Join("metadata", snapshotID+".manifest.json.zst")
	manifestData := []byte(`{"snapshot_id":"2024-01-01T12:00:00Z","blob_hashes":["hash1","hash2"]}`)

	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(testBucket),
		Key:    aws.String(manifestKey),
		Body:   bytes.NewReader(manifestData),
	})
	if err != nil {
		t.Fatalf("failed to store manifest: %v", err)
	}

	// List metadata objects
	listResult, err := client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
		Bucket: aws.String(testBucket),
		Prefix: aws.String("metadata/"),
	})
	if err != nil {
		t.Fatalf("failed to list metadata: %v", err)
	}

	if len(listResult.Contents) != 2 {
		t.Errorf("expected 2 metadata objects, got %d", len(listResult.Contents))
	}
}
532
internal/snapshot/backup_test.go
Normal file
@@ -0,0 +1,532 @@
package snapshot

import (
	"context"
	"crypto/sha256"
	"database/sql"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
	"testing"
	"testing/fstest"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/database"
)

// MockS3Client is a mock implementation of S3 operations for testing
type MockS3Client struct {
	storage map[string][]byte
}

func NewMockS3Client() *MockS3Client {
	return &MockS3Client{
		storage: make(map[string][]byte),
	}
}

func (m *MockS3Client) PutBlob(ctx context.Context, hash string, data []byte) error {
	m.storage[hash] = data
	return nil
}

func (m *MockS3Client) GetBlob(ctx context.Context, hash string) ([]byte, error) {
	data, ok := m.storage[hash]
	if !ok {
		return nil, fmt.Errorf("blob not found: %s", hash)
	}
	return data, nil
}

func (m *MockS3Client) BlobExists(ctx context.Context, hash string) (bool, error) {
	_, ok := m.storage[hash]
	return ok, nil
}

func (m *MockS3Client) CreateBucket(ctx context.Context, bucket string) error {
	return nil
}

func TestBackupWithInMemoryFS(t *testing.T) {
	// Create a temporary directory for the database
	tempDir := t.TempDir()
	dbPath := filepath.Join(tempDir, "test.db")

	// Create test filesystem
	testFS := fstest.MapFS{
		"file1.txt": &fstest.MapFile{
			Data:    []byte("Hello, World!"),
			Mode:    0644,
			ModTime: time.Now(),
		},
		"dir1/file2.txt": &fstest.MapFile{
			Data:    []byte("This is a test file with some content."),
			Mode:    0755,
			ModTime: time.Now(),
		},
		"dir1/subdir/file3.txt": &fstest.MapFile{
			Data:    []byte("Another file in a subdirectory."),
			Mode:    0600,
			ModTime: time.Now(),
		},
		"largefile.bin": &fstest.MapFile{
			Data:    generateLargeFileContent(10 * 1024 * 1024), // 10MB file with varied content
			Mode:    0644,
			ModTime: time.Now(),
		},
	}

	// Initialize the database
	ctx := context.Background()
	db, err := database.New(ctx, dbPath)
	if err != nil {
		t.Fatalf("Failed to create database: %v", err)
	}
	defer func() {
		if err := db.Close(); err != nil {
			t.Logf("Failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create mock S3 client
	s3Client := NewMockS3Client()

	// Run backup
	backupEngine := &BackupEngine{
		repos:    repos,
		s3Client: s3Client,
	}

	snapshotID, err := backupEngine.Backup(ctx, testFS, ".")
	if err != nil {
		t.Fatalf("Backup failed: %v", err)
	}

	// Verify snapshot was created
	snapshot, err := repos.Snapshots.GetByID(ctx, snapshotID)
	if err != nil {
		t.Fatalf("Failed to get snapshot: %v", err)
	}

	if snapshot == nil {
		t.Fatal("Snapshot not found")
	}

	if snapshot.FileCount == 0 {
		t.Error("Expected snapshot to have files")
	}

	// Verify files in database
	files, err := repos.Files.ListByPrefix(ctx, "")
	if err != nil {
		t.Fatalf("Failed to list files: %v", err)
	}

	expectedFiles := map[string]bool{
		"file1.txt":             true,
		"dir1/file2.txt":        true,
		"dir1/subdir/file3.txt": true,
		"largefile.bin":         true,
	}

	if len(files) != len(expectedFiles) {
		t.Errorf("Expected %d files, got %d", len(expectedFiles), len(files))
	}

	for _, file := range files {
		if !expectedFiles[file.Path] {
			t.Errorf("Unexpected file in database: %s", file.Path)
		}
		delete(expectedFiles, file.Path)

		// Verify file metadata
		fsFile := testFS[file.Path]
		if fsFile == nil {
			t.Errorf("File %s not found in test filesystem", file.Path)
			continue
		}

		if file.Size != int64(len(fsFile.Data)) {
			t.Errorf("File %s: expected size %d, got %d", file.Path, len(fsFile.Data), file.Size)
		}

		if file.Mode != uint32(fsFile.Mode) {
			t.Errorf("File %s: expected mode %o, got %o", file.Path, fsFile.Mode, file.Mode)
		}
	}

	if len(expectedFiles) > 0 {
		t.Errorf("Files not found in database: %v", expectedFiles)
	}

	// Verify chunks
	chunks, err := repos.Chunks.List(ctx)
	if err != nil {
		t.Fatalf("Failed to list chunks: %v", err)
	}

	if len(chunks) == 0 {
		t.Error("No chunks found in database")
	}

	// The large file should create 10 chunks (10MB / 1MB chunk size)
	// Plus the small files
	minExpectedChunks := 10 + 3
	if len(chunks) < minExpectedChunks {
		t.Errorf("Expected at least %d chunks, got %d", minExpectedChunks, len(chunks))
	}

	// Verify at least one blob was created and uploaded
	// We can't list blobs directly, but we can check via snapshot blobs
	blobHashes, err := repos.Snapshots.GetBlobHashes(ctx, snapshotID)
	if err != nil {
		t.Fatalf("Failed to get blob hashes: %v", err)
	}
	if len(blobHashes) == 0 {
		t.Error("Expected at least one blob to be created")
	}

	for _, blobHash := range blobHashes {
		// Check blob exists in mock S3
		exists, err := s3Client.BlobExists(ctx, blobHash)
		if err != nil {
			t.Errorf("Failed to check blob %s: %v", blobHash, err)
		}
		if !exists {
			t.Errorf("Blob %s not found in S3", blobHash)
		}
	}
}

func TestBackupDeduplication(t *testing.T) {
	// Create a temporary directory for the database
	tempDir := t.TempDir()
	dbPath := filepath.Join(tempDir, "test.db")

	// Create test filesystem with duplicate content
	testFS := fstest.MapFS{
		"file1.txt": &fstest.MapFile{
			Data:    []byte("Duplicate content"),
			Mode:    0644,
			ModTime: time.Now(),
		},
		"file2.txt": &fstest.MapFile{
			Data:    []byte("Duplicate content"),
			Mode:    0644,
			ModTime: time.Now(),
		},
		"file3.txt": &fstest.MapFile{
			Data:    []byte("Unique content"),
			Mode:    0644,
			ModTime: time.Now(),
		},
	}

	// Initialize the database
	ctx := context.Background()
	db, err := database.New(ctx, dbPath)
	if err != nil {
		t.Fatalf("Failed to create database: %v", err)
	}
	defer func() {
		if err := db.Close(); err != nil {
			t.Logf("Failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create mock S3 client
	s3Client := NewMockS3Client()

	// Run backup
	backupEngine := &BackupEngine{
		repos:    repos,
		s3Client: s3Client,
	}

	_, err = backupEngine.Backup(ctx, testFS, ".")
	if err != nil {
		t.Fatalf("Backup failed: %v", err)
	}

	// Verify deduplication
	chunks, err := repos.Chunks.List(ctx)
	if err != nil {
		t.Fatalf("Failed to list chunks: %v", err)
	}

	// Should have only 2 unique chunks (duplicate content + unique content)
	if len(chunks) != 2 {
		t.Errorf("Expected 2 unique chunks, got %d", len(chunks))
	}

	// Verify chunk references
	for _, chunk := range chunks {
		files, err := repos.ChunkFiles.GetByChunkHash(ctx, chunk.ChunkHash)
		if err != nil {
			t.Errorf("Failed to get files for chunk %s: %v", chunk.ChunkHash, err)
		}

		// The duplicate content chunk should be referenced by 2 files
		if chunk.Size == int64(len("Duplicate content")) && len(files) != 2 {
			t.Errorf("Expected duplicate chunk to be referenced by 2 files, got %d", len(files))
		}
	}
}

// BackupEngine performs backup operations
type BackupEngine struct {
	repos    *database.Repositories
	s3Client interface {
		PutBlob(ctx context.Context, hash string, data []byte) error
		BlobExists(ctx context.Context, hash string) (bool, error)
	}
}

// Backup performs a backup of the given filesystem
func (b *BackupEngine) Backup(ctx context.Context, fsys fs.FS, root string) (string, error) {
	// Create a new snapshot
	hostname, _ := os.Hostname()
	snapshotID := time.Now().Format(time.RFC3339)
	snapshot := &database.Snapshot{
		ID:             snapshotID,
		Hostname:       hostname,
		VaultikVersion: "test",
		StartedAt:      time.Now(),
		CompletedAt:    nil,
	}

	// Create initial snapshot record
	err := b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		return b.repos.Snapshots.Create(ctx, tx, snapshot)
	})
	if err != nil {
		return "", err
	}

	// Track counters
	var fileCount, chunkCount, blobCount, totalSize, blobSize int64

	// Track which chunks we've seen to handle deduplication
	processedChunks := make(map[string]bool)

	// Scan the filesystem and process files
	err = fs.WalkDir(fsys, root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}

		// Skip directories
		if d.IsDir() {
			return nil
		}

		// Get file info
		info, err := d.Info()
		if err != nil {
			return err
		}

		// Handle symlinks
		if info.Mode()&fs.ModeSymlink != 0 {
			// For testing, we'll skip symlinks since fstest doesn't support them well
			return nil
		}

		// Create file record in a short transaction
		file := &database.File{
			Path:  path,
			Size:  info.Size(),
			Mode:  uint32(info.Mode()),
			MTime: info.ModTime(),
			CTime: info.ModTime(), // Use mtime as ctime for test
			UID:   1000,           // Default UID for test
			GID:   1000,           // Default GID for test
		}
		err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
			return b.repos.Files.Create(ctx, tx, file)
		})
		if err != nil {
			return err
		}

		fileCount++
		totalSize += info.Size()

		// Read and process file in chunks
		f, err := fsys.Open(path)
		if err != nil {
			return err
		}
		defer func() {
			if err := f.Close(); err != nil {
				// Log but don't fail since we're already in an error path potentially
				fmt.Fprintf(os.Stderr, "Failed to close file: %v\n", err)
			}
		}()

		// Process file in chunks
		chunkIndex := 0
		buffer := make([]byte, defaultChunkSize)

		for {
			n, err := f.Read(buffer)
			if err != nil && err != io.EOF {
				return err
			}
			if n == 0 {
				break
			}

			chunkData := buffer[:n]
			chunkHash := calculateHash(chunkData)

			// Check if chunk already exists (outside of transaction)
			existingChunk, _ := b.repos.Chunks.GetByHash(ctx, chunkHash)
			if existingChunk == nil {
				// Create new chunk in a short transaction
				err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
					chunk := &database.Chunk{
						ChunkHash: chunkHash,
						Size:      int64(n),
					}
					return b.repos.Chunks.Create(ctx, tx, chunk)
				})
				if err != nil {
					return err
				}
				processedChunks[chunkHash] = true
			}

			// Create file-chunk mapping in a short transaction
			err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
				fileChunk := &database.FileChunk{
					FileID:    file.ID,
					Idx:       chunkIndex,
					ChunkHash: chunkHash,
				}
				return b.repos.FileChunks.Create(ctx, tx, fileChunk)
			})
			if err != nil {
				return err
			}

			// Create chunk-file mapping in a short transaction
			err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
				chunkFile := &database.ChunkFile{
					ChunkHash:  chunkHash,
					FileID:     file.ID,
					FileOffset: int64(chunkIndex * defaultChunkSize),
					Length:     int64(n),
				}
				return b.repos.ChunkFiles.Create(ctx, tx, chunkFile)
			})
			if err != nil {
				return err
			}

			chunkIndex++
		}

		return nil
	})

	if err != nil {
		return "", err
	}

	// After all files are processed, create blobs for new chunks
	for chunkHash := range processedChunks {
		// Get chunk data (outside of transaction)
		chunk, err := b.repos.Chunks.GetByHash(ctx, chunkHash)
		if err != nil {
			return "", err
		}

		chunkCount++

		// In a real system, blobs would contain multiple chunks and be encrypted
		// For testing, we'll create a blob with a "blob-" prefix to differentiate
		blobHash := "blob-" + chunkHash

		// For the test, we'll create dummy data since we don't have the original
		dummyData := []byte(chunkHash)

		// Upload to S3 as a blob
		if err := b.s3Client.PutBlob(ctx, blobHash, dummyData); err != nil {
			return "", err
		}

		// Create blob entry in a short transaction
		err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
			blob := &database.Blob{
				ID:        "test-blob-" + blobHash[:8],
				Hash:      blobHash,
				CreatedTS: time.Now(),
			}
			return b.repos.Blobs.Create(ctx, tx, blob)
		})
		if err != nil {
			return "", err
		}

		blobCount++
		blobSize += chunk.Size

		// Create blob-chunk mapping in a short transaction
		err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
			blobChunk := &database.BlobChunk{
				BlobID:    "test-blob-" + blobHash[:8],
				ChunkHash: chunkHash,
				Offset:    0,
				Length:    chunk.Size,
			}
			return b.repos.BlobChunks.Create(ctx, tx, blobChunk)
		})
		if err != nil {
			return "", err
		}

		// Add blob to snapshot in a short transaction
		err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
			return b.repos.Snapshots.AddBlob(ctx, tx, snapshotID, "test-blob-"+blobHash[:8], blobHash)
		})
		if err != nil {
			return "", err
		}
	}

	// Update snapshot with final counts
	err = b.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		return b.repos.Snapshots.UpdateCounts(ctx, tx, snapshotID, fileCount, chunkCount, blobCount, totalSize, blobSize)
	})

	if err != nil {
		return "", err
	}

	return snapshotID, nil
}

func calculateHash(data []byte) string {
|
||||
h := sha256.New()
|
||||
h.Write(data)
|
||||
return fmt.Sprintf("%x", h.Sum(nil))
|
||||
}
|
||||
|
||||
func generateLargeFileContent(size int) []byte {
|
||||
data := make([]byte, size)
|
||||
// Fill with pattern that changes every chunk to avoid deduplication
|
||||
for i := 0; i < size; i++ {
|
||||
chunkNum := i / defaultChunkSize
|
||||
data[i] = byte((i + chunkNum) % 256)
|
||||
}
|
||||
return data
|
||||
}
|
||||
|
||||
const defaultChunkSize = 1024 * 1024 // 1MB chunks
|
||||
**internal/snapshot/file_change_test.go** (new file, 237 lines)

```go
package snapshot_test

import (
	"context"
	"database/sql"
	"testing"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/snapshot"
	"github.com/spf13/afero"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// TestFileContentChange verifies that when a file's content changes,
// the old chunks are properly disassociated
func TestFileContentChange(t *testing.T) {
	// Initialize logger for tests
	log.Initialize(log.Config{})

	// Create in-memory filesystem
	fs := afero.NewMemMapFs()

	// Create initial file
	err := afero.WriteFile(fs, "/test.txt", []byte("Initial content"), 0644)
	require.NoError(t, err)

	// Create test database
	db, err := database.NewTestDB()
	require.NoError(t, err)
	defer func() {
		if err := db.Close(); err != nil {
			t.Errorf("failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create scanner
	scanner := snapshot.NewScanner(snapshot.ScannerConfig{
		FS:               fs,
		ChunkSize:        int64(1024 * 16), // 16KB chunks for testing
		Repositories:     repos,
		MaxBlobSize:      int64(1024 * 1024), // 1MB blobs
		CompressionLevel: 3,
		AgeRecipients:    []string{"age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"}, // Test public key
	})

	// Create first snapshot
	ctx := context.Background()
	snapshotID1 := "snapshot1"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:             snapshotID1,
			Hostname:       "test-host",
			VaultikVersion: "test",
			StartedAt:      time.Now(),
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	require.NoError(t, err)

	// First scan - should create chunks for initial content
	result1, err := scanner.Scan(ctx, "/", snapshotID1)
	require.NoError(t, err)
	t.Logf("First scan: %d files scanned", result1.FilesScanned)

	// Get file chunks from first scan
	fileChunks1, err := repos.FileChunks.GetByPath(ctx, "/test.txt")
	require.NoError(t, err)
	assert.Len(t, fileChunks1, 1) // Small file = 1 chunk
	oldChunkHash := fileChunks1[0].ChunkHash

	// Get chunk files from first scan
	chunkFiles1, err := repos.ChunkFiles.GetByFilePath(ctx, "/test.txt")
	require.NoError(t, err)
	assert.Len(t, chunkFiles1, 1)

	// Modify the file
	time.Sleep(10 * time.Millisecond) // Ensure mtime changes
	err = afero.WriteFile(fs, "/test.txt", []byte("Modified content with different data"), 0644)
	require.NoError(t, err)

	// Create second snapshot
	snapshotID2 := "snapshot2"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:             snapshotID2,
			Hostname:       "test-host",
			VaultikVersion: "test",
			StartedAt:      time.Now(),
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	require.NoError(t, err)

	// Second scan - should create new chunks and remove old associations
	result2, err := scanner.Scan(ctx, "/", snapshotID2)
	require.NoError(t, err)
	t.Logf("Second scan: %d files scanned", result2.FilesScanned)

	// Get file chunks from second scan
	fileChunks2, err := repos.FileChunks.GetByPath(ctx, "/test.txt")
	require.NoError(t, err)
	assert.Len(t, fileChunks2, 1) // Still 1 chunk but different hash
	newChunkHash := fileChunks2[0].ChunkHash

	// Verify the chunk hashes are different
	assert.NotEqual(t, oldChunkHash, newChunkHash, "Chunk hash should change when content changes")

	// Get chunk files from second scan
	chunkFiles2, err := repos.ChunkFiles.GetByFilePath(ctx, "/test.txt")
	require.NoError(t, err)
	assert.Len(t, chunkFiles2, 1)
	assert.Equal(t, newChunkHash, chunkFiles2[0].ChunkHash)

	// Verify old chunk still exists (it's still valid data)
	oldChunk, err := repos.Chunks.GetByHash(ctx, oldChunkHash)
	require.NoError(t, err)
	assert.NotNil(t, oldChunk)

	// Verify new chunk exists
	newChunk, err := repos.Chunks.GetByHash(ctx, newChunkHash)
	require.NoError(t, err)
	assert.NotNil(t, newChunk)

	// Verify that chunk_files for old chunk no longer references this file
	oldChunkFiles, err := repos.ChunkFiles.GetByChunkHash(ctx, oldChunkHash)
	require.NoError(t, err)
	for _, cf := range oldChunkFiles {
		file, err := repos.Files.GetByID(ctx, cf.FileID)
		require.NoError(t, err)
		assert.NotEqual(t, "/data/test.txt", file.Path, "Old chunk should not be associated with the modified file")
	}
}

// TestMultipleFileChanges verifies handling of multiple file changes in one scan
func TestMultipleFileChanges(t *testing.T) {
	// Initialize logger for tests
	log.Initialize(log.Config{})

	// Create in-memory filesystem
	fs := afero.NewMemMapFs()

	// Create initial files
	files := map[string]string{
		"/file1.txt": "Content 1",
		"/file2.txt": "Content 2",
		"/file3.txt": "Content 3",
	}

	for path, content := range files {
		err := afero.WriteFile(fs, path, []byte(content), 0644)
		require.NoError(t, err)
	}

	// Create test database
	db, err := database.NewTestDB()
	require.NoError(t, err)
	defer func() {
		if err := db.Close(); err != nil {
			t.Errorf("failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create scanner
	scanner := snapshot.NewScanner(snapshot.ScannerConfig{
		FS:               fs,
		ChunkSize:        int64(1024 * 16), // 16KB chunks for testing
		Repositories:     repos,
		MaxBlobSize:      int64(1024 * 1024), // 1MB blobs
		CompressionLevel: 3,
		AgeRecipients:    []string{"age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"}, // Test public key
	})

	// Create first snapshot
	ctx := context.Background()
	snapshotID1 := "snapshot1"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:             snapshotID1,
			Hostname:       "test-host",
			VaultikVersion: "test",
			StartedAt:      time.Now(),
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	require.NoError(t, err)

	// First scan
	result1, err := scanner.Scan(ctx, "/", snapshotID1)
	require.NoError(t, err)
	// Only regular files are counted, not directories
	assert.Equal(t, 3, result1.FilesScanned)

	// Modify two files
	time.Sleep(10 * time.Millisecond) // Ensure mtime changes
	err = afero.WriteFile(fs, "/file1.txt", []byte("Modified content 1"), 0644)
	require.NoError(t, err)
	err = afero.WriteFile(fs, "/file3.txt", []byte("Modified content 3"), 0644)
	require.NoError(t, err)

	// Create second snapshot
	snapshotID2 := "snapshot2"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:             snapshotID2,
			Hostname:       "test-host",
			VaultikVersion: "test",
			StartedAt:      time.Now(),
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	require.NoError(t, err)

	// Second scan
	result2, err := scanner.Scan(ctx, "/", snapshotID2)
	require.NoError(t, err)

	// Only regular files are counted, not directories
	assert.Equal(t, 3, result2.FilesScanned)

	// Verify each file has exactly one set of chunks
	for path := range files {
		fileChunks, err := repos.FileChunks.GetByPath(ctx, path)
		require.NoError(t, err)
		assert.Len(t, fileChunks, 1, "File %s should have exactly 1 chunk association", path)

		chunkFiles, err := repos.ChunkFiles.GetByFilePath(ctx, path)
		require.NoError(t, err)
		assert.Len(t, chunkFiles, 1, "File %s should have exactly 1 chunk-file association", path)
	}
}
```
**internal/snapshot/manifest.go** (new file, 70 lines)

```go
package snapshot

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"

	"github.com/klauspost/compress/zstd"
)

// Manifest represents the structure of a snapshot's blob manifest
type Manifest struct {
	SnapshotID          string     `json:"snapshot_id"`
	Timestamp           string     `json:"timestamp"`
	BlobCount           int        `json:"blob_count"`
	TotalCompressedSize int64      `json:"total_compressed_size"`
	Blobs               []BlobInfo `json:"blobs"`
}

// BlobInfo represents information about a single blob in the manifest
type BlobInfo struct {
	Hash           string `json:"hash"`
	CompressedSize int64  `json:"compressed_size"`
}

// DecodeManifest decodes a manifest from a reader containing compressed JSON
func DecodeManifest(r io.Reader) (*Manifest, error) {
	// Decompress using zstd
	zr, err := zstd.NewReader(r)
	if err != nil {
		return nil, fmt.Errorf("creating zstd reader: %w", err)
	}
	defer zr.Close()

	// Decode JSON manifest
	var manifest Manifest
	if err := json.NewDecoder(zr).Decode(&manifest); err != nil {
		return nil, fmt.Errorf("decoding manifest: %w", err)
	}

	return &manifest, nil
}

// EncodeManifest encodes a manifest to compressed JSON
func EncodeManifest(manifest *Manifest, compressionLevel int) ([]byte, error) {
	// Marshal to JSON
	jsonData, err := json.MarshalIndent(manifest, "", "  ")
	if err != nil {
		return nil, fmt.Errorf("marshaling manifest: %w", err)
	}

	// Compress using zstd
	var compressedBuf bytes.Buffer
	writer, err := zstd.NewWriter(&compressedBuf, zstd.WithEncoderLevel(zstd.EncoderLevelFromZstd(compressionLevel)))
	if err != nil {
		return nil, fmt.Errorf("creating zstd writer: %w", err)
	}

	if _, err := writer.Write(jsonData); err != nil {
		_ = writer.Close()
		return nil, fmt.Errorf("writing compressed data: %w", err)
	}

	if err := writer.Close(); err != nil {
		return nil, fmt.Errorf("closing zstd writer: %w", err)
	}

	return compressedBuf.Bytes(), nil
}
```
**internal/snapshot/module.go** (new file, 43 lines)

```go
package snapshot

import (
	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/storage"
	"github.com/spf13/afero"
	"go.uber.org/fx"
)

// ScannerParams holds parameters for scanner creation
type ScannerParams struct {
	EnableProgress bool
	Fs             afero.Fs
}

// Module exports backup functionality as an fx module.
// It provides a ScannerFactory that can create Scanner instances
// with custom parameters while sharing common dependencies.
var Module = fx.Module("backup",
	fx.Provide(
		provideScannerFactory,
		NewSnapshotManager,
	),
)

// ScannerFactory creates scanners with custom parameters
type ScannerFactory func(params ScannerParams) *Scanner

func provideScannerFactory(cfg *config.Config, repos *database.Repositories, storer storage.Storer) ScannerFactory {
	return func(params ScannerParams) *Scanner {
		return NewScanner(ScannerConfig{
			FS:               params.Fs,
			ChunkSize:        cfg.ChunkSize.Int64(),
			Repositories:     repos,
			Storage:          storer,
			MaxBlobSize:      cfg.BlobSizeLimit.Int64(),
			CompressionLevel: cfg.CompressionLevel,
			AgeRecipients:    cfg.AgeRecipients,
			EnableProgress:   params.EnableProgress,
		})
	}
}
```
**internal/snapshot/progress.go** (new file, 419 lines)

```go
package snapshot

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"sync"
	"sync/atomic"
	"syscall"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/log"
	"github.com/dustin/go-humanize"
)

const (
	// SummaryInterval defines how often one-line status updates are printed.
	// These updates show current progress, ETA, and the file being processed.
	SummaryInterval = 10 * time.Second

	// DetailInterval defines how often multi-line detailed status reports are printed.
	// These reports include comprehensive statistics about files, chunks, blobs, and uploads.
	DetailInterval = 60 * time.Second

	// UploadProgressInterval defines how often upload progress messages are logged.
	UploadProgressInterval = 15 * time.Second
)

// ProgressStats holds atomic counters for progress tracking
type ProgressStats struct {
	FilesScanned     atomic.Int64 // Total files seen during scan (includes skipped)
	FilesProcessed   atomic.Int64 // Files actually processed in phase 2
	FilesSkipped     atomic.Int64 // Files skipped due to no changes
	BytesScanned     atomic.Int64 // Bytes from new/changed files only
	BytesSkipped     atomic.Int64 // Bytes from unchanged files
	BytesProcessed   atomic.Int64 // Actual bytes processed (for ETA calculation)
	ChunksCreated    atomic.Int64
	BlobsCreated     atomic.Int64
	BlobsUploaded    atomic.Int64
	BytesUploaded    atomic.Int64
	UploadDurationMs atomic.Int64 // Total milliseconds spent uploading to S3
	CurrentFile      atomic.Value // stores string
	TotalSize        atomic.Int64 // Total size to process (set after scan phase)
	TotalFiles       atomic.Int64 // Total files to process in phase 2
	ProcessStartTime atomic.Value // stores time.Time when processing starts
	StartTime        time.Time
	mu               sync.RWMutex
	lastDetailTime   time.Time

	// Upload tracking
	CurrentUpload    atomic.Value // stores *UploadInfo
	lastChunkingTime time.Time    // Track when we last showed chunking progress
}

// UploadInfo tracks current upload progress
type UploadInfo struct {
	BlobHash    string
	Size        int64
	StartTime   time.Time
	LastLogTime time.Time
}

// ProgressReporter handles periodic progress reporting
type ProgressReporter struct {
	stats         *ProgressStats
	ctx           context.Context
	cancel        context.CancelFunc
	wg            sync.WaitGroup
	detailTicker  *time.Ticker
	summaryTicker *time.Ticker
	sigChan       chan os.Signal
}

// NewProgressReporter creates a new progress reporter
func NewProgressReporter() *ProgressReporter {
	stats := &ProgressStats{
		StartTime:      time.Now().UTC(),
		lastDetailTime: time.Now().UTC(),
	}
	stats.CurrentFile.Store("")

	ctx, cancel := context.WithCancel(context.Background())

	pr := &ProgressReporter{
		stats:         stats,
		ctx:           ctx,
		cancel:        cancel,
		summaryTicker: time.NewTicker(SummaryInterval),
		detailTicker:  time.NewTicker(DetailInterval),
		sigChan:       make(chan os.Signal, 1),
	}

	// Register for SIGUSR1
	signal.Notify(pr.sigChan, syscall.SIGUSR1)

	return pr
}

// Start begins the progress reporting
func (pr *ProgressReporter) Start() {
	pr.wg.Add(1)
	go pr.run()

	// Print initial multi-line status
	pr.printDetailedStatus()
}

// Stop stops the progress reporting
func (pr *ProgressReporter) Stop() {
	pr.cancel()
	pr.summaryTicker.Stop()
	pr.detailTicker.Stop()
	signal.Stop(pr.sigChan)
	close(pr.sigChan)
	pr.wg.Wait()
}

// GetStats returns the progress stats for updating
func (pr *ProgressReporter) GetStats() *ProgressStats {
	return pr.stats
}

// SetTotalSize sets the total size to process (after scan phase)
func (pr *ProgressReporter) SetTotalSize(size int64) {
	pr.stats.TotalSize.Store(size)
	pr.stats.ProcessStartTime.Store(time.Now().UTC())
}

// run is the main progress reporting loop
func (pr *ProgressReporter) run() {
	defer pr.wg.Done()

	for {
		select {
		case <-pr.ctx.Done():
			return
		case <-pr.summaryTicker.C:
			pr.printSummaryStatus()
		case <-pr.detailTicker.C:
			pr.printDetailedStatus()
		case <-pr.sigChan:
			// SIGUSR1 received, print detailed status
			log.Info("SIGUSR1 received, printing detailed status")
			pr.printDetailedStatus()
		}
	}
}

// printSummaryStatus prints a one-line status update
func (pr *ProgressReporter) printSummaryStatus() {
	// Check if we're currently uploading
	if uploadInfo, ok := pr.stats.CurrentUpload.Load().(*UploadInfo); ok && uploadInfo != nil {
		// Show upload progress instead
		pr.printUploadProgress(uploadInfo)
		return
	}

	// Only show chunking progress if we've done chunking recently
	pr.stats.mu.RLock()
	timeSinceLastChunk := time.Since(pr.stats.lastChunkingTime)
	pr.stats.mu.RUnlock()

	if timeSinceLastChunk > SummaryInterval*2 {
		// No recent chunking activity, don't show progress
		return
	}

	elapsed := time.Since(pr.stats.StartTime)
	bytesScanned := pr.stats.BytesScanned.Load()
	bytesSkipped := pr.stats.BytesSkipped.Load()
	bytesProcessed := pr.stats.BytesProcessed.Load()
	totalSize := pr.stats.TotalSize.Load()
	currentFile := pr.stats.CurrentFile.Load().(string)

	// Calculate ETA if we have total size and are processing
	etaStr := ""
	if totalSize > 0 && bytesProcessed > 0 {
		processStart, ok := pr.stats.ProcessStartTime.Load().(time.Time)
		if ok && !processStart.IsZero() {
			processElapsed := time.Since(processStart)
			rate := float64(bytesProcessed) / processElapsed.Seconds()
			if rate > 0 {
				remainingBytes := totalSize - bytesProcessed
				remainingSeconds := float64(remainingBytes) / rate
				eta := time.Duration(remainingSeconds * float64(time.Second))
				etaStr = fmt.Sprintf(" | ETA: %s", formatDuration(eta))
			}
		}
	}

	rate := float64(bytesScanned+bytesSkipped) / elapsed.Seconds()

	// Show files processed / total files to process
	filesProcessed := pr.stats.FilesProcessed.Load()
	totalFiles := pr.stats.TotalFiles.Load()

	status := fmt.Sprintf("Snapshot progress: %d/%d files, %s/%s (%.1f%%), %s/s%s",
		filesProcessed,
		totalFiles,
		humanize.Bytes(uint64(bytesProcessed)),
		humanize.Bytes(uint64(totalSize)),
		float64(bytesProcessed)/float64(totalSize)*100,
		humanize.Bytes(uint64(rate)),
		etaStr,
	)

	if currentFile != "" {
		status += fmt.Sprintf(" | Current: %s", truncatePath(currentFile, 40))
	}

	log.Info(status)
}

// printDetailedStatus prints a multi-line detailed status
func (pr *ProgressReporter) printDetailedStatus() {
	pr.stats.mu.Lock()
	pr.stats.lastDetailTime = time.Now().UTC()
	pr.stats.mu.Unlock()

	elapsed := time.Since(pr.stats.StartTime)
	filesScanned := pr.stats.FilesScanned.Load()
	filesSkipped := pr.stats.FilesSkipped.Load()
	bytesScanned := pr.stats.BytesScanned.Load()
	bytesSkipped := pr.stats.BytesSkipped.Load()
	bytesProcessed := pr.stats.BytesProcessed.Load()
	totalSize := pr.stats.TotalSize.Load()
	chunksCreated := pr.stats.ChunksCreated.Load()
	blobsCreated := pr.stats.BlobsCreated.Load()
	blobsUploaded := pr.stats.BlobsUploaded.Load()
	bytesUploaded := pr.stats.BytesUploaded.Load()
	currentFile := pr.stats.CurrentFile.Load().(string)

	totalBytes := bytesScanned + bytesSkipped
	rate := float64(totalBytes) / elapsed.Seconds()

	log.Notice("=== Snapshot Progress Report ===")
	log.Info("Elapsed time", "duration", formatDuration(elapsed))

	// Calculate and show ETA if we have data
	if totalSize > 0 && bytesProcessed > 0 {
		processStart, ok := pr.stats.ProcessStartTime.Load().(time.Time)
		if ok && !processStart.IsZero() {
			processElapsed := time.Since(processStart)
			processRate := float64(bytesProcessed) / processElapsed.Seconds()
			if processRate > 0 {
				remainingBytes := totalSize - bytesProcessed
				remainingSeconds := float64(remainingBytes) / processRate
				eta := time.Duration(remainingSeconds * float64(time.Second))
				percentComplete := float64(bytesProcessed) / float64(totalSize) * 100
				log.Info("Overall progress",
					"percent", fmt.Sprintf("%.1f%%", percentComplete),
					"processed", humanize.Bytes(uint64(bytesProcessed)),
					"total", humanize.Bytes(uint64(totalSize)),
					"rate", humanize.Bytes(uint64(processRate))+"/s",
					"eta", formatDuration(eta))
			}
		}
	}

	log.Info("Files processed",
		"scanned", filesScanned,
		"skipped", filesSkipped,
		"total", filesScanned,
		"skip_rate", formatPercent(filesSkipped, filesScanned))
	log.Info("Data scanned",
		"new", humanize.Bytes(uint64(bytesScanned)),
		"skipped", humanize.Bytes(uint64(bytesSkipped)),
		"total", humanize.Bytes(uint64(totalBytes)),
		"scan_rate", humanize.Bytes(uint64(rate))+"/s")
	log.Info("Chunks created", "count", chunksCreated)
	log.Info("Blobs status",
		"created", blobsCreated,
		"uploaded", blobsUploaded,
		"pending", blobsCreated-blobsUploaded)
	log.Info("Total uploaded to S3",
		"uploaded", humanize.Bytes(uint64(bytesUploaded)),
		"compression_ratio", formatRatio(bytesUploaded, bytesScanned))
	if currentFile != "" {
		log.Info("Current file", "path", currentFile)
	}
	log.Notice("=============================")
}

// Helper functions

func formatDuration(d time.Duration) string {
	if d < 0 {
		return "unknown"
	}
	if d < time.Minute {
		return fmt.Sprintf("%ds", int(d.Seconds()))
	}
	if d < time.Hour {
		return fmt.Sprintf("%dm%ds", int(d.Minutes()), int(d.Seconds())%60)
	}
	return fmt.Sprintf("%dh%dm", int(d.Hours()), int(d.Minutes())%60)
}

func formatPercent(numerator, denominator int64) string {
	if denominator == 0 {
		return "0.0%"
	}
	return fmt.Sprintf("%.1f%%", float64(numerator)/float64(denominator)*100)
}

func formatRatio(compressed, uncompressed int64) string {
	if uncompressed == 0 {
		return "1.00"
	}
	ratio := float64(compressed) / float64(uncompressed)
	return fmt.Sprintf("%.2f", ratio)
}

func truncatePath(path string, maxLen int) string {
	if len(path) <= maxLen {
		return path
	}
	// Keep the last maxLen-3 characters and prepend "..."
	return "..." + path[len(path)-(maxLen-3):]
}

// printUploadProgress prints upload progress
func (pr *ProgressReporter) printUploadProgress(info *UploadInfo) {
	// This function is called repeatedly during upload, not just at start.
	// Don't print anything here - the actual progress is shown by ReportUploadProgress.
}

// ReportUploadStart marks the beginning of a blob upload
func (pr *ProgressReporter) ReportUploadStart(blobHash string, size int64) {
	info := &UploadInfo{
		BlobHash:  blobHash,
		Size:      size,
		StartTime: time.Now().UTC(),
	}
	pr.stats.CurrentUpload.Store(info)

	// Log the start of upload
	log.Info("Starting blob upload to S3",
		"hash", blobHash[:8]+"...",
		"size", humanize.Bytes(uint64(size)))
}

// ReportUploadComplete marks the completion of a blob upload
func (pr *ProgressReporter) ReportUploadComplete(blobHash string, size int64, duration time.Duration) {
	// Clear current upload
	pr.stats.CurrentUpload.Store((*UploadInfo)(nil))

	// Add to total upload duration
	pr.stats.UploadDurationMs.Add(duration.Milliseconds())

	// Calculate speed
	if duration < time.Millisecond {
		duration = time.Millisecond
	}
	bytesPerSec := float64(size) / duration.Seconds()
	bitsPerSec := bytesPerSec * 8

	// Format speed
	var speedStr string
	if bitsPerSec >= 1e9 {
		speedStr = fmt.Sprintf("%.1fGbit/sec", bitsPerSec/1e9)
	} else if bitsPerSec >= 1e6 {
		speedStr = fmt.Sprintf("%.0fMbit/sec", bitsPerSec/1e6)
	} else if bitsPerSec >= 1e3 {
		speedStr = fmt.Sprintf("%.0fKbit/sec", bitsPerSec/1e3)
	} else {
		speedStr = fmt.Sprintf("%.0fbit/sec", bitsPerSec)
	}

	log.Info("Blob upload completed",
		"hash", blobHash[:8]+"...",
		"size", humanize.Bytes(uint64(size)),
		"duration", formatDuration(duration),
		"speed", speedStr)
}

// UpdateChunkingActivity updates the last chunking time
func (pr *ProgressReporter) UpdateChunkingActivity() {
	pr.stats.mu.Lock()
	pr.stats.lastChunkingTime = time.Now().UTC()
	pr.stats.mu.Unlock()
}

// ReportUploadProgress reports current upload progress with instantaneous speed
func (pr *ProgressReporter) ReportUploadProgress(blobHash string, bytesUploaded, totalSize int64, instantSpeed float64) {
	// Update the current upload info with progress
	if uploadInfo, ok := pr.stats.CurrentUpload.Load().(*UploadInfo); ok && uploadInfo != nil {
		now := time.Now()

		// Only log at the configured interval
		if now.Sub(uploadInfo.LastLogTime) >= UploadProgressInterval {
			// Format speed in bits/second using humanize
			bitsPerSec := instantSpeed * 8
			speedStr := humanize.SI(bitsPerSec, "bit/sec")

			percent := float64(bytesUploaded) / float64(totalSize) * 100

			// Calculate ETA based on current speed
			etaStr := "unknown"
			if instantSpeed > 0 && bytesUploaded < totalSize {
				remainingBytes := totalSize - bytesUploaded
				remainingSeconds := float64(remainingBytes) / instantSpeed
				eta := time.Duration(remainingSeconds * float64(time.Second))
				etaStr = formatDuration(eta)
			}

			log.Info("Blob upload progress",
				"hash", blobHash[:8]+"...",
				"progress", fmt.Sprintf("%.1f%%", percent),
				"uploaded", humanize.Bytes(uint64(bytesUploaded)),
				"total", humanize.Bytes(uint64(totalSize)),
				"speed", speedStr,
				"eta", etaStr)

			uploadInfo.LastLogTime = now
		}
	}
}
```
983
internal/snapshot/scanner.go
Normal file
983
internal/snapshot/scanner.go
Normal file
@@ -0,0 +1,983 @@
|
||||
package snapshot
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"git.eeqj.de/sneak/vaultik/internal/blob"
|
||||
"git.eeqj.de/sneak/vaultik/internal/chunker"
|
||||
"git.eeqj.de/sneak/vaultik/internal/database"
|
||||
"git.eeqj.de/sneak/vaultik/internal/log"
|
||||
"git.eeqj.de/sneak/vaultik/internal/storage"
|
||||
"github.com/dustin/go-humanize"
|
||||
"github.com/spf13/afero"
|
||||
)
|
||||
|
||||
// FileToProcess holds information about a file that needs processing
|
||||
type FileToProcess struct {
|
||||
Path string
|
||||
FileInfo os.FileInfo
|
||||
File *database.File
|
||||
}
|
||||
|
||||
// Scanner scans directories and populates the database with file and chunk information
|
||||
type Scanner struct {
|
||||
fs afero.Fs
|
||||
chunker *chunker.Chunker
|
||||
packer *blob.Packer
|
||||
repos *database.Repositories
|
||||
storage storage.Storer
|
||||
maxBlobSize int64
|
||||
compressionLevel int
|
||||
ageRecipient string
|
||||
snapshotID string // Current snapshot being processed
|
||||
	progress *ProgressReporter

	// In-memory cache of known chunk hashes for fast existence checks
	knownChunks   map[string]struct{}
	knownChunksMu sync.RWMutex

	// Mutex for coordinating blob creation
	packerMu sync.Mutex // Blocks chunk production during blob creation

	// Context for cancellation
	scanCtx context.Context
}

// ScannerConfig contains configuration for the scanner
type ScannerConfig struct {
	FS               afero.Fs
	ChunkSize        int64
	Repositories     *database.Repositories
	Storage          storage.Storer
	MaxBlobSize      int64
	CompressionLevel int
	AgeRecipients    []string // Required: at least one recipient; NewScanner fails without encryption
	EnableProgress   bool     // Enable progress reporting
}

// ScanResult contains the results of a scan operation
type ScanResult struct {
	FilesScanned  int
	FilesSkipped  int
	FilesDeleted  int
	BytesScanned  int64
	BytesSkipped  int64
	BytesDeleted  int64
	ChunksCreated int
	BlobsCreated  int
	StartTime     time.Time
	EndTime       time.Time
}

// NewScanner creates a new scanner instance
func NewScanner(cfg ScannerConfig) *Scanner {
	// Create encryptor (required for blob packing)
	if len(cfg.AgeRecipients) == 0 {
		log.Error("No age recipients configured - encryption is required")
		return nil
	}

	// Create blob packer with encryption
	packerCfg := blob.PackerConfig{
		MaxBlobSize:      cfg.MaxBlobSize,
		CompressionLevel: cfg.CompressionLevel,
		Recipients:       cfg.AgeRecipients,
		Repositories:     cfg.Repositories,
		Fs:               cfg.FS,
	}
	packer, err := blob.NewPacker(packerCfg)
	if err != nil {
		log.Error("Failed to create packer", "error", err)
		return nil
	}

	var progress *ProgressReporter
	if cfg.EnableProgress {
		progress = NewProgressReporter()
	}

	return &Scanner{
		fs:               cfg.FS,
		chunker:          chunker.NewChunker(cfg.ChunkSize),
		packer:           packer,
		repos:            cfg.Repositories,
		storage:          cfg.Storage,
		maxBlobSize:      cfg.MaxBlobSize,
		compressionLevel: cfg.CompressionLevel,
		ageRecipient:     strings.Join(cfg.AgeRecipients, ","),
		progress:         progress,
	}
}

// Scan scans a directory and populates the database
func (s *Scanner) Scan(ctx context.Context, path string, snapshotID string) (*ScanResult, error) {
	s.snapshotID = snapshotID
	s.scanCtx = ctx
	result := &ScanResult{
		StartTime: time.Now().UTC(),
	}

	// Set blob handler for concurrent upload
	if s.storage != nil {
		log.Debug("Setting blob handler for storage uploads")
		s.packer.SetBlobHandler(s.handleBlobReady)
	} else {
		log.Debug("No storage configured, blobs will not be uploaded")
	}

	// Start progress reporting if enabled
	if s.progress != nil {
		s.progress.Start()
		defer s.progress.Stop()
	}

	// Phase 0: Load known files and chunks from database into memory for fast lookup
	fmt.Println("Loading known files from database...")
	knownFiles, err := s.loadKnownFiles(ctx, path)
	if err != nil {
		return nil, fmt.Errorf("loading known files: %w", err)
	}
	fmt.Printf("Loaded %s known files from database\n", formatNumber(len(knownFiles)))

	fmt.Println("Loading known chunks from database...")
	if err := s.loadKnownChunks(ctx); err != nil {
		return nil, fmt.Errorf("loading known chunks: %w", err)
	}
	fmt.Printf("Loaded %s known chunks from database\n", formatNumber(len(s.knownChunks)))

	// Phase 1: Scan directory, collect files to process, and track existing files
	// (builds existingFiles map during walk to avoid double traversal)
	log.Info("Phase 1/3: Scanning directory structure")
	existingFiles := make(map[string]struct{})
	scanResult, err := s.scanPhase(ctx, path, result, existingFiles, knownFiles)
	if err != nil {
		return nil, fmt.Errorf("scan phase failed: %w", err)
	}
	filesToProcess := scanResult.FilesToProcess

	// Phase 1b: Detect deleted files by comparing DB against scanned files
	if err := s.detectDeletedFilesFromMap(ctx, knownFiles, existingFiles, result); err != nil {
		return nil, fmt.Errorf("detecting deleted files: %w", err)
	}

	// Phase 1c: Associate unchanged files with this snapshot (no new records needed)
	if len(scanResult.UnchangedFileIDs) > 0 {
		fmt.Printf("Associating %s unchanged files with snapshot...\n", formatNumber(len(scanResult.UnchangedFileIDs)))
		if err := s.batchAddFilesToSnapshot(ctx, scanResult.UnchangedFileIDs); err != nil {
			return nil, fmt.Errorf("associating unchanged files: %w", err)
		}
	}

	// Calculate total size to process
	var totalSizeToProcess int64
	for _, file := range filesToProcess {
		totalSizeToProcess += file.FileInfo.Size()
	}

	// Update progress with total size and file count
	if s.progress != nil {
		s.progress.SetTotalSize(totalSizeToProcess)
		s.progress.GetStats().TotalFiles.Store(int64(len(filesToProcess)))
	}

	log.Info("Phase 1 complete",
		"total_files", len(filesToProcess),
		"total_size", humanize.Bytes(uint64(totalSizeToProcess)),
		"files_skipped", result.FilesSkipped,
		"bytes_skipped", humanize.Bytes(uint64(result.BytesSkipped)))

	// Print scan summary
	fmt.Printf("Scan complete: %s examined (%s), %s to process (%s)",
		formatNumber(result.FilesScanned),
		humanize.Bytes(uint64(totalSizeToProcess+result.BytesSkipped)),
		formatNumber(len(filesToProcess)),
		humanize.Bytes(uint64(totalSizeToProcess)))
	if result.FilesDeleted > 0 {
		fmt.Printf(", %s deleted (%s)",
			formatNumber(result.FilesDeleted),
			humanize.Bytes(uint64(result.BytesDeleted)))
	}
	fmt.Println()

	// Phase 2: Process files and create chunks
	if len(filesToProcess) > 0 {
		fmt.Printf("Processing %s files...\n", formatNumber(len(filesToProcess)))
		log.Info("Phase 2/3: Creating snapshot (chunking, compressing, encrypting, and uploading blobs)")
		if err := s.processPhase(ctx, filesToProcess, result); err != nil {
			return nil, fmt.Errorf("process phase failed: %w", err)
		}
	} else {
		fmt.Printf("No files need processing. Creating metadata-only snapshot.\n")
		log.Info("Phase 2/3: Skipping (no files need processing, metadata-only snapshot)")
	}

	// Get final stats from packer
	blobs := s.packer.GetFinishedBlobs()
	result.BlobsCreated += len(blobs)

	// Query the database for the actual blob count created during this snapshot.
	// The database is authoritative, especially for concurrent blob uploads.
	// We count uploads rather than all snapshot_blobs to get only NEW blobs.
	if s.snapshotID != "" {
		uploadCount, err := s.repos.Uploads.GetCountBySnapshot(ctx, s.snapshotID)
		if err != nil {
			log.Warn("Failed to query upload count from database", "error", err)
		} else {
			result.BlobsCreated = int(uploadCount)
		}
	}

	result.EndTime = time.Now().UTC()
	return result, nil
}

// loadKnownFiles loads all known files from the database into a map for fast
// lookup. This avoids per-file database queries during the scan phase.
func (s *Scanner) loadKnownFiles(ctx context.Context, path string) (map[string]*database.File, error) {
	files, err := s.repos.Files.ListByPrefix(ctx, path)
	if err != nil {
		return nil, fmt.Errorf("listing files by prefix: %w", err)
	}

	result := make(map[string]*database.File, len(files))
	for _, f := range files {
		result[f.Path] = f
	}

	return result, nil
}

// loadKnownChunks loads all known chunk hashes from the database into a map for
// fast lookup. This avoids per-chunk database queries during file processing.
func (s *Scanner) loadKnownChunks(ctx context.Context) error {
	chunks, err := s.repos.Chunks.List(ctx)
	if err != nil {
		return fmt.Errorf("listing chunks: %w", err)
	}

	s.knownChunksMu.Lock()
	s.knownChunks = make(map[string]struct{}, len(chunks))
	for _, c := range chunks {
		s.knownChunks[c.ChunkHash] = struct{}{}
	}
	s.knownChunksMu.Unlock()

	return nil
}

// chunkExists checks if a chunk hash exists in the in-memory cache
func (s *Scanner) chunkExists(hash string) bool {
	s.knownChunksMu.RLock()
	_, exists := s.knownChunks[hash]
	s.knownChunksMu.RUnlock()
	return exists
}

// addKnownChunk adds a chunk hash to the in-memory cache
func (s *Scanner) addKnownChunk(hash string) {
	s.knownChunksMu.Lock()
	s.knownChunks[hash] = struct{}{}
	s.knownChunksMu.Unlock()
}

// ScanPhaseResult contains the results of the scan phase
type ScanPhaseResult struct {
	FilesToProcess   []*FileToProcess
	UnchangedFileIDs []string // IDs of unchanged files to associate with snapshot
}

// scanPhase performs the initial directory scan to identify files to process.
// It uses the pre-loaded knownFiles map for fast change detection without DB
// queries, and also populates the existingFiles map for deletion detection.
// It returns the files needing processing and the IDs of unchanged files for
// snapshot association.
func (s *Scanner) scanPhase(ctx context.Context, path string, result *ScanResult, existingFiles map[string]struct{}, knownFiles map[string]*database.File) (*ScanPhaseResult, error) {
	// Use known file count as estimate for progress (accurate for subsequent backups)
	estimatedTotal := int64(len(knownFiles))

	var filesToProcess []*FileToProcess
	var unchangedFileIDs []string // Just IDs - no new records needed
	var mu sync.Mutex

	// Set up periodic status output
	startTime := time.Now()
	lastStatusTime := time.Now()
	statusInterval := 15 * time.Second
	var filesScanned int64

	log.Debug("Starting directory walk", "path", path)
	err := afero.Walk(s.fs, path, func(filePath string, info os.FileInfo, err error) error {
		if err != nil {
			log.Debug("Error accessing filesystem entry", "path", filePath, "error", err)
			return err
		}

		// Check context cancellation
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		// Skip non-regular files for processing (but still count them)
		if !info.Mode().IsRegular() {
			return nil
		}

		// Track this file as existing (for deletion detection)
		existingFiles[filePath] = struct{}{}

		// Check file against in-memory map (no DB query!)
		file, needsProcessing := s.checkFileInMemory(filePath, info, knownFiles)

		mu.Lock()
		if needsProcessing {
			// New or changed file - will create record after processing
			filesToProcess = append(filesToProcess, &FileToProcess{
				Path:     filePath,
				FileInfo: info,
				File:     file,
			})
		} else if file.ID != "" {
			// Unchanged file with existing ID - just need snapshot association
			unchangedFileIDs = append(unchangedFileIDs, file.ID)
		}
		filesScanned++
		changedCount := len(filesToProcess)
		mu.Unlock()

		// Update result stats
		if needsProcessing {
			result.BytesScanned += info.Size()
		} else {
			result.FilesSkipped++
			result.BytesSkipped += info.Size()
		}
		result.FilesScanned++

		// Output periodic status
		if time.Since(lastStatusTime) >= statusInterval {
			elapsed := time.Since(startTime)
			rate := float64(filesScanned) / elapsed.Seconds()

			// Build status line - use estimate if available (not first backup)
			if estimatedTotal > 0 {
				// Show actual scanned vs estimate (may exceed estimate if files were added)
				pct := float64(filesScanned) / float64(estimatedTotal) * 100
				if pct > 100 {
					pct = 100 // Cap at 100% for display
				}
				remaining := estimatedTotal - filesScanned
				if remaining < 0 {
					remaining = 0
				}
				var eta time.Duration
				if rate > 0 && remaining > 0 {
					eta = time.Duration(float64(remaining)/rate) * time.Second
				}
				fmt.Printf("Scan: %s files (~%.0f%%), %s changed/new, %.0f files/sec, %s elapsed",
					formatNumber(int(filesScanned)),
					pct,
					formatNumber(changedCount),
					rate,
					elapsed.Round(time.Second))
				if eta > 0 {
					fmt.Printf(", ETA %s", eta.Round(time.Second))
				}
				fmt.Println()
			} else {
				// First backup - no estimate available
				fmt.Printf("Scan: %s files, %s changed/new, %.0f files/sec, %s elapsed\n",
					formatNumber(int(filesScanned)),
					formatNumber(changedCount),
					rate,
					elapsed.Round(time.Second))
			}
			lastStatusTime = time.Now()
		}

		return nil
	})

	if err != nil {
		return nil, err
	}

	return &ScanPhaseResult{
		FilesToProcess:   filesToProcess,
		UnchangedFileIDs: unchangedFileIDs,
	}, nil
}

// checkFileInMemory checks if a file needs processing using the in-memory map.
// No database access is performed - this is purely CPU/memory work.
func (s *Scanner) checkFileInMemory(path string, info os.FileInfo, knownFiles map[string]*database.File) (*database.File, bool) {
	// Get file ownership, if the underlying filesystem exposes it
	stat, ok := info.Sys().(interface {
		Uid() uint32
		Gid() uint32
	})

	var uid, gid uint32
	if ok {
		uid = stat.Uid()
		gid = stat.Gid()
	}

	// Create file record
	file := &database.File{
		Path:  path,
		MTime: info.ModTime(),
		CTime: info.ModTime(), // afero doesn't provide ctime
		Size:  info.Size(),
		Mode:  uint32(info.Mode()),
		UID:   uid,
		GID:   gid,
	}

	// Check against in-memory map
	existingFile, exists := knownFiles[path]
	if !exists {
		// New file
		return file, true
	}

	// Reuse existing ID
	file.ID = existingFile.ID

	// Check if file has changed
	if existingFile.Size != file.Size ||
		existingFile.MTime.Unix() != file.MTime.Unix() ||
		existingFile.Mode != file.Mode ||
		existingFile.UID != file.UID ||
		existingFile.GID != file.GID {
		return file, true
	}

	// File unchanged
	return file, false
}

// batchAddFilesToSnapshot adds existing file IDs to the snapshot association
// table. This is used for unchanged files that already have records in the
// database.
func (s *Scanner) batchAddFilesToSnapshot(ctx context.Context, fileIDs []string) error {
	const batchSize = 1000

	startTime := time.Now()
	lastStatusTime := time.Now()
	statusInterval := 5 * time.Second

	for i := 0; i < len(fileIDs); i += batchSize {
		// Check context cancellation
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		end := i + batchSize
		if end > len(fileIDs) {
			end = len(fileIDs)
		}
		batch := fileIDs[i:end]

		err := s.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
			for _, fileID := range batch {
				if err := s.repos.Snapshots.AddFileByID(ctx, tx, s.snapshotID, fileID); err != nil {
					return fmt.Errorf("adding file to snapshot: %w", err)
				}
			}
			return nil
		})
		if err != nil {
			return err
		}

		// Periodic status
		if time.Since(lastStatusTime) >= statusInterval {
			elapsed := time.Since(startTime)
			rate := float64(end) / elapsed.Seconds()
			pct := float64(end) / float64(len(fileIDs)) * 100
			fmt.Printf("Associating files: %s/%s (%.1f%%), %.0f files/sec\n",
				formatNumber(end), formatNumber(len(fileIDs)), pct, rate)
			lastStatusTime = time.Now()
		}
	}

	elapsed := time.Since(startTime)
	rate := float64(len(fileIDs)) / elapsed.Seconds()
	fmt.Printf("Associated %s unchanged files in %s (%.0f files/sec)\n",
		formatNumber(len(fileIDs)), elapsed.Round(time.Second), rate)

	return nil
}

// processPhase processes the files that need backing up
func (s *Scanner) processPhase(ctx context.Context, filesToProcess []*FileToProcess, result *ScanResult) error {
	// Calculate total bytes to process
	var totalBytes int64
	for _, f := range filesToProcess {
		totalBytes += f.FileInfo.Size()
	}

	// Set up periodic status output
	lastStatusTime := time.Now()
	statusInterval := 15 * time.Second
	startTime := time.Now()
	filesProcessed := 0
	var bytesProcessed int64
	totalFiles := len(filesToProcess)

	// Process each file
	for _, fileToProcess := range filesToProcess {
		// Update progress
		if s.progress != nil {
			s.progress.GetStats().CurrentFile.Store(fileToProcess.Path)
		}

		// Process file in streaming fashion
		if err := s.processFileStreaming(ctx, fileToProcess, result); err != nil {
			return fmt.Errorf("processing file %s: %w", fileToProcess.Path, err)
		}

		// Update files processed counter
		if s.progress != nil {
			s.progress.GetStats().FilesProcessed.Add(1)
		}

		filesProcessed++
		bytesProcessed += fileToProcess.FileInfo.Size()

		// Output periodic status
		if time.Since(lastStatusTime) >= statusInterval {
			elapsed := time.Since(startTime)
			pct := float64(bytesProcessed) / float64(totalBytes) * 100
			byteRate := float64(bytesProcessed) / elapsed.Seconds()
			fileRate := float64(filesProcessed) / elapsed.Seconds()

			// Calculate ETA based on bytes (more accurate than files)
			remainingBytes := totalBytes - bytesProcessed
			var eta time.Duration
			if byteRate > 0 {
				eta = time.Duration(float64(remainingBytes)/byteRate) * time.Second
			}

			// Format: Progress [5.7k/610k] 6.7 GB/44 GB (15.4%), 106MB/sec, 500 files/sec, running for 1m30s, ETA: 5m49s
			fmt.Printf("Progress [%s/%s] %s/%s (%.1f%%), %s/sec, %.0f files/sec, running for %s",
				formatCompact(filesProcessed),
				formatCompact(totalFiles),
				humanize.Bytes(uint64(bytesProcessed)),
				humanize.Bytes(uint64(totalBytes)),
				pct,
				humanize.Bytes(uint64(byteRate)),
				fileRate,
				elapsed.Round(time.Second))
			if eta > 0 {
				fmt.Printf(", ETA: %s", eta.Round(time.Second))
			}
			fmt.Println()
			lastStatusTime = time.Now()
		}
	}

	// Final flush (outside any transaction)
	s.packerMu.Lock()
	if err := s.packer.Flush(); err != nil {
		s.packerMu.Unlock()
		return fmt.Errorf("flushing packer: %w", err)
	}
	s.packerMu.Unlock()

	// If no storage configured, store any remaining blobs locally
	if s.storage == nil {
		blobs := s.packer.GetFinishedBlobs()
		for _, b := range blobs {
			// Blob metadata is already stored incrementally during packing;
			// just add the blob to the snapshot.
			err := s.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
				return s.repos.Snapshots.AddBlob(ctx, tx, s.snapshotID, b.ID, b.Hash)
			})
			if err != nil {
				return fmt.Errorf("storing blob metadata: %w", err)
			}
		}
		result.BlobsCreated += len(blobs)
	}

	return nil
}

// handleBlobReady is called by the packer when a blob is finalized
func (s *Scanner) handleBlobReady(blobWithReader *blob.BlobWithReader) error {
	startTime := time.Now().UTC()
	finishedBlob := blobWithReader.FinishedBlob

	// Report upload start
	if s.progress != nil {
		s.progress.ReportUploadStart(finishedBlob.Hash, finishedBlob.Compressed)
	}

	// Upload to storage first (without holding any locks).
	// Use the scan context for cancellation support.
	ctx := s.scanCtx
	if ctx == nil {
		ctx = context.Background()
	}

	// Track bytes uploaded for accurate speed calculation
	lastProgressTime := time.Now()
	lastProgressBytes := int64(0)

	progressCallback := func(uploaded int64) error {
		// Calculate instantaneous speed
		now := time.Now()
		elapsed := now.Sub(lastProgressTime).Seconds()
		if elapsed > 0.5 { // Update speed every 0.5 seconds
			bytesSinceLastUpdate := uploaded - lastProgressBytes
			speed := float64(bytesSinceLastUpdate) / elapsed

			if s.progress != nil {
				s.progress.ReportUploadProgress(finishedBlob.Hash, uploaded, finishedBlob.Compressed, speed)
			}

			lastProgressTime = now
			lastProgressBytes = uploaded
		}

		// Check for cancellation
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
			return nil
		}
	}

	// Create sharded path: blobs/ca/fe/cafebabe...
	blobPath := fmt.Sprintf("blobs/%s/%s/%s", finishedBlob.Hash[:2], finishedBlob.Hash[2:4], finishedBlob.Hash)
	if err := s.storage.PutWithProgress(ctx, blobPath, blobWithReader.Reader, finishedBlob.Compressed, progressCallback); err != nil {
		return fmt.Errorf("uploading blob %s to storage: %w", finishedBlob.Hash, err)
	}

	uploadDuration := time.Since(startTime)

	// Calculate upload speed
	uploadSpeedBps := float64(finishedBlob.Compressed) / uploadDuration.Seconds()

	// Print blob stored message
	fmt.Printf("Blob stored: %s (%s, %s/sec, %s)\n",
		finishedBlob.Hash[:12]+"...",
		humanize.Bytes(uint64(finishedBlob.Compressed)),
		humanize.Bytes(uint64(uploadSpeedBps)),
		uploadDuration.Round(time.Millisecond))

	// Log upload stats
	uploadSpeedBits := uploadSpeedBps * 8 // bits per second
	log.Info("Successfully uploaded blob to storage",
		"path", blobPath,
		"size", humanize.Bytes(uint64(finishedBlob.Compressed)),
		"duration", uploadDuration,
		"speed", humanize.SI(uploadSpeedBits, "bps"))

	// Report upload complete
	if s.progress != nil {
		s.progress.ReportUploadComplete(finishedBlob.Hash, finishedBlob.Compressed, uploadDuration)
	}

	// Update progress
	if s.progress != nil {
		stats := s.progress.GetStats()
		stats.BlobsUploaded.Add(1)
		stats.BytesUploaded.Add(finishedBlob.Compressed)
		stats.BlobsCreated.Add(1)
	}

	// Store metadata in database (after upload is complete)
	dbCtx := s.scanCtx
	if dbCtx == nil {
		dbCtx = context.Background()
	}
	err := s.repos.WithTx(dbCtx, func(ctx context.Context, tx *sql.Tx) error {
		// Update blob upload timestamp
		if err := s.repos.Blobs.UpdateUploaded(ctx, tx, finishedBlob.ID); err != nil {
			return fmt.Errorf("updating blob upload timestamp: %w", err)
		}

		// Add the blob to the snapshot
		if err := s.repos.Snapshots.AddBlob(ctx, tx, s.snapshotID, finishedBlob.ID, finishedBlob.Hash); err != nil {
			return fmt.Errorf("adding blob to snapshot: %w", err)
		}

		// Record upload metrics
		upload := &database.Upload{
			BlobHash:   finishedBlob.Hash,
			SnapshotID: s.snapshotID,
			UploadedAt: startTime,
			Size:       finishedBlob.Compressed,
			DurationMs: uploadDuration.Milliseconds(),
		}
		if err := s.repos.Uploads.Create(ctx, tx, upload); err != nil {
			return fmt.Errorf("recording upload metrics: %w", err)
		}

		return nil
	})

	// Cleanup temp file if needed
	if blobWithReader.TempFile != nil {
		tempName := blobWithReader.TempFile.Name()
		if err := blobWithReader.TempFile.Close(); err != nil {
			log.Fatal("Failed to close temp file", "file", tempName, "error", err)
		}
		if err := s.fs.Remove(tempName); err != nil {
			log.Fatal("Failed to remove temp file", "file", tempName, "error", err)
		}
	}

	return err
}

// processFileStreaming processes a file by streaming chunks directly to the packer
func (s *Scanner) processFileStreaming(ctx context.Context, fileToProcess *FileToProcess, result *ScanResult) error {
	// Open the file
	file, err := s.fs.Open(fileToProcess.Path)
	if err != nil {
		return fmt.Errorf("opening file: %w", err)
	}
	defer func() { _ = file.Close() }()

	// Collect file-chunk associations for database storage,
	// but stream each chunk to the packer as we go.
	type chunkInfo struct {
		fileChunk database.FileChunk
		offset    int64
		size      int64
	}
	var chunks []chunkInfo
	chunkIndex := 0

	// Process chunks in streaming fashion and get full file hash
	fileHash, err := s.chunker.ChunkReaderStreaming(file, func(chunk chunker.Chunk) error {
		// Check for cancellation
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		log.Debug("Processing content-defined chunk from file",
			"file", fileToProcess.Path,
			"chunk_index", chunkIndex,
			"hash", chunk.Hash,
			"size", chunk.Size)

		// Check if chunk already exists (fast in-memory lookup)
		chunkExists := s.chunkExists(chunk.Hash)

		// Store chunk if new
		if !chunkExists {
			err := s.repos.WithTx(ctx, func(txCtx context.Context, tx *sql.Tx) error {
				dbChunk := &database.Chunk{
					ChunkHash: chunk.Hash,
					Size:      chunk.Size,
				}
				if err := s.repos.Chunks.Create(txCtx, tx, dbChunk); err != nil {
					return fmt.Errorf("creating chunk: %w", err)
				}
				return nil
			})
			if err != nil {
				return fmt.Errorf("storing chunk: %w", err)
			}
			// Add to in-memory cache for fast duplicate detection
			s.addKnownChunk(chunk.Hash)
		}

		// Track file chunk association for later storage
		chunks = append(chunks, chunkInfo{
			fileChunk: database.FileChunk{
				FileID:    fileToProcess.File.ID,
				Idx:       chunkIndex,
				ChunkHash: chunk.Hash,
			},
			offset: chunk.Offset,
			size:   chunk.Size,
		})

		// Update stats
		if chunkExists {
			result.FilesSkipped++ // Track as skipped for now
			result.BytesSkipped += chunk.Size
			if s.progress != nil {
				s.progress.GetStats().BytesSkipped.Add(chunk.Size)
			}
		} else {
			result.ChunksCreated++
			result.BytesScanned += chunk.Size
			if s.progress != nil {
				s.progress.GetStats().ChunksCreated.Add(1)
				s.progress.GetStats().BytesProcessed.Add(chunk.Size)
				s.progress.UpdateChunkingActivity()
			}
		}

		// Add chunk to packer immediately (streaming).
		// This happens outside the database transaction.
		if !chunkExists {
			s.packerMu.Lock()
			err := s.packer.AddChunk(&blob.ChunkRef{
				Hash: chunk.Hash,
				Data: chunk.Data,
			})
			if err == blob.ErrBlobSizeLimitExceeded {
				// Finalize current blob and retry
				if err := s.packer.FinalizeBlob(); err != nil {
					s.packerMu.Unlock()
					return fmt.Errorf("finalizing blob: %w", err)
				}
				// Retry adding the chunk
				if err := s.packer.AddChunk(&blob.ChunkRef{
					Hash: chunk.Hash,
					Data: chunk.Data,
				}); err != nil {
					s.packerMu.Unlock()
					return fmt.Errorf("adding chunk after finalize: %w", err)
				}
			} else if err != nil {
				s.packerMu.Unlock()
				return fmt.Errorf("adding chunk to packer: %w", err)
			}
			s.packerMu.Unlock()
		}

		// Clear chunk data from memory immediately after use
		chunk.Data = nil

		chunkIndex++
		return nil
	})

	if err != nil {
		return fmt.Errorf("chunking file: %w", err)
	}

	log.Debug("Completed snapshotting file",
		"path", fileToProcess.Path,
		"file_hash", fileHash,
		"chunks", len(chunks))

	// Store file record, chunk associations, and snapshot association in the
	// database. This happens AFTER successful chunking to avoid orphaned
	// records on interruption.
	err = s.repos.WithTx(ctx, func(txCtx context.Context, tx *sql.Tx) error {
		// Create or update the file record.
		// Files.Create uses INSERT OR REPLACE, so it handles both new and changed files.
		if err := s.repos.Files.Create(txCtx, tx, fileToProcess.File); err != nil {
			return fmt.Errorf("creating file record: %w", err)
		}

		// Delete any existing file_chunks and chunk_files for this file.
		// This ensures old chunks are no longer associated when file content changes.
		if err := s.repos.FileChunks.DeleteByFileID(txCtx, tx, fileToProcess.File.ID); err != nil {
			return fmt.Errorf("deleting old file chunks: %w", err)
		}
		if err := s.repos.ChunkFiles.DeleteByFileID(txCtx, tx, fileToProcess.File.ID); err != nil {
			return fmt.Errorf("deleting old chunk files: %w", err)
		}

		// Update chunk associations with the file ID (now that we have it)
		for i := range chunks {
			chunks[i].fileChunk.FileID = fileToProcess.File.ID
		}

		for _, ci := range chunks {
			// Create file-chunk mapping
			if err := s.repos.FileChunks.Create(txCtx, tx, &ci.fileChunk); err != nil {
				return fmt.Errorf("creating file chunk: %w", err)
			}

			// Create chunk-file mapping
			chunkFile := &database.ChunkFile{
				ChunkHash:  ci.fileChunk.ChunkHash,
				FileID:     fileToProcess.File.ID,
				FileOffset: ci.offset,
				Length:     ci.size,
			}
			if err := s.repos.ChunkFiles.Create(txCtx, tx, chunkFile); err != nil {
				return fmt.Errorf("creating chunk file: %w", err)
			}
		}

		// Add file to snapshot
		if err := s.repos.Snapshots.AddFileByID(txCtx, tx, s.snapshotID, fileToProcess.File.ID); err != nil {
			return fmt.Errorf("adding file to snapshot: %w", err)
		}

		return nil
	})

	return err
}
|
||||
|
||||
// GetProgress returns the progress reporter for this scanner
|
||||
func (s *Scanner) GetProgress() *ProgressReporter {
|
||||
return s.progress
|
||||
}
|
||||
|
||||
// detectDeletedFilesFromMap finds files that existed in previous snapshots but no longer exist
|
||||
// Uses pre-loaded maps to avoid any filesystem or database access
|
||||
func (s *Scanner) detectDeletedFilesFromMap(ctx context.Context, knownFiles map[string]*database.File, existingFiles map[string]struct{}, result *ScanResult) error {
|
||||
if len(knownFiles) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Check each known file against the enumerated set (no filesystem access needed)
|
||||
for path, file := range knownFiles {
|
||||
// Check context cancellation periodically
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
// Check if the file exists in our enumerated set
|
||||
if _, exists := existingFiles[path]; !exists {
|
||||
// File has been deleted
|
||||
result.FilesDeleted++
|
||||
result.BytesDeleted += file.Size
|
||||
log.Debug("Detected deleted file", "path", path, "size", file.Size)
|
||||
}
|
||||
}
|
||||
|
||||
if result.FilesDeleted > 0 {
|
||||
fmt.Printf("Found %s deleted files\n", formatNumber(result.FilesDeleted))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// formatNumber formats a number with comma separators
|
||||
func formatNumber(n int) string {
|
||||
if n < 1000 {
|
||||
return fmt.Sprintf("%d", n)
|
||||
}
|
||||
return humanize.Comma(int64(n))
|
||||
}
|
||||
|
||||
// formatCompact formats a number compactly with k/M suffixes (e.g., 5.7k, 1.2M)
|
||||
func formatCompact(n int) string {
|
||||
if n < 1000 {
|
||||
return fmt.Sprintf("%d", n)
|
||||
}
|
||||
if n < 10000 {
|
||||
return fmt.Sprintf("%.1fk", float64(n)/1000)
|
||||
}
|
||||
if n < 1000000 {
|
||||
return fmt.Sprintf("%.0fk", float64(n)/1000)
|
||||
}
|
||||
return fmt.Sprintf("%.1fM", float64(n)/1000000)
|
||||
}
|
||||
268
internal/snapshot/scanner_test.go
Normal file
@@ -0,0 +1,268 @@
package snapshot_test

import (
	"context"
	"database/sql"
	"path/filepath"
	"testing"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/snapshot"
	"github.com/spf13/afero"
)

func TestScannerSimpleDirectory(t *testing.T) {
	// Initialize logger for tests
	log.Initialize(log.Config{})

	// Create in-memory filesystem
	fs := afero.NewMemMapFs()

	// Create test directory structure
	testFiles := map[string]string{
		"/source/file1.txt":         "Hello, world!",                // 13 bytes
		"/source/file2.txt":         "This is another file",         // 20 bytes
		"/source/subdir/file3.txt":  "File in subdirectory",         // 20 bytes
		"/source/subdir/file4.txt":  "Another file in subdirectory", // 28 bytes
		"/source/empty.txt":         "",                             // 0 bytes
		"/source/subdir2/file5.txt": "Yet another file",             // 16 bytes
	}

	// Create files with specific times
	testTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
	for path, content := range testFiles {
		dir := filepath.Dir(path)
		if err := fs.MkdirAll(dir, 0755); err != nil {
			t.Fatalf("failed to create directory %s: %v", dir, err)
		}
		if err := afero.WriteFile(fs, path, []byte(content), 0644); err != nil {
			t.Fatalf("failed to write file %s: %v", path, err)
		}
		// Set times
		if err := fs.Chtimes(path, testTime, testTime); err != nil {
			t.Fatalf("failed to set times for %s: %v", path, err)
		}
	}

	// Create test database
	db, err := database.NewTestDB()
	if err != nil {
		t.Fatalf("failed to create test database: %v", err)
	}
	defer func() {
		if err := db.Close(); err != nil {
			t.Errorf("failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create scanner
	scanner := snapshot.NewScanner(snapshot.ScannerConfig{
		FS:               fs,
		ChunkSize:        int64(1024 * 16), // 16KB chunks for testing
		Repositories:     repos,
		MaxBlobSize:      int64(1024 * 1024), // 1MB blobs
		CompressionLevel: 3,
		AgeRecipients:    []string{"age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"}, // Test public key
	})

	// Create a snapshot record for testing
	ctx := context.Background()
	snapshotID := "test-snapshot-001"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:               snapshotID,
			Hostname:         "test-host",
			VaultikVersion:   "test",
			StartedAt:        time.Now(),
			CompletedAt:      nil,
			FileCount:        0,
			ChunkCount:       0,
			BlobCount:        0,
			TotalSize:        0,
			BlobSize:         0,
			CompressionRatio: 1.0,
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	if err != nil {
		t.Fatalf("failed to create snapshot: %v", err)
	}

	// Scan the directory
	var result *snapshot.ScanResult
	result, err = scanner.Scan(ctx, "/source", snapshotID)
	if err != nil {
		t.Fatalf("scan failed: %v", err)
	}

	// Verify results - we only scan regular files, not directories
	if result.FilesScanned != 6 {
		t.Errorf("expected 6 files scanned, got %d", result.FilesScanned)
	}

	// Total bytes should be the sum of all file contents
	if result.BytesScanned < 97 { // At minimum we have 97 bytes of file content
		t.Errorf("expected at least 97 bytes scanned, got %d", result.BytesScanned)
	}

	// Verify files in database - only regular files are stored
	files, err := repos.Files.ListByPrefix(ctx, "/source")
	if err != nil {
		t.Fatalf("failed to list files: %v", err)
	}

	// We should have 6 files (directories are not stored)
	if len(files) != 6 {
		t.Errorf("expected 6 files in database, got %d", len(files))
	}

	// Verify specific file
	file1, err := repos.Files.GetByPath(ctx, "/source/file1.txt")
	if err != nil {
		t.Fatalf("failed to get file1.txt: %v", err)
	}

	if file1.Size != 13 {
		t.Errorf("expected file1.txt size 13, got %d", file1.Size)
	}

	if file1.Mode != 0644 {
		t.Errorf("expected file1.txt mode 0644, got %o", file1.Mode)
	}

	// Verify chunks were created
	chunks, err := repos.FileChunks.GetByFile(ctx, "/source/file1.txt")
	if err != nil {
		t.Fatalf("failed to get chunks for file1.txt: %v", err)
	}

	if len(chunks) != 1 { // Small file should be one chunk
		t.Errorf("expected 1 chunk for file1.txt, got %d", len(chunks))
	}

	// Verify deduplication - file3.txt and file4.txt have different content
	// but we should still have the correct number of unique chunks
	allChunks, err := repos.Chunks.List(ctx)
	if err != nil {
		t.Fatalf("failed to list all chunks: %v", err)
	}

	// We should have at most 6 chunks (one per unique file content)
	// Empty file might not create a chunk
	if len(allChunks) > 6 {
		t.Errorf("expected at most 6 chunks, got %d", len(allChunks))
	}
}

func TestScannerLargeFile(t *testing.T) {
	// Initialize logger for tests
	log.Initialize(log.Config{})

	// Create in-memory filesystem
	fs := afero.NewMemMapFs()

	// Create a large file that will require multiple chunks
	// Use random content to ensure good chunk boundaries
	largeContent := make([]byte, 1024*1024) // 1MB
	// Fill with pseudo-random data to ensure chunk boundaries
	for i := 0; i < len(largeContent); i++ {
		// Simple pseudo-random generator for deterministic tests
		largeContent[i] = byte((i * 7919) ^ (i >> 3))
	}

	if err := fs.MkdirAll("/source", 0755); err != nil {
		t.Fatal(err)
	}
	if err := afero.WriteFile(fs, "/source/large.bin", largeContent, 0644); err != nil {
		t.Fatal(err)
	}

	// Create test database
	db, err := database.NewTestDB()
	if err != nil {
		t.Fatalf("failed to create test database: %v", err)
	}
	defer func() {
		if err := db.Close(); err != nil {
			t.Errorf("failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create scanner with 64KB average chunk size
	scanner := snapshot.NewScanner(snapshot.ScannerConfig{
		FS:               fs,
		ChunkSize:        int64(1024 * 64), // 64KB average chunks
		Repositories:     repos,
		MaxBlobSize:      int64(1024 * 1024),
		CompressionLevel: 3,
		AgeRecipients:    []string{"age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"}, // Test public key
	})

	// Create a snapshot record for testing
	ctx := context.Background()
	snapshotID := "test-snapshot-001"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:               snapshotID,
			Hostname:         "test-host",
			VaultikVersion:   "test",
			StartedAt:        time.Now(),
			CompletedAt:      nil,
			FileCount:        0,
			ChunkCount:       0,
			BlobCount:        0,
			TotalSize:        0,
			BlobSize:         0,
			CompressionRatio: 1.0,
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	if err != nil {
		t.Fatalf("failed to create snapshot: %v", err)
	}

	// Scan the directory
	var result *snapshot.ScanResult
	result, err = scanner.Scan(ctx, "/source", snapshotID)
	if err != nil {
		t.Fatalf("scan failed: %v", err)
	}

	// We scan only regular files, not directories
	if result.FilesScanned != 1 {
		t.Errorf("expected 1 file scanned, got %d", result.FilesScanned)
	}

	// The file size should be at least 1MB
	if result.BytesScanned < 1024*1024 {
		t.Errorf("expected at least %d bytes scanned, got %d", 1024*1024, result.BytesScanned)
	}

	// Verify chunks
	chunks, err := repos.FileChunks.GetByFile(ctx, "/source/large.bin")
	if err != nil {
		t.Fatalf("failed to get chunks: %v", err)
	}

	// With content-defined chunking, the number of chunks depends on content
	// For a 1MB file, we should get at least 1 chunk
	if len(chunks) < 1 {
		t.Errorf("expected at least 1 chunk, got %d", len(chunks))
	}

	// Log the actual number of chunks for debugging
	t.Logf("1MB file produced %d chunks with 64KB average chunk size", len(chunks))

	// Verify chunk sequence
	for i, fc := range chunks {
		if fc.Idx != i {
			t.Errorf("chunk %d has incorrect sequence %d", i, fc.Idx)
		}
	}
}
882	internal/snapshot/snapshot.go	Normal file
@@ -0,0 +1,882 @@
package snapshot

// Snapshot Metadata Export Process
// ================================
//
// The snapshot metadata contains all information needed to restore a snapshot.
// Instead of creating a custom format, we use a trimmed copy of the SQLite
// database containing only data relevant to the current snapshot.
//
// Process Overview:
// 1. After all files/chunks/blobs are backed up, create a snapshot record
// 2. Close the main database to ensure consistency
// 3. Copy the entire database to a temporary file
// 4. Open the temporary database
// 5. Delete all snapshots except the current one
// 6. Delete all orphaned records:
//    - Files not referenced by any remaining snapshot
//    - Chunks not referenced by any remaining files
//    - Blobs not containing any remaining chunks
//    - All related mapping tables (file_chunks, chunk_files, blob_chunks)
// 7. Close the temporary database
// 8. Use sqlite3 to dump the cleaned database to SQL
// 9. Delete the temporary database file
// 10. Compress the SQL dump with zstd
// 11. Encrypt the compressed dump with age (if encryption is enabled)
// 12. Upload to S3 as: snapshots/{snapshot-id}.sql.zst[.age]
// 13. Reopen the main database
//
// Advantages of this approach:
// - No custom metadata format needed
// - Reuses existing database schema and relationships
// - SQL dumps are portable and compress well
// - Restore process can simply execute the SQL
// - Atomic and consistent snapshot of all metadata
//
// TODO: Future improvements:
// - Add snapshot-file relationships to track which files belong to which snapshot
// - Implement incremental snapshots that reference previous snapshots
// - Add snapshot manifest with additional metadata (size, chunk count, etc.)

import (
	"bytes"
	"context"
	"database/sql"
	"fmt"
	"io"
	"os/exec"
	"path/filepath"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/blobgen"
	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/storage"
	"github.com/dustin/go-humanize"
	"github.com/spf13/afero"
	"go.uber.org/fx"
)

// SnapshotManager handles snapshot creation and metadata export
type SnapshotManager struct {
	repos   *database.Repositories
	storage storage.Storer
	config  *config.Config
	fs      afero.Fs
}

// SnapshotManagerParams holds dependencies for NewSnapshotManager
type SnapshotManagerParams struct {
	fx.In

	Repos   *database.Repositories
	Storage storage.Storer
	Config  *config.Config
}

// NewSnapshotManager creates a new snapshot manager for dependency injection
func NewSnapshotManager(params SnapshotManagerParams) *SnapshotManager {
	return &SnapshotManager{
		repos:   params.Repos,
		storage: params.Storage,
		config:  params.Config,
	}
}

// SetFilesystem sets the filesystem to use for all file operations
func (sm *SnapshotManager) SetFilesystem(fs afero.Fs) {
	sm.fs = fs
}

// CreateSnapshot creates a new snapshot record in the database at the start of a backup
func (sm *SnapshotManager) CreateSnapshot(ctx context.Context, hostname, version, gitRevision string) (string, error) {
	snapshotID := fmt.Sprintf("%s-%s", hostname, time.Now().UTC().Format("20060102-150405Z"))

	snapshot := &database.Snapshot{
		ID:                 snapshotID,
		Hostname:           hostname,
		VaultikVersion:     version,
		VaultikGitRevision: gitRevision,
		StartedAt:          time.Now().UTC(),
		CompletedAt:        nil, // Not completed yet
		FileCount:          0,
		ChunkCount:         0,
		BlobCount:          0,
		TotalSize:          0,
		BlobSize:           0,
		CompressionRatio:   1.0,
	}

	err := sm.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		return sm.repos.Snapshots.Create(ctx, tx, snapshot)
	})

	if err != nil {
		return "", fmt.Errorf("creating snapshot: %w", err)
	}

	log.Info("Created snapshot", "snapshot_id", snapshotID)
	return snapshotID, nil
}
// UpdateSnapshotStats updates the statistics for a snapshot during backup
func (sm *SnapshotManager) UpdateSnapshotStats(ctx context.Context, snapshotID string, stats BackupStats) error {
	err := sm.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		return sm.repos.Snapshots.UpdateCounts(ctx, tx, snapshotID,
			int64(stats.FilesScanned),
			int64(stats.ChunksCreated),
			int64(stats.BlobsCreated),
			stats.BytesScanned,
			stats.BytesUploaded,
		)
	})

	if err != nil {
		return fmt.Errorf("updating snapshot stats: %w", err)
	}

	return nil
}

// UpdateSnapshotStatsExtended updates snapshot statistics with extended metrics.
// This includes compression level, uncompressed blob size, and upload duration.
func (sm *SnapshotManager) UpdateSnapshotStatsExtended(ctx context.Context, snapshotID string, stats ExtendedBackupStats) error {
	return sm.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		// First update basic stats
		if err := sm.repos.Snapshots.UpdateCounts(ctx, tx, snapshotID,
			int64(stats.FilesScanned),
			int64(stats.ChunksCreated),
			int64(stats.BlobsCreated),
			stats.BytesScanned,
			stats.BytesUploaded,
		); err != nil {
			return err
		}

		// Then update extended stats
		return sm.repos.Snapshots.UpdateExtendedStats(ctx, tx, snapshotID,
			stats.BlobUncompressedSize,
			stats.CompressionLevel,
			stats.UploadDurationMs,
		)
	})
}

// CompleteSnapshot marks a snapshot as completed and exports its metadata
func (sm *SnapshotManager) CompleteSnapshot(ctx context.Context, snapshotID string) error {
	// Mark the snapshot as completed
	err := sm.repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		return sm.repos.Snapshots.MarkComplete(ctx, tx, snapshotID)
	})

	if err != nil {
		return fmt.Errorf("marking snapshot complete: %w", err)
	}

	log.Info("Completed snapshot", "snapshot_id", snapshotID)
	return nil
}

// ExportSnapshotMetadata exports snapshot metadata to S3
//
// This method executes the complete snapshot metadata export process:
// 1. Creates a temporary directory for working files
// 2. Copies the main database to preserve its state
// 3. Cleans the copy to contain only current snapshot data
// 4. Dumps the cleaned database to SQL
// 5. Compresses the SQL dump with zstd
// 6. Encrypts the compressed data (if encryption is enabled)
// 7. Uploads to S3 at: snapshots/{snapshot-id}.sql.zst[.age]
//
// The caller is responsible for:
// - Ensuring the main database is closed before calling this method
// - Reopening the main database after this method returns
//
// This ensures database consistency during the copy operation.
func (sm *SnapshotManager) ExportSnapshotMetadata(ctx context.Context, dbPath string, snapshotID string) error {
	log.Info("Phase 3/3: Exporting snapshot metadata", "snapshot_id", snapshotID, "source_db", dbPath)

	// Create temp directory for all temporary files
	tempDir, err := afero.TempDir(sm.fs, "", "vaultik-snapshot-*")
	if err != nil {
		return fmt.Errorf("creating temp dir: %w", err)
	}
	log.Debug("Created temporary directory", "path", tempDir)
	defer func() {
		log.Debug("Cleaning up temporary directory", "path", tempDir)
		if err := sm.fs.RemoveAll(tempDir); err != nil {
			log.Debug("Failed to remove temp dir", "path", tempDir, "error", err)
		}
	}()

	// Step 1: Copy database to temp file
	// The main database should be closed at this point
	tempDBPath := filepath.Join(tempDir, "snapshot.db")
	log.Debug("Copying database to temporary location", "source", dbPath, "destination", tempDBPath)
	if err := sm.copyFile(dbPath, tempDBPath); err != nil {
		return fmt.Errorf("copying database: %w", err)
	}
	log.Debug("Database copy complete", "size", sm.getFileSize(tempDBPath))

	// Step 2: Clean the temp database to only contain current snapshot data
	log.Debug("Cleaning temporary database", "snapshot_id", snapshotID)
	stats, err := sm.cleanSnapshotDB(ctx, tempDBPath, snapshotID)
	if err != nil {
		return fmt.Errorf("cleaning snapshot database: %w", err)
	}
	log.Info("Temporary database cleanup complete",
		"db_path", tempDBPath,
		"size_after_clean", humanize.Bytes(uint64(sm.getFileSize(tempDBPath))),
		"files", stats.FileCount,
		"chunks", stats.ChunkCount,
		"blobs", stats.BlobCount,
		"total_compressed_size", humanize.Bytes(uint64(stats.CompressedSize)),
		"total_uncompressed_size", humanize.Bytes(uint64(stats.UncompressedSize)),
		"compression_ratio", fmt.Sprintf("%.2fx", float64(stats.UncompressedSize)/float64(stats.CompressedSize)))

	// Step 3: Dump the cleaned database to SQL
	dumpPath := filepath.Join(tempDir, "snapshot.sql")
	if err := sm.dumpDatabase(tempDBPath, dumpPath); err != nil {
		return fmt.Errorf("dumping database: %w", err)
	}
	log.Debug("SQL dump complete", "size", humanize.Bytes(uint64(sm.getFileSize(dumpPath))))

	// Step 4: Compress and encrypt the SQL dump
	compressedPath := filepath.Join(tempDir, "snapshot.sql.zst.age")
	if err := sm.compressDump(dumpPath, compressedPath); err != nil {
		return fmt.Errorf("compressing dump: %w", err)
	}
	log.Debug("Compression complete",
		"original_size", humanize.Bytes(uint64(sm.getFileSize(dumpPath))),
		"compressed_size", humanize.Bytes(uint64(sm.getFileSize(compressedPath))))

	// Step 5: Read compressed and encrypted data for upload
	finalData, err := afero.ReadFile(sm.fs, compressedPath)
	if err != nil {
		return fmt.Errorf("reading compressed dump: %w", err)
	}

	// Step 6: Generate blob manifest (before closing temp DB)
	blobManifest, err := sm.generateBlobManifest(ctx, tempDBPath, snapshotID)
	if err != nil {
		return fmt.Errorf("generating blob manifest: %w", err)
	}

	// Step 7: Upload to S3 in snapshot subdirectory
	// Upload database backup (compressed and encrypted)
	dbKey := fmt.Sprintf("metadata/%s/db.zst.age", snapshotID)

	dbUploadStart := time.Now()
	if err := sm.storage.Put(ctx, dbKey, bytes.NewReader(finalData)); err != nil {
		return fmt.Errorf("uploading snapshot database: %w", err)
	}
	dbUploadDuration := time.Since(dbUploadStart)
	dbUploadSpeed := float64(len(finalData)) * 8 / dbUploadDuration.Seconds() // bits per second
	log.Info("Uploaded snapshot database to S3",
		"path", dbKey,
		"size", humanize.Bytes(uint64(len(finalData))),
		"duration", dbUploadDuration,
		"speed", humanize.SI(dbUploadSpeed, "bps"))

	// Upload blob manifest (compressed only, not encrypted)
	manifestKey := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)
	manifestUploadStart := time.Now()
	if err := sm.storage.Put(ctx, manifestKey, bytes.NewReader(blobManifest)); err != nil {
		return fmt.Errorf("uploading blob manifest: %w", err)
	}
	manifestUploadDuration := time.Since(manifestUploadStart)
	manifestUploadSpeed := float64(len(blobManifest)) * 8 / manifestUploadDuration.Seconds() // bits per second
	log.Info("Uploaded blob manifest to S3",
		"path", manifestKey,
		"size", humanize.Bytes(uint64(len(blobManifest))),
		"duration", manifestUploadDuration,
		"speed", humanize.SI(manifestUploadSpeed, "bps"))

	log.Info("Uploaded snapshot metadata",
		"snapshot_id", snapshotID,
		"db_size", len(finalData),
		"manifest_size", len(blobManifest))
	return nil
}

// CleanupStats contains statistics about cleaned snapshot database
type CleanupStats struct {
	FileCount        int
	ChunkCount       int
	BlobCount        int
	CompressedSize   int64
	UncompressedSize int64
}

// cleanSnapshotDB removes all data except for the specified snapshot
//
// The cleanup is performed in a specific order to maintain referential integrity:
// 1. Delete other snapshots
// 2. Delete orphaned snapshot associations (snapshot_files, snapshot_blobs) for deleted snapshots
// 3. Delete orphaned files (not in the current snapshot)
// 4. Delete orphaned chunk-to-file mappings (references to deleted files)
// 5. Delete orphaned blobs (not in the current snapshot)
// 6. Delete orphaned blob-to-chunk mappings (references to deleted chunks)
// 7. Delete orphaned chunks (not referenced by any file)
//
// Each step is implemented as a separate method for clarity and maintainability.
func (sm *SnapshotManager) cleanSnapshotDB(ctx context.Context, dbPath string, snapshotID string) (*CleanupStats, error) {
	// Open the temp database
	db, err := database.New(ctx, dbPath)
	if err != nil {
		return nil, fmt.Errorf("opening temp database: %w", err)
	}
	defer func() {
		if err := db.Close(); err != nil {
			log.Debug("Failed to close temp database", "error", err)
		}
	}()

	// Start a transaction
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return nil, fmt.Errorf("beginning transaction: %w", err)
	}
	defer func() {
		if rbErr := tx.Rollback(); rbErr != nil && rbErr != sql.ErrTxDone {
			log.Debug("Failed to rollback transaction", "error", rbErr)
		}
	}()

	// Execute cleanup steps in order
	if err := sm.deleteOtherSnapshots(ctx, tx, snapshotID); err != nil {
		return nil, fmt.Errorf("step 1 - delete other snapshots: %w", err)
	}

	if err := sm.deleteOrphanedSnapshotAssociations(ctx, tx, snapshotID); err != nil {
		return nil, fmt.Errorf("step 2 - delete orphaned snapshot associations: %w", err)
	}

	if err := sm.deleteOrphanedFiles(ctx, tx, snapshotID); err != nil {
		return nil, fmt.Errorf("step 3 - delete orphaned files: %w", err)
	}

	if err := sm.deleteOrphanedChunkToFileMappings(ctx, tx); err != nil {
		return nil, fmt.Errorf("step 4 - delete orphaned chunk-to-file mappings: %w", err)
	}

	if err := sm.deleteOrphanedBlobs(ctx, tx, snapshotID); err != nil {
		return nil, fmt.Errorf("step 5 - delete orphaned blobs: %w", err)
	}

	if err := sm.deleteOrphanedBlobToChunkMappings(ctx, tx); err != nil {
		return nil, fmt.Errorf("step 6 - delete orphaned blob-to-chunk mappings: %w", err)
	}

	if err := sm.deleteOrphanedChunks(ctx, tx); err != nil {
		return nil, fmt.Errorf("step 7 - delete orphaned chunks: %w", err)
	}

	// Commit transaction
	log.Debug("[Temp DB Cleanup] Committing cleanup transaction")
	if err := tx.Commit(); err != nil {
		return nil, fmt.Errorf("committing transaction: %w", err)
	}

	// Collect statistics about the cleaned database
	stats := &CleanupStats{}

	// Count files
	var fileCount int
	err = db.QueryRowWithLog(ctx, "SELECT COUNT(*) FROM files").Scan(&fileCount)
	if err != nil {
		return nil, fmt.Errorf("counting files: %w", err)
	}
	stats.FileCount = fileCount

	// Count chunks
	var chunkCount int
	err = db.QueryRowWithLog(ctx, "SELECT COUNT(*) FROM chunks").Scan(&chunkCount)
	if err != nil {
		return nil, fmt.Errorf("counting chunks: %w", err)
	}
	stats.ChunkCount = chunkCount

	// Count blobs and get sizes
	var blobCount int
	var compressedSize, uncompressedSize sql.NullInt64
	err = db.QueryRowWithLog(ctx, `
		SELECT COUNT(*), COALESCE(SUM(compressed_size), 0), COALESCE(SUM(uncompressed_size), 0)
		FROM blobs
		WHERE blob_hash IN (SELECT blob_hash FROM snapshot_blobs WHERE snapshot_id = ?)
	`, snapshotID).Scan(&blobCount, &compressedSize, &uncompressedSize)
	if err != nil {
		return nil, fmt.Errorf("counting blobs and sizes: %w", err)
	}
	stats.BlobCount = blobCount
	stats.CompressedSize = compressedSize.Int64
	stats.UncompressedSize = uncompressedSize.Int64

	return stats, nil
}
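The seven-step ordering in `cleanSnapshotDB` matters because each step's subquery reads tables already trimmed by earlier steps: chunks can only be declared orphaned after the files (and file-chunk mappings) that referenced them are gone. As a hedged sketch of what step 7 (`deleteOrphanedChunks`) might issue — the `chunk_hash` column name is an assumption; only the `chunks` and `file_chunks` table names appear in the surrounding code:

```sql
-- Hypothetical sketch of step 7: remove chunks that no surviving
-- file_chunks row references after steps 1-6 have run.
DELETE FROM chunks
WHERE chunk_hash NOT IN (
    SELECT DISTINCT chunk_hash FROM file_chunks
);
```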
// dumpDatabase creates a SQL dump of the database
func (sm *SnapshotManager) dumpDatabase(dbPath, dumpPath string) error {
	log.Debug("Running sqlite3 dump command", "source", dbPath, "destination", dumpPath)
	cmd := exec.Command("sqlite3", dbPath, ".dump")

	output, err := cmd.Output()
	if err != nil {
		return fmt.Errorf("running sqlite3 dump: %w", err)
	}

	log.Debug("SQL dump generated", "size", humanize.Bytes(uint64(len(output))))
	if err := afero.WriteFile(sm.fs, dumpPath, output, 0644); err != nil {
		return fmt.Errorf("writing dump file: %w", err)
	}

	return nil
}

// compressDump compresses the SQL dump using zstd
func (sm *SnapshotManager) compressDump(inputPath, outputPath string) error {
	input, err := sm.fs.Open(inputPath)
	if err != nil {
		return fmt.Errorf("opening input file: %w", err)
	}
	defer func() {
		if err := input.Close(); err != nil {
			log.Debug("Failed to close input file", "path", inputPath, "error", err)
		}
	}()

	output, err := sm.fs.Create(outputPath)
	if err != nil {
		return fmt.Errorf("creating output file: %w", err)
	}
	defer func() {
		if err := output.Close(); err != nil {
			log.Debug("Failed to close output file", "path", outputPath, "error", err)
		}
	}()

	// Use blobgen for compression and encryption
	log.Debug("Compressing and encrypting data")
	writer, err := blobgen.NewWriter(output, sm.config.CompressionLevel, sm.config.AgeRecipients)
	if err != nil {
		return fmt.Errorf("creating blobgen writer: %w", err)
	}

	// Track if writer has been closed to avoid double-close
	writerClosed := false
	defer func() {
		if !writerClosed {
			if err := writer.Close(); err != nil {
				log.Debug("Failed to close writer", "error", err)
			}
		}
	}()

	if _, err := io.Copy(writer, input); err != nil {
		return fmt.Errorf("compressing data: %w", err)
	}

	// Close writer to flush all data
	if err := writer.Close(); err != nil {
		return fmt.Errorf("closing writer: %w", err)
	}
	writerClosed = true

	log.Debug("Compression complete", "hash", fmt.Sprintf("%x", writer.Sum256()))

	return nil
}

// copyFile copies a file from src to dst
func (sm *SnapshotManager) copyFile(src, dst string) error {
	log.Debug("Opening source file for copy", "path", src)
	sourceFile, err := sm.fs.Open(src)
	if err != nil {
		return err
	}
	defer func() {
		log.Debug("Closing source file", "path", src)
		if err := sourceFile.Close(); err != nil {
			log.Debug("Failed to close source file", "path", src, "error", err)
		}
	}()

	log.Debug("Creating destination file", "path", dst)
	destFile, err := sm.fs.Create(dst)
	if err != nil {
		return err
	}
	defer func() {
		log.Debug("Closing destination file", "path", dst)
		if err := destFile.Close(); err != nil {
			log.Debug("Failed to close destination file", "path", dst, "error", err)
		}
	}()

	log.Debug("Copying file data")
	n, err := io.Copy(destFile, sourceFile)
	if err != nil {
		return err
	}
	log.Debug("File copy complete", "bytes_copied", n)

	return nil
}

// generateBlobManifest creates a compressed JSON list of all blobs in the snapshot
func (sm *SnapshotManager) generateBlobManifest(ctx context.Context, dbPath string, snapshotID string) ([]byte, error) {

	// Open the cleaned database using the database package
	db, err := database.New(ctx, dbPath)
	if err != nil {
		return nil, fmt.Errorf("opening database: %w", err)
	}
	defer func() { _ = db.Close() }()

	// Create repositories to access the data
	repos := database.NewRepositories(db)

	// Get all blobs for this snapshot
	log.Debug("Querying blobs for snapshot", "snapshot_id", snapshotID)
	blobHashes, err := repos.Snapshots.GetBlobHashes(ctx, snapshotID)
	if err != nil {
		return nil, fmt.Errorf("getting snapshot blobs: %w", err)
	}
	log.Debug("Found blobs", "count", len(blobHashes))

	// Get blob details including sizes
	blobs := make([]BlobInfo, 0, len(blobHashes))
	totalCompressedSize := int64(0)

	for _, hash := range blobHashes {
		blob, err := repos.Blobs.GetByHash(ctx, hash)
		if err != nil {
			log.Warn("Failed to get blob details", "hash", hash, "error", err)
			continue
		}
		if blob != nil {
			blobs = append(blobs, BlobInfo{
				Hash:           hash,
				CompressedSize: blob.CompressedSize,
			})
			totalCompressedSize += blob.CompressedSize
		}
	}

	// Create manifest
	manifest := &Manifest{
		SnapshotID:          snapshotID,
		Timestamp:           time.Now().UTC().Format(time.RFC3339),
		BlobCount:           len(blobs),
		TotalCompressedSize: totalCompressedSize,
		Blobs:               blobs,
	}

	// Encode manifest
	compressedData, err := EncodeManifest(manifest, sm.config.CompressionLevel)
	if err != nil {
		return nil, fmt.Errorf("encoding manifest: %w", err)
	}

	log.Info("Generated blob manifest",
		"snapshot_id", snapshotID,
		"blob_count", len(blobs),
		"total_compressed_size", totalCompressedSize,
		"manifest_size", len(compressedData))

	return compressedData, nil
}

// compressData compresses data using zstd

// getFileSize returns the size of a file in bytes, or -1 if error
func (sm *SnapshotManager) getFileSize(path string) int64 {
	info, err := sm.fs.Stat(path)
	if err != nil {
		return -1
	}
	return info.Size()
}

// BackupStats contains statistics from a backup operation
type BackupStats struct {
	FilesScanned  int
	BytesScanned  int64
	ChunksCreated int
	BlobsCreated  int
	BytesUploaded int64
}

// ExtendedBackupStats contains additional statistics for comprehensive tracking
type ExtendedBackupStats struct {
	BackupStats
	BlobUncompressedSize int64 // Total uncompressed size of all referenced blobs
	CompressionLevel     int   // Compression level used for this snapshot
	UploadDurationMs     int64 // Total milliseconds spent uploading to S3
}

// CleanupIncompleteSnapshots removes incomplete snapshots that don't have metadata in S3.
// This is critical for data safety: incomplete snapshots can cause deduplication to skip
// files that were never successfully backed up, resulting in data loss.
func (sm *SnapshotManager) CleanupIncompleteSnapshots(ctx context.Context, hostname string) error {
	log.Info("Checking for incomplete snapshots", "hostname", hostname)

	// Get all incomplete snapshots for this hostname
	incompleteSnapshots, err := sm.repos.Snapshots.GetIncompleteByHostname(ctx, hostname)
	if err != nil {
		return fmt.Errorf("getting incomplete snapshots: %w", err)
	}

	if len(incompleteSnapshots) == 0 {
		log.Debug("No incomplete snapshots found")
		return nil
	}

	log.Info("Found incomplete snapshots", "count", len(incompleteSnapshots))

	// Check each incomplete snapshot for metadata in storage
	for _, snapshot := range incompleteSnapshots {
|
||||
// Check if metadata exists in storage
|
||||
metadataKey := fmt.Sprintf("metadata/%s/db.zst", snapshot.ID)
|
||||
_, err := sm.storage.Stat(ctx, metadataKey)
|
||||
|
||||
if err != nil {
|
||||
// Metadata doesn't exist in S3 - this is an incomplete snapshot
|
||||
log.Info("Cleaning up incomplete snapshot record", "snapshot_id", snapshot.ID, "started_at", snapshot.StartedAt)
|
||||
|
||||
// Delete the snapshot and all its associations
|
||||
if err := sm.deleteSnapshot(ctx, snapshot.ID); err != nil {
|
||||
return fmt.Errorf("deleting incomplete snapshot %s: %w", snapshot.ID, err)
|
||||
}
|
||||
|
||||
log.Info("Deleted incomplete snapshot record and associated data", "snapshot_id", snapshot.ID)
|
||||
} else {
|
||||
// Metadata exists - this snapshot was completed but database wasn't updated
|
||||
// This shouldn't happen in normal operation, but mark it complete
|
||||
log.Warn("Found snapshot with S3 metadata but incomplete in database", "snapshot_id", snapshot.ID)
|
||||
if err := sm.repos.Snapshots.MarkComplete(ctx, nil, snapshot.ID); err != nil {
|
||||
log.Error("Failed to mark snapshot as complete in database", "snapshot_id", snapshot.ID, "error", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteSnapshot removes a snapshot and all its associations from the database
|
||||
func (sm *SnapshotManager) deleteSnapshot(ctx context.Context, snapshotID string) error {
|
||||
// Delete snapshot_files entries
|
||||
if err := sm.repos.Snapshots.DeleteSnapshotFiles(ctx, snapshotID); err != nil {
|
||||
return fmt.Errorf("deleting snapshot files: %w", err)
|
||||
}
|
||||
|
||||
// Delete snapshot_blobs entries
|
||||
if err := sm.repos.Snapshots.DeleteSnapshotBlobs(ctx, snapshotID); err != nil {
|
||||
return fmt.Errorf("deleting snapshot blobs: %w", err)
|
||||
}
|
||||
|
||||
// Delete uploads entries (has foreign key to snapshots without CASCADE)
|
||||
if err := sm.repos.Snapshots.DeleteSnapshotUploads(ctx, snapshotID); err != nil {
|
||||
return fmt.Errorf("deleting snapshot uploads: %w", err)
|
||||
}
|
||||
|
||||
// Delete the snapshot itself
|
||||
if err := sm.repos.Snapshots.Delete(ctx, snapshotID); err != nil {
|
||||
return fmt.Errorf("deleting snapshot: %w", err)
|
||||
}
|
||||
|
||||
// Clean up orphaned data
|
||||
log.Debug("Cleaning up orphaned records in main database")
|
||||
if err := sm.cleanupOrphanedData(ctx); err != nil {
|
||||
return fmt.Errorf("cleaning up orphaned data: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// cleanupOrphanedData removes files, chunks, and blobs that are no longer referenced by any snapshot
|
||||
func (sm *SnapshotManager) cleanupOrphanedData(ctx context.Context) error {
|
||||
// Order is important to respect foreign key constraints:
|
||||
// 1. Delete orphaned files (will cascade delete file_chunks)
|
||||
// 2. Delete orphaned blobs (will cascade delete blob_chunks for deleted blobs)
|
||||
// 3. Delete orphaned blob_chunks (where blob exists but chunk doesn't)
|
||||
// 4. Delete orphaned chunks (now safe after all blob_chunks are gone)
|
||||
|
||||
// Delete orphaned files (files not in any snapshot)
|
||||
log.Debug("Deleting orphaned file records from database")
|
||||
if err := sm.repos.Files.DeleteOrphaned(ctx); err != nil {
|
||||
return fmt.Errorf("deleting orphaned files: %w", err)
|
||||
}
|
||||
|
||||
// Delete orphaned blobs (blobs not in any snapshot)
|
||||
// This will cascade delete blob_chunks for deleted blobs
|
||||
log.Debug("Deleting orphaned blob records from database")
|
||||
if err := sm.repos.Blobs.DeleteOrphaned(ctx); err != nil {
|
||||
return fmt.Errorf("deleting orphaned blobs: %w", err)
|
||||
}
|
||||
|
||||
// Delete orphaned blob_chunks entries
|
||||
// This handles cases where the blob still exists but chunks were deleted
|
||||
log.Debug("Deleting orphaned blob_chunks associations from database")
|
||||
if err := sm.repos.BlobChunks.DeleteOrphaned(ctx); err != nil {
|
||||
return fmt.Errorf("deleting orphaned blob_chunks: %w", err)
|
||||
}
|
||||
|
||||
// Delete orphaned chunks (chunks not referenced by any file)
|
||||
// This must come after cleaning up blob_chunks to avoid foreign key violations
|
||||
log.Debug("Deleting orphaned chunk records from database")
|
||||
if err := sm.repos.Chunks.DeleteOrphaned(ctx); err != nil {
|
||||
return fmt.Errorf("deleting orphaned chunks: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteOtherSnapshots deletes all snapshots except the current one
|
||||
func (sm *SnapshotManager) deleteOtherSnapshots(ctx context.Context, tx *sql.Tx, currentSnapshotID string) error {
|
||||
log.Debug("[Temp DB Cleanup] Deleting all snapshot records except current", "keeping", currentSnapshotID)
|
||||
|
||||
// First delete uploads that reference other snapshots (no CASCADE DELETE on this FK)
|
||||
database.LogSQL("Execute", "DELETE FROM uploads WHERE snapshot_id != ?", currentSnapshotID)
|
||||
uploadResult, err := tx.ExecContext(ctx, "DELETE FROM uploads WHERE snapshot_id != ?", currentSnapshotID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting uploads for other snapshots: %w", err)
|
||||
}
|
||||
uploadsDeleted, _ := uploadResult.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted upload records", "count", uploadsDeleted)
|
||||
|
||||
// Now we can safely delete the snapshots
|
||||
database.LogSQL("Execute", "DELETE FROM snapshots WHERE id != ?", currentSnapshotID)
|
||||
result, err := tx.ExecContext(ctx, "DELETE FROM snapshots WHERE id != ?", currentSnapshotID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting other snapshots: %w", err)
|
||||
}
|
||||
rowsAffected, _ := result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted snapshot records from database", "count", rowsAffected)
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteOrphanedSnapshotAssociations deletes snapshot_files and snapshot_blobs for deleted snapshots
|
||||
func (sm *SnapshotManager) deleteOrphanedSnapshotAssociations(ctx context.Context, tx *sql.Tx, currentSnapshotID string) error {
|
||||
// Delete orphaned snapshot_files
|
||||
log.Debug("[Temp DB Cleanup] Deleting orphaned snapshot_files associations")
|
||||
database.LogSQL("Execute", "DELETE FROM snapshot_files WHERE snapshot_id != ?", currentSnapshotID)
|
||||
result, err := tx.ExecContext(ctx, "DELETE FROM snapshot_files WHERE snapshot_id != ?", currentSnapshotID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting orphaned snapshot_files: %w", err)
|
||||
}
|
||||
rowsAffected, _ := result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted snapshot_files associations", "count", rowsAffected)
|
||||
|
||||
// Delete orphaned snapshot_blobs
|
||||
log.Debug("[Temp DB Cleanup] Deleting orphaned snapshot_blobs associations")
|
||||
database.LogSQL("Execute", "DELETE FROM snapshot_blobs WHERE snapshot_id != ?", currentSnapshotID)
|
||||
result, err = tx.ExecContext(ctx, "DELETE FROM snapshot_blobs WHERE snapshot_id != ?", currentSnapshotID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting orphaned snapshot_blobs: %w", err)
|
||||
}
|
||||
rowsAffected, _ = result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted snapshot_blobs associations", "count", rowsAffected)
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteOrphanedFiles deletes files not in the current snapshot
|
||||
func (sm *SnapshotManager) deleteOrphanedFiles(ctx context.Context, tx *sql.Tx, currentSnapshotID string) error {
|
||||
log.Debug("[Temp DB Cleanup] Deleting file records not referenced by current snapshot")
|
||||
database.LogSQL("Execute", `DELETE FROM files WHERE NOT EXISTS (SELECT 1 FROM snapshot_files WHERE snapshot_files.file_id = files.id AND snapshot_files.snapshot_id = ?)`, currentSnapshotID)
|
||||
result, err := tx.ExecContext(ctx, `
|
||||
DELETE FROM files
|
||||
WHERE NOT EXISTS (
|
||||
SELECT 1 FROM snapshot_files
|
||||
WHERE snapshot_files.file_id = files.id
|
||||
AND snapshot_files.snapshot_id = ?
|
||||
)`, currentSnapshotID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting orphaned files: %w", err)
|
||||
}
|
||||
rowsAffected, _ := result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted file records from database", "count", rowsAffected)
|
||||
|
||||
// Note: file_chunks will be deleted via CASCADE
|
||||
log.Debug("[Temp DB Cleanup] file_chunks associations deleted via CASCADE")
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteOrphanedChunkToFileMappings deletes chunk_files entries for deleted files
|
||||
func (sm *SnapshotManager) deleteOrphanedChunkToFileMappings(ctx context.Context, tx *sql.Tx) error {
|
||||
log.Debug("[Temp DB Cleanup] Deleting orphaned chunk_files associations")
|
||||
database.LogSQL("Execute", `DELETE FROM chunk_files WHERE NOT EXISTS (SELECT 1 FROM files WHERE files.id = chunk_files.file_id)`)
|
||||
result, err := tx.ExecContext(ctx, `
|
||||
DELETE FROM chunk_files
|
||||
WHERE NOT EXISTS (
|
||||
SELECT 1 FROM files
|
||||
WHERE files.id = chunk_files.file_id
|
||||
)`)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting orphaned chunk_files: %w", err)
|
||||
}
|
||||
rowsAffected, _ := result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted chunk_files associations", "count", rowsAffected)
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteOrphanedBlobs deletes blobs not in the current snapshot
|
||||
func (sm *SnapshotManager) deleteOrphanedBlobs(ctx context.Context, tx *sql.Tx, currentSnapshotID string) error {
|
||||
log.Debug("[Temp DB Cleanup] Deleting blob records not referenced by current snapshot")
|
||||
database.LogSQL("Execute", `DELETE FROM blobs WHERE NOT EXISTS (SELECT 1 FROM snapshot_blobs WHERE snapshot_blobs.blob_hash = blobs.blob_hash AND snapshot_blobs.snapshot_id = ?)`, currentSnapshotID)
|
||||
result, err := tx.ExecContext(ctx, `
|
||||
DELETE FROM blobs
|
||||
WHERE NOT EXISTS (
|
||||
SELECT 1 FROM snapshot_blobs
|
||||
WHERE snapshot_blobs.blob_hash = blobs.blob_hash
|
||||
AND snapshot_blobs.snapshot_id = ?
|
||||
)`, currentSnapshotID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting orphaned blobs: %w", err)
|
||||
}
|
||||
rowsAffected, _ := result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted blob records from database", "count", rowsAffected)
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteOrphanedBlobToChunkMappings deletes blob_chunks entries for deleted blobs
|
||||
func (sm *SnapshotManager) deleteOrphanedBlobToChunkMappings(ctx context.Context, tx *sql.Tx) error {
|
||||
log.Debug("[Temp DB Cleanup] Deleting orphaned blob_chunks associations")
|
||||
database.LogSQL("Execute", `DELETE FROM blob_chunks WHERE NOT EXISTS (SELECT 1 FROM blobs WHERE blobs.id = blob_chunks.blob_id)`)
|
||||
result, err := tx.ExecContext(ctx, `
|
||||
DELETE FROM blob_chunks
|
||||
WHERE NOT EXISTS (
|
||||
SELECT 1 FROM blobs
|
||||
WHERE blobs.id = blob_chunks.blob_id
|
||||
)`)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting orphaned blob_chunks: %w", err)
|
||||
}
|
||||
rowsAffected, _ := result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted blob_chunks associations", "count", rowsAffected)
|
||||
return nil
|
||||
}
|
||||
|
||||
// deleteOrphanedChunks deletes chunks not referenced by any file or blob
|
||||
func (sm *SnapshotManager) deleteOrphanedChunks(ctx context.Context, tx *sql.Tx) error {
|
||||
log.Debug("[Temp DB Cleanup] Deleting orphaned chunk records")
|
||||
query := `
|
||||
DELETE FROM chunks
|
||||
WHERE NOT EXISTS (
|
||||
SELECT 1 FROM file_chunks
|
||||
WHERE file_chunks.chunk_hash = chunks.chunk_hash
|
||||
)
|
||||
AND NOT EXISTS (
|
||||
SELECT 1 FROM blob_chunks
|
||||
WHERE blob_chunks.chunk_hash = chunks.chunk_hash
|
||||
)`
|
||||
database.LogSQL("Execute", query)
|
||||
result, err := tx.ExecContext(ctx, query)
|
||||
if err != nil {
|
||||
return fmt.Errorf("deleting orphaned chunks: %w", err)
|
||||
}
|
||||
rowsAffected, _ := result.RowsAffected()
|
||||
log.Debug("[Temp DB Cleanup] Deleted chunk records from database", "count", rowsAffected)
|
||||
return nil
|
||||
}
|
||||
internal/snapshot/snapshot_test.go — new file (188 lines)
@@ -0,0 +1,188 @@
```go
package snapshot

import (
	"context"
	"database/sql"
	"io"
	"path/filepath"
	"testing"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"github.com/spf13/afero"
)

const (
	// Test age public key for encryption
	testAgeRecipient = "age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"
)

// copyFile is a test helper to copy files using afero
func copyFile(fs afero.Fs, src, dst string) error {
	sourceFile, err := fs.Open(src)
	if err != nil {
		return err
	}
	defer func() { _ = sourceFile.Close() }()

	destFile, err := fs.Create(dst)
	if err != nil {
		return err
	}
	defer func() { _ = destFile.Close() }()

	_, err = io.Copy(destFile, sourceFile)
	return err
}

func TestCleanSnapshotDBEmptySnapshot(t *testing.T) {
	// Initialize logger
	log.Initialize(log.Config{})

	ctx := context.Background()
	fs := afero.NewOsFs()

	// Create a test database
	tempDir := t.TempDir()
	dbPath := filepath.Join(tempDir, "test.db")
	db, err := database.New(ctx, dbPath)
	if err != nil {
		t.Fatalf("failed to create database: %v", err)
	}

	repos := database.NewRepositories(db)

	// Create an empty snapshot
	snapshot := &database.Snapshot{
		ID:       "empty-snapshot",
		Hostname: "test-host",
	}

	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	if err != nil {
		t.Fatalf("failed to create snapshot: %v", err)
	}

	// Create some files and chunks not associated with any snapshot
	file := &database.File{Path: "/orphan/file.txt", Size: 1000}
	chunk := &database.Chunk{ChunkHash: "orphan-chunk", Size: 500}

	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		if err := repos.Files.Create(ctx, tx, file); err != nil {
			return err
		}
		return repos.Chunks.Create(ctx, tx, chunk)
	})
	if err != nil {
		t.Fatalf("failed to create orphan data: %v", err)
	}

	// Close the database
	if err := db.Close(); err != nil {
		t.Fatalf("failed to close database: %v", err)
	}

	// Copy database
	tempDBPath := filepath.Join(tempDir, "temp.db")
	if err := copyFile(fs, dbPath, tempDBPath); err != nil {
		t.Fatalf("failed to copy database: %v", err)
	}

	// Create a mock config for testing
	cfg := &config.Config{
		CompressionLevel: 3,
		AgeRecipients:    []string{testAgeRecipient},
	}
	// Create SnapshotManager with filesystem
	sm := &SnapshotManager{
		config: cfg,
		fs:     fs,
	}
	if _, err := sm.cleanSnapshotDB(ctx, tempDBPath, snapshot.ID); err != nil {
		t.Fatalf("failed to clean snapshot database: %v", err)
	}

	// Verify the cleaned database
	cleanedDB, err := database.New(ctx, tempDBPath)
	if err != nil {
		t.Fatalf("failed to open cleaned database: %v", err)
	}
	defer func() {
		if err := cleanedDB.Close(); err != nil {
			t.Errorf("failed to close database: %v", err)
		}
	}()

	cleanedRepos := database.NewRepositories(cleanedDB)

	// Verify snapshot exists
	verifySnapshot, err := cleanedRepos.Snapshots.GetByID(ctx, snapshot.ID)
	if err != nil {
		t.Fatalf("failed to get snapshot: %v", err)
	}
	if verifySnapshot == nil {
		t.Error("snapshot should exist")
	}

	// Verify orphan file is gone
	f, err := cleanedRepos.Files.GetByPath(ctx, file.Path)
	if err != nil {
		t.Fatalf("failed to check file: %v", err)
	}
	if f != nil {
		t.Error("orphan file should not exist")
	}

	// Verify orphan chunk is gone
	c, err := cleanedRepos.Chunks.GetByHash(ctx, chunk.ChunkHash)
	if err != nil {
		t.Fatalf("failed to check chunk: %v", err)
	}
	if c != nil {
		t.Error("orphan chunk should not exist")
	}
}

func TestCleanSnapshotDBNonExistentSnapshot(t *testing.T) {
	// Initialize logger
	log.Initialize(log.Config{})

	ctx := context.Background()
	fs := afero.NewOsFs()

	// Create a test database
	tempDir := t.TempDir()
	dbPath := filepath.Join(tempDir, "test.db")
	db, err := database.New(ctx, dbPath)
	if err != nil {
		t.Fatalf("failed to create database: %v", err)
	}

	// Close immediately
	if err := db.Close(); err != nil {
		t.Fatalf("failed to close database: %v", err)
	}

	// Copy database
	tempDBPath := filepath.Join(tempDir, "temp.db")
	if err := copyFile(fs, dbPath, tempDBPath); err != nil {
		t.Fatalf("failed to copy database: %v", err)
	}

	// Create a mock config for testing
	cfg := &config.Config{
		CompressionLevel: 3,
		AgeRecipients:    []string{testAgeRecipient},
	}
	// Try to clean with non-existent snapshot
	sm := &SnapshotManager{config: cfg, fs: fs}
	_, err = sm.cleanSnapshotDB(ctx, tempDBPath, "non-existent-snapshot")

	// Should not error - it will just delete everything
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
}
```
internal/storage/file.go — new file (262 lines)
@@ -0,0 +1,262 @@
```go
package storage

import (
	"context"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"

	"github.com/spf13/afero"
)

// FileStorer implements Storer using the local filesystem.
// It mirrors the S3 path structure for consistency.
type FileStorer struct {
	fs       afero.Fs
	basePath string
}

// NewFileStorer creates a new filesystem storage backend.
// The basePath directory will be created if it doesn't exist.
// Uses the real OS filesystem by default; call SetFilesystem to override for testing.
func NewFileStorer(basePath string) (*FileStorer, error) {
	fs := afero.NewOsFs()
	// Ensure base path exists
	if err := fs.MkdirAll(basePath, 0755); err != nil {
		return nil, fmt.Errorf("creating base path: %w", err)
	}
	return &FileStorer{
		fs:       fs,
		basePath: basePath,
	}, nil
}

// SetFilesystem overrides the filesystem for testing.
func (f *FileStorer) SetFilesystem(fs afero.Fs) {
	f.fs = fs
}

// fullPath returns the full filesystem path for a key.
func (f *FileStorer) fullPath(key string) string {
	return filepath.Join(f.basePath, key)
}

// Put stores data at the specified key.
func (f *FileStorer) Put(ctx context.Context, key string, data io.Reader) error {
	path := f.fullPath(key)

	// Create parent directories
	dir := filepath.Dir(path)
	if err := f.fs.MkdirAll(dir, 0755); err != nil {
		return fmt.Errorf("creating directories: %w", err)
	}

	file, err := f.fs.Create(path)
	if err != nil {
		return fmt.Errorf("creating file: %w", err)
	}
	defer func() { _ = file.Close() }()

	if _, err := io.Copy(file, data); err != nil {
		return fmt.Errorf("writing file: %w", err)
	}

	return nil
}

// PutWithProgress stores data with progress reporting.
func (f *FileStorer) PutWithProgress(ctx context.Context, key string, data io.Reader, size int64, progress ProgressCallback) error {
	path := f.fullPath(key)

	// Create parent directories
	dir := filepath.Dir(path)
	if err := f.fs.MkdirAll(dir, 0755); err != nil {
		return fmt.Errorf("creating directories: %w", err)
	}

	file, err := f.fs.Create(path)
	if err != nil {
		return fmt.Errorf("creating file: %w", err)
	}
	defer func() { _ = file.Close() }()

	// Wrap with progress tracking
	pw := &progressWriter{
		writer:   file,
		callback: progress,
	}

	if _, err := io.Copy(pw, data); err != nil {
		return fmt.Errorf("writing file: %w", err)
	}

	return nil
}

// Get retrieves data from the specified key.
func (f *FileStorer) Get(ctx context.Context, key string) (io.ReadCloser, error) {
	path := f.fullPath(key)
	file, err := f.fs.Open(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, ErrNotFound
		}
		return nil, fmt.Errorf("opening file: %w", err)
	}
	return file, nil
}

// Stat returns metadata about an object without retrieving its contents.
func (f *FileStorer) Stat(ctx context.Context, key string) (*ObjectInfo, error) {
	path := f.fullPath(key)
	info, err := f.fs.Stat(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, ErrNotFound
		}
		return nil, fmt.Errorf("stat file: %w", err)
	}
	return &ObjectInfo{
		Key:  key,
		Size: info.Size(),
	}, nil
}

// Delete removes an object.
func (f *FileStorer) Delete(ctx context.Context, key string) error {
	path := f.fullPath(key)
	err := f.fs.Remove(path)
	if os.IsNotExist(err) {
		return nil // Match S3 behavior: no error if doesn't exist
	}
	if err != nil {
		return fmt.Errorf("removing file: %w", err)
	}
	return nil
}

// List returns all keys with the given prefix.
func (f *FileStorer) List(ctx context.Context, prefix string) ([]string, error) {
	var keys []string
	basePath := f.fullPath(prefix)

	// Check if base path exists
	exists, err := afero.Exists(f.fs, basePath)
	if err != nil {
		return nil, fmt.Errorf("checking path: %w", err)
	}
	if !exists {
		return keys, nil // Empty list for non-existent prefix
	}

	err = afero.Walk(f.fs, basePath, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}

		// Check context cancellation
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		if !info.IsDir() {
			// Convert back to key (relative path from basePath)
			relPath, err := filepath.Rel(f.basePath, path)
			if err != nil {
				return fmt.Errorf("computing relative path: %w", err)
			}
			// Normalize path separators to forward slashes for consistency
			relPath = strings.ReplaceAll(relPath, string(filepath.Separator), "/")
			keys = append(keys, relPath)
		}
		return nil
	})

	if err != nil {
		return nil, fmt.Errorf("walking directory: %w", err)
	}

	return keys, nil
}

// ListStream returns a channel of ObjectInfo for large result sets.
func (f *FileStorer) ListStream(ctx context.Context, prefix string) <-chan ObjectInfo {
	ch := make(chan ObjectInfo)
	go func() {
		defer close(ch)
		basePath := f.fullPath(prefix)

		// Check if base path exists
		exists, err := afero.Exists(f.fs, basePath)
		if err != nil {
			ch <- ObjectInfo{Err: fmt.Errorf("checking path: %w", err)}
			return
		}
		if !exists {
			return // Empty channel for non-existent prefix
		}

		_ = afero.Walk(f.fs, basePath, func(path string, info os.FileInfo, err error) error {
			// Check context cancellation
			select {
			case <-ctx.Done():
				ch <- ObjectInfo{Err: ctx.Err()}
				return ctx.Err()
			default:
			}

			if err != nil {
				ch <- ObjectInfo{Err: err}
				return nil // Continue walking despite errors
			}

			if !info.IsDir() {
				relPath, err := filepath.Rel(f.basePath, path)
				if err != nil {
					ch <- ObjectInfo{Err: fmt.Errorf("computing relative path: %w", err)}
					return nil
				}
				// Normalize path separators
				relPath = strings.ReplaceAll(relPath, string(filepath.Separator), "/")
				ch <- ObjectInfo{
					Key:  relPath,
					Size: info.Size(),
				}
			}
			return nil
		})
	}()
	return ch
}

// Info returns human-readable storage location information.
func (f *FileStorer) Info() StorageInfo {
	return StorageInfo{
		Type:     "file",
		Location: f.basePath,
	}
}

// progressWriter wraps an io.Writer to track write progress.
type progressWriter struct {
	writer   io.Writer
	written  int64
	callback ProgressCallback
}

func (pw *progressWriter) Write(p []byte) (int, error) {
	n, err := pw.writer.Write(p)
	if n > 0 {
		pw.written += int64(n)
		if pw.callback != nil {
			if callbackErr := pw.callback(pw.written); callbackErr != nil {
				return n, callbackErr
			}
		}
	}
	return n, err
}
```
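FileStorer maps storage keys onto local paths with `filepath.Join`, and `List` converts walked paths back into keys, normalizing the OS separator to `/`. A small standalone sketch of that round-trip mapping (helper names `keyToPath`/`pathToKey` are invented for illustration):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// keyToPath mirrors FileStorer.fullPath: join the base path with the key.
func keyToPath(basePath, key string) string {
	return filepath.Join(basePath, key)
}

// pathToKey mirrors the conversion in FileStorer.List: take the path
// relative to the base, then normalize separators to forward slashes.
func pathToKey(basePath, path string) (string, error) {
	rel, err := filepath.Rel(basePath, path)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(rel, string(filepath.Separator), "/"), nil
}

func main() {
	base := "/var/vaultik"
	p := keyToPath(base, "metadata/snap-1/db.zst")
	k, err := pathToKey(base, p)
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
	fmt.Println(k)
}
```

On a Unix filesystem the key survives the round trip unchanged; on Windows the separator normalization is what keeps keys consistent with the S3 layout.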
internal/storage/module.go — new file (110 lines)
@@ -0,0 +1,110 @@
```go
package storage

import (
	"context"
	"fmt"
	"strings"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/s3"
	"go.uber.org/fx"
)

// Module exports storage functionality as an fx module.
// It provides a Storer implementation based on the configured storage URL
// or falls back to legacy S3 configuration.
var Module = fx.Module("storage",
	fx.Provide(NewStorer),
)

// NewStorer creates a Storer based on configuration.
// If StorageURL is set, it uses URL-based configuration.
// Otherwise, it falls back to legacy S3 configuration.
func NewStorer(cfg *config.Config) (Storer, error) {
	if cfg.StorageURL != "" {
		return storerFromURL(cfg.StorageURL, cfg)
	}
	return storerFromLegacyS3Config(cfg)
}

func storerFromURL(rawURL string, cfg *config.Config) (Storer, error) {
	parsed, err := ParseStorageURL(rawURL)
	if err != nil {
		return nil, fmt.Errorf("parsing storage URL: %w", err)
	}

	switch parsed.Scheme {
	case "file":
		return NewFileStorer(parsed.Prefix)

	case "s3":
		// Build endpoint URL
		endpoint := parsed.Endpoint
		if endpoint == "" {
			endpoint = "s3.amazonaws.com"
		}

		// Add protocol if not present
		if parsed.UseSSL && !strings.HasPrefix(endpoint, "https://") && !strings.HasPrefix(endpoint, "http://") {
			endpoint = "https://" + endpoint
		} else if !parsed.UseSSL && !strings.HasPrefix(endpoint, "http://") && !strings.HasPrefix(endpoint, "https://") {
			endpoint = "http://" + endpoint
		}

		region := parsed.Region
		if region == "" {
			region = cfg.S3.Region
			if region == "" {
				region = "us-east-1"
			}
		}

		// Credentials come from config (not URL for security)
		client, err := s3.NewClient(context.Background(), s3.Config{
			Endpoint:        endpoint,
			Bucket:          parsed.Bucket,
			Prefix:          parsed.Prefix,
			AccessKeyID:     cfg.S3.AccessKeyID,
			SecretAccessKey: cfg.S3.SecretAccessKey,
			Region:          region,
		})
		if err != nil {
			return nil, fmt.Errorf("creating S3 client: %w", err)
		}
		return NewS3Storer(client), nil

	default:
		return nil, fmt.Errorf("unsupported storage scheme: %s", parsed.Scheme)
	}
}

func storerFromLegacyS3Config(cfg *config.Config) (Storer, error) {
	endpoint := cfg.S3.Endpoint

	// Ensure protocol is present
	if !strings.HasPrefix(endpoint, "http://") && !strings.HasPrefix(endpoint, "https://") {
		if cfg.S3.UseSSL {
			endpoint = "https://" + endpoint
		} else {
			endpoint = "http://" + endpoint
		}
	}

	region := cfg.S3.Region
	if region == "" {
		region = "us-east-1"
	}

	client, err := s3.NewClient(context.Background(), s3.Config{
		Endpoint:        endpoint,
		Bucket:          cfg.S3.Bucket,
		Prefix:          cfg.S3.Prefix,
		AccessKeyID:     cfg.S3.AccessKeyID,
		SecretAccessKey: cfg.S3.SecretAccessKey,
		Region:          region,
	})
	if err != nil {
		return nil, fmt.Errorf("creating S3 client: %w", err)
	}
	return NewS3Storer(client), nil
}
```
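Both `storerFromURL` and `storerFromLegacyS3Config` apply the same rule to bare endpoints: prepend a scheme based on the `UseSSL` flag, and leave endpoints that already carry one untouched. A standalone sketch of that rule (the function name `normalizeEndpoint` is illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeEndpoint reproduces the scheme-prefixing logic used by the
// storer constructors: endpoints that already start with http:// or
// https:// pass through; otherwise useSSL picks the scheme.
func normalizeEndpoint(endpoint string, useSSL bool) string {
	if strings.HasPrefix(endpoint, "http://") || strings.HasPrefix(endpoint, "https://") {
		return endpoint
	}
	if useSSL {
		return "https://" + endpoint
	}
	return "http://" + endpoint
}

func main() {
	fmt.Println(normalizeEndpoint("s3.amazonaws.com", true))
	fmt.Println(normalizeEndpoint("localhost:9000", false))
	fmt.Println(normalizeEndpoint("https://minio.example.com", false))
}
```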
internal/storage/s3.go — new file (85 lines)
@@ -0,0 +1,85 @@
```go
package storage

import (
	"context"
	"fmt"
	"io"

	"git.eeqj.de/sneak/vaultik/internal/s3"
)

// S3Storer wraps the existing s3.Client to implement Storer.
type S3Storer struct {
	client *s3.Client
}

// NewS3Storer creates a new S3 storage backend.
func NewS3Storer(client *s3.Client) *S3Storer {
	return &S3Storer{client: client}
}

// Put stores data at the specified key.
func (s *S3Storer) Put(ctx context.Context, key string, data io.Reader) error {
	return s.client.PutObject(ctx, key, data)
}

// PutWithProgress stores data with progress reporting.
func (s *S3Storer) PutWithProgress(ctx context.Context, key string, data io.Reader, size int64, progress ProgressCallback) error {
	// Convert storage.ProgressCallback to s3.ProgressCallback
	var s3Progress s3.ProgressCallback
	if progress != nil {
		s3Progress = s3.ProgressCallback(progress)
	}
	return s.client.PutObjectWithProgress(ctx, key, data, size, s3Progress)
}

// Get retrieves data from the specified key.
func (s *S3Storer) Get(ctx context.Context, key string) (io.ReadCloser, error) {
	return s.client.GetObject(ctx, key)
}

// Stat returns metadata about an object without retrieving its contents.
func (s *S3Storer) Stat(ctx context.Context, key string) (*ObjectInfo, error) {
	info, err := s.client.StatObject(ctx, key)
	if err != nil {
		return nil, err
	}
	return &ObjectInfo{
		Key:  info.Key,
		Size: info.Size,
	}, nil
}

// Delete removes an object.
func (s *S3Storer) Delete(ctx context.Context, key string) error {
	return s.client.DeleteObject(ctx, key)
}

// List returns all keys with the given prefix.
func (s *S3Storer) List(ctx context.Context, prefix string) ([]string, error) {
	return s.client.ListObjects(ctx, prefix)
}

// ListStream returns a channel of ObjectInfo for large result sets.
func (s *S3Storer) ListStream(ctx context.Context, prefix string) <-chan ObjectInfo {
	ch := make(chan ObjectInfo)
	go func() {
		defer close(ch)
```
|
||||
for info := range s.client.ListObjectsStream(ctx, prefix, false) {
|
||||
ch <- ObjectInfo{
|
||||
Key: info.Key,
|
||||
Size: info.Size,
|
||||
Err: info.Err,
|
||||
}
|
||||
}
|
||||
}()
|
||||
return ch
|
||||
}
|
||||
|
||||
// Info returns human-readable storage location information.
|
||||
func (s *S3Storer) Info() StorageInfo {
|
||||
return StorageInfo{
|
||||
Type: "s3",
|
||||
Location: fmt.Sprintf("%s/%s", s.client.Endpoint(), s.client.BucketName()),
|
||||
}
|
||||
}
|
||||
internal/storage/storer.go (Normal file, 74 lines)
@@ -0,0 +1,74 @@
// Package storage provides a unified interface for storage backends.
// It supports both S3-compatible object storage and local filesystem storage,
// allowing Vaultik to store backups in either location with the same API.
//
// Storage backends are selected via URL:
//   - s3://bucket/prefix?endpoint=host&region=r - S3-compatible storage
//   - file:///path/to/backup - Local filesystem storage
//
// Both backends implement the Storer interface and support progress reporting
// during upload/write operations.
package storage

import (
	"context"
	"errors"
	"io"
)

// ErrNotFound is returned when an object does not exist.
var ErrNotFound = errors.New("object not found")

// ProgressCallback is called during storage operations with bytes transferred so far.
// Return an error to cancel the operation.
type ProgressCallback func(bytesTransferred int64) error

// ObjectInfo contains metadata about a stored object.
type ObjectInfo struct {
	Key  string // Object key/path
	Size int64  // Size in bytes
	Err  error  // Error for streaming results (nil on success)
}

// StorageInfo provides human-readable storage configuration.
type StorageInfo struct {
	Type     string // "s3" or "file"
	Location string // endpoint/bucket for S3, base path for filesystem
}

// Storer defines the interface for storage backends.
// All paths are relative to the storage root (bucket/prefix for S3, base directory for filesystem).
type Storer interface {
	// Put stores data at the specified key.
	// Parent directories are created automatically for filesystem backends.
	Put(ctx context.Context, key string, data io.Reader) error

	// PutWithProgress stores data with progress reporting.
	// Size must be the exact size of the data to store.
	// The progress callback is called periodically with bytes transferred.
	PutWithProgress(ctx context.Context, key string, data io.Reader, size int64, progress ProgressCallback) error

	// Get retrieves data from the specified key.
	// The caller must close the returned ReadCloser.
	// Returns ErrNotFound if the object does not exist.
	Get(ctx context.Context, key string) (io.ReadCloser, error)

	// Stat returns metadata about an object without retrieving its contents.
	// Returns ErrNotFound if the object does not exist.
	Stat(ctx context.Context, key string) (*ObjectInfo, error)

	// Delete removes an object. No error is returned if the object doesn't exist.
	Delete(ctx context.Context, key string) error

	// List returns all keys with the given prefix.
	// For large result sets, prefer ListStream.
	List(ctx context.Context, prefix string) ([]string, error)

	// ListStream returns a channel of ObjectInfo for large result sets.
	// The channel is closed when listing completes.
	// If an error occurs during listing, the final item will have Err set.
	ListStream(ctx context.Context, prefix string) <-chan ObjectInfo

	// Info returns human-readable storage location information.
	Info() StorageInfo
}
internal/storage/url.go (Normal file, 90 lines)
@@ -0,0 +1,90 @@
package storage

import (
	"fmt"
	"net/url"
	"strings"
)

// StorageURL represents a parsed storage URL.
type StorageURL struct {
	Scheme   string // "s3" or "file"
	Bucket   string // S3 bucket name (empty for file)
	Prefix   string // Path within bucket or filesystem base path
	Endpoint string // S3 endpoint (optional, default AWS)
	Region   string // S3 region (optional)
	UseSSL   bool   // Use HTTPS for S3 (default true)
}

// ParseStorageURL parses a storage URL string.
// Supported formats:
//   - s3://bucket/prefix?endpoint=host&region=us-east-1&ssl=true
//   - file:///absolute/path/to/backup
func ParseStorageURL(rawURL string) (*StorageURL, error) {
	if rawURL == "" {
		return nil, fmt.Errorf("storage URL is empty")
	}

	// Handle file:// URLs
	if strings.HasPrefix(rawURL, "file://") {
		path := strings.TrimPrefix(rawURL, "file://")
		if path == "" {
			return nil, fmt.Errorf("file URL path is empty")
		}
		return &StorageURL{
			Scheme: "file",
			Prefix: path,
		}, nil
	}

	// Handle s3:// URLs
	if strings.HasPrefix(rawURL, "s3://") {
		u, err := url.Parse(rawURL)
		if err != nil {
			return nil, fmt.Errorf("invalid URL: %w", err)
		}

		bucket := u.Host
		if bucket == "" {
			return nil, fmt.Errorf("s3 URL missing bucket name")
		}

		prefix := strings.TrimPrefix(u.Path, "/")

		query := u.Query()
		useSSL := true
		if query.Get("ssl") == "false" {
			useSSL = false
		}

		return &StorageURL{
			Scheme:   "s3",
			Bucket:   bucket,
			Prefix:   prefix,
			Endpoint: query.Get("endpoint"),
			Region:   query.Get("region"),
			UseSSL:   useSSL,
		}, nil
	}

	return nil, fmt.Errorf("unsupported URL scheme: must start with s3:// or file://")
}

// String returns a human-readable representation of the storage URL.
func (u *StorageURL) String() string {
	switch u.Scheme {
	case "file":
		return fmt.Sprintf("file://%s", u.Prefix)
	case "s3":
		endpoint := u.Endpoint
		if endpoint == "" {
			endpoint = "s3.amazonaws.com"
		}
		if u.Prefix != "" {
			return fmt.Sprintf("s3://%s/%s (endpoint: %s)", u.Bucket, u.Prefix, endpoint)
		}
		return fmt.Sprintf("s3://%s (endpoint: %s)", u.Bucket, endpoint)
	default:
		return fmt.Sprintf("%s://?", u.Scheme)
	}
}
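The parsing rules above can be sketched as a self-contained program. The function below reproduces the same logic (bucket from the URL host, prefix from the path, options from the query string, `ssl=false` to disable HTTPS) with lowercase hypothetical names so it compiles outside the package:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// storageURL mirrors the exported StorageURL fields used below.
type storageURL struct {
	Scheme, Bucket, Prefix, Endpoint, Region string
	UseSSL                                   bool
}

// parseStorageURL reproduces the rules from url.go: file:// keeps everything
// after the scheme as the prefix; s3:// takes the bucket from the host and
// endpoint/region/ssl from the query string.
func parseStorageURL(rawURL string) (*storageURL, error) {
	if strings.HasPrefix(rawURL, "file://") {
		return &storageURL{Scheme: "file", Prefix: strings.TrimPrefix(rawURL, "file://")}, nil
	}
	if strings.HasPrefix(rawURL, "s3://") {
		u, err := url.Parse(rawURL)
		if err != nil {
			return nil, err
		}
		q := u.Query()
		return &storageURL{
			Scheme:   "s3",
			Bucket:   u.Host,
			Prefix:   strings.TrimPrefix(u.Path, "/"),
			Endpoint: q.Get("endpoint"),
			Region:   q.Get("region"),
			UseSSL:   q.Get("ssl") != "false",
		}, nil
	}
	return nil, fmt.Errorf("unsupported URL scheme")
}

func main() {
	u, _ := parseStorageURL("s3://my-bucket/backups?endpoint=minio.local:9000&ssl=false")
	fmt.Println(u.Bucket, u.Prefix, u.Endpoint, u.UseSSL) // my-bucket backups minio.local:9000 false

	f, _ := parseStorageURL("file:///var/backups/vaultik")
	fmt.Println(f.Scheme, f.Prefix) // file /var/backups/vaultik
}
```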
internal/vaultik/helpers.go (Normal file, 103 lines)
@@ -0,0 +1,103 @@
package vaultik

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// SnapshotInfo contains information about a snapshot
type SnapshotInfo struct {
	ID             string    `json:"id"`
	Timestamp      time.Time `json:"timestamp"`
	CompressedSize int64     `json:"compressed_size"`
}

// formatNumber formats a number with commas
func formatNumber(n int) string {
	str := fmt.Sprintf("%d", n)
	var result []string
	for i, digit := range str {
		if i > 0 && (len(str)-i)%3 == 0 {
			result = append(result, ",")
		}
		result = append(result, string(digit))
	}
	return strings.Join(result, "")
}

// formatDuration formats a duration in a human-readable way
func formatDuration(d time.Duration) string {
	if d < time.Second {
		return fmt.Sprintf("%dms", d.Milliseconds())
	}
	if d < time.Minute {
		return fmt.Sprintf("%.1fs", d.Seconds())
	}
	if d < time.Hour {
		mins := int(d.Minutes())
		secs := int(d.Seconds()) % 60
		return fmt.Sprintf("%dm %ds", mins, secs)
	}
	hours := int(d.Hours())
	mins := int(d.Minutes()) % 60
	return fmt.Sprintf("%dh %dm", hours, mins)
}

// formatBytes formats bytes in a human-readable format
func formatBytes(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

// parseSnapshotTimestamp extracts the timestamp from a snapshot ID
func parseSnapshotTimestamp(snapshotID string) (time.Time, error) {
	// Format: hostname-YYYYMMDD-HHMMSSZ
	parts := strings.Split(snapshotID, "-")
	if len(parts) < 3 {
		return time.Time{}, fmt.Errorf("invalid snapshot ID format")
	}

	dateStr := parts[len(parts)-2]
	timeStr := parts[len(parts)-1]

	if len(dateStr) != 8 || len(timeStr) != 7 || !strings.HasSuffix(timeStr, "Z") {
		return time.Time{}, fmt.Errorf("invalid timestamp format")
	}

	// Remove Z suffix
	timeStr = timeStr[:6]

	// Parse the timestamp
	timestamp, err := time.Parse("20060102150405", dateStr+timeStr)
	if err != nil {
		return time.Time{}, fmt.Errorf("failed to parse timestamp: %w", err)
	}

	return timestamp.UTC(), nil
}

// parseDuration parses a duration string with support for days
func parseDuration(s string) (time.Duration, error) {
	// Check for days suffix
	if strings.HasSuffix(s, "d") {
		daysStr := strings.TrimSuffix(s, "d")
		days, err := strconv.Atoi(daysStr)
		if err != nil {
			return 0, fmt.Errorf("invalid days value: %w", err)
		}
		return time.Duration(days) * 24 * time.Hour, nil
	}

	// Otherwise use standard Go duration parsing
	return time.ParseDuration(s)
}
internal/vaultik/info.go (Normal file, 101 lines)
@@ -0,0 +1,101 @@
package vaultik

import (
	"fmt"
	"runtime"
	"strings"

	"github.com/dustin/go-humanize"
)

// ShowInfo displays system and configuration information
func (v *Vaultik) ShowInfo() error {
	// System Information
	fmt.Printf("=== System Information ===\n")
	fmt.Printf("OS/Architecture: %s/%s\n", runtime.GOOS, runtime.GOARCH)
	fmt.Printf("Version: %s\n", v.Globals.Version)
	fmt.Printf("Commit: %s\n", v.Globals.Commit)
	fmt.Printf("Go Version: %s\n", runtime.Version())
	fmt.Println()

	// Storage Configuration
	fmt.Printf("=== Storage Configuration ===\n")
	fmt.Printf("S3 Bucket: %s\n", v.Config.S3.Bucket)
	if v.Config.S3.Prefix != "" {
		fmt.Printf("S3 Prefix: %s\n", v.Config.S3.Prefix)
	}
	fmt.Printf("S3 Endpoint: %s\n", v.Config.S3.Endpoint)
	fmt.Printf("S3 Region: %s\n", v.Config.S3.Region)
	fmt.Println()

	// Backup Settings
	fmt.Printf("=== Backup Settings ===\n")
	fmt.Printf("Source Directories:\n")
	for _, dir := range v.Config.SourceDirs {
		fmt.Printf("  - %s\n", dir)
	}

	// Global exclude patterns
	if len(v.Config.Exclude) > 0 {
		fmt.Printf("Exclude Patterns: %s\n", strings.Join(v.Config.Exclude, ", "))
	}

	fmt.Printf("Compression: zstd level %d\n", v.Config.CompressionLevel)
	fmt.Printf("Chunk Size: %s\n", humanize.Bytes(uint64(v.Config.ChunkSize)))
	fmt.Printf("Blob Size Limit: %s\n", humanize.Bytes(uint64(v.Config.BlobSizeLimit)))
	fmt.Println()

	// Encryption Configuration
	fmt.Printf("=== Encryption Configuration ===\n")
	fmt.Printf("Recipients:\n")
	for _, recipient := range v.Config.AgeRecipients {
		fmt.Printf("  - %s\n", recipient)
	}
	fmt.Println()

	// Daemon Settings (if applicable)
	if v.Config.BackupInterval > 0 || v.Config.MinTimeBetweenRun > 0 {
		fmt.Printf("=== Daemon Settings ===\n")
		if v.Config.BackupInterval > 0 {
			fmt.Printf("Backup Interval: %s\n", v.Config.BackupInterval)
		}
		if v.Config.MinTimeBetweenRun > 0 {
			fmt.Printf("Minimum Time: %s\n", v.Config.MinTimeBetweenRun)
		}
		fmt.Println()
	}

	// Local Database
	fmt.Printf("=== Local Database ===\n")
	fmt.Printf("Index Path: %s\n", v.Config.IndexPath)

	// Check if index file exists and get its size
	if info, err := v.Fs.Stat(v.Config.IndexPath); err == nil {
		fmt.Printf("Index Size: %s\n", humanize.Bytes(uint64(info.Size())))

		// Get snapshot count from database
		query := `SELECT COUNT(*) FROM snapshots WHERE completed_at IS NOT NULL`
		var snapshotCount int
		if err := v.DB.Conn().QueryRowContext(v.ctx, query).Scan(&snapshotCount); err == nil {
			fmt.Printf("Snapshots: %d\n", snapshotCount)
		}

		// Get blob count from database
		query = `SELECT COUNT(*) FROM blobs`
		var blobCount int
		if err := v.DB.Conn().QueryRowContext(v.ctx, query).Scan(&blobCount); err == nil {
			fmt.Printf("Blobs: %d\n", blobCount)
		}

		// Get file count from database
		query = `SELECT COUNT(*) FROM files`
		var fileCount int
		if err := v.DB.Conn().QueryRowContext(v.ctx, query).Scan(&fileCount); err == nil {
			fmt.Printf("Files: %d\n", fileCount)
		}
	} else {
		fmt.Printf("Index Size: (not created)\n")
	}

	return nil
}
internal/vaultik/integration_test.go (Normal file, 400 lines)
@@ -0,0 +1,400 @@
package vaultik_test

import (
	"bytes"
	"context"
	"database/sql"
	"io"
	"sync"
	"testing"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/config"
	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/snapshot"
	"git.eeqj.de/sneak/vaultik/internal/storage"
	"github.com/spf13/afero"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// MockStorer implements storage.Storer for testing
type MockStorer struct {
	mu    sync.Mutex
	data  map[string][]byte
	calls []string
}

func NewMockStorer() *MockStorer {
	return &MockStorer{
		data:  make(map[string][]byte),
		calls: make([]string, 0),
	}
}

func (m *MockStorer) Put(ctx context.Context, key string, reader io.Reader) error {
	m.mu.Lock()
	defer m.mu.Unlock()

	m.calls = append(m.calls, "Put:"+key)
	data, err := io.ReadAll(reader)
	if err != nil {
		return err
	}
	m.data[key] = data
	return nil
}

func (m *MockStorer) PutWithProgress(ctx context.Context, key string, reader io.Reader, size int64, progress storage.ProgressCallback) error {
	return m.Put(ctx, key, reader)
}

func (m *MockStorer) Get(ctx context.Context, key string) (io.ReadCloser, error) {
	m.mu.Lock()
	defer m.mu.Unlock()

	m.calls = append(m.calls, "Get:"+key)
	data, exists := m.data[key]
	if !exists {
		return nil, storage.ErrNotFound
	}
	return io.NopCloser(bytes.NewReader(data)), nil
}

func (m *MockStorer) Stat(ctx context.Context, key string) (*storage.ObjectInfo, error) {
	m.mu.Lock()
	defer m.mu.Unlock()

	m.calls = append(m.calls, "Stat:"+key)
	data, exists := m.data[key]
	if !exists {
		return nil, storage.ErrNotFound
	}
	return &storage.ObjectInfo{
		Key:  key,
		Size: int64(len(data)),
	}, nil
}

func (m *MockStorer) Delete(ctx context.Context, key string) error {
	m.mu.Lock()
	defer m.mu.Unlock()

	m.calls = append(m.calls, "Delete:"+key)
	delete(m.data, key)
	return nil
}

func (m *MockStorer) List(ctx context.Context, prefix string) ([]string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()

	m.calls = append(m.calls, "List:"+prefix)
	var keys []string
	for key := range m.data {
		if len(prefix) == 0 || (len(key) >= len(prefix) && key[:len(prefix)] == prefix) {
			keys = append(keys, key)
		}
	}
	return keys, nil
}

func (m *MockStorer) ListStream(ctx context.Context, prefix string) <-chan storage.ObjectInfo {
	ch := make(chan storage.ObjectInfo)
	go func() {
		defer close(ch)
		m.mu.Lock()
		defer m.mu.Unlock()

		for key, data := range m.data {
			if len(prefix) == 0 || (len(key) >= len(prefix) && key[:len(prefix)] == prefix) {
				ch <- storage.ObjectInfo{
					Key:  key,
					Size: int64(len(data)),
				}
			}
		}
	}()
	return ch
}

func (m *MockStorer) Info() storage.StorageInfo {
	return storage.StorageInfo{
		Type:     "mock",
		Location: "memory",
	}
}

// GetCalls returns the list of operations that were called
func (m *MockStorer) GetCalls() []string {
	m.mu.Lock()
	defer m.mu.Unlock()

	calls := make([]string, len(m.calls))
	copy(calls, m.calls)
	return calls
}

// GetStorageSize returns the number of objects in storage
func (m *MockStorer) GetStorageSize() int {
	m.mu.Lock()
	defer m.mu.Unlock()

	return len(m.data)
}

// TestEndToEndBackup tests the full backup workflow with mocked dependencies
func TestEndToEndBackup(t *testing.T) {
	// Initialize logger
	log.Initialize(log.Config{})

	// Create in-memory filesystem
	fs := afero.NewMemMapFs()

	// Create test directory structure and files
	testFiles := map[string]string{
		"/home/user/documents/file1.txt": "This is file 1 content",
		"/home/user/documents/file2.txt": "This is file 2 content with more data",
		"/home/user/pictures/photo1.jpg": "Binary photo data here...",
		"/home/user/code/main.go":        "package main\n\nfunc main() {\n\tprintln(\"Hello, World!\")\n}",
	}

	// Create all directories first
	dirs := []string{
		"/home/user/documents",
		"/home/user/pictures",
		"/home/user/code",
	}
	for _, dir := range dirs {
		if err := fs.MkdirAll(dir, 0755); err != nil {
			t.Fatalf("failed to create directory %s: %v", dir, err)
		}
	}

	// Create test files
	for path, content := range testFiles {
		if err := afero.WriteFile(fs, path, []byte(content), 0644); err != nil {
			t.Fatalf("failed to create test file %s: %v", path, err)
		}
	}

	// Create mock storage
	mockStorage := NewMockStorer()

	// Create test configuration
	cfg := &config.Config{
		SourceDirs:       []string{"/home/user"},
		Exclude:          []string{"*.tmp", "*.log"},
		ChunkSize:        config.Size(16 * 1024),  // 16KB chunks
		BlobSizeLimit:    config.Size(100 * 1024), // 100KB blobs
		CompressionLevel: 3,
		AgeRecipients:    []string{"age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"},                  // Test public key
		AgeSecretKey:     "AGE-SECRET-KEY-19CR5YSFW59HM4TLD6GXVEDMZFTVVF7PPHKUT68TXSFPK7APHXA2QS2NJA5",               // Test private key
		S3: config.S3Config{
			Endpoint:        "http://localhost:9000", // MinIO endpoint for testing
			Region:          "us-east-1",
			Bucket:          "test-bucket",
			AccessKeyID:     "test-access",
			SecretAccessKey: "test-secret",
		},
		IndexPath: ":memory:", // In-memory SQLite database
	}

	// For a true end-to-end test, we'll create a simpler test that focuses on
	// the core backup logic using the scanner directly with our mock storage
	ctx := context.Background()

	// Create in-memory database
	db, err := database.New(ctx, ":memory:")
	require.NoError(t, err)
	defer func() {
		if err := db.Close(); err != nil {
			t.Errorf("failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create scanner with mock storage
	scanner := snapshot.NewScanner(snapshot.ScannerConfig{
		FS:               fs,
		ChunkSize:        cfg.ChunkSize.Int64(),
		Repositories:     repos,
		Storage:          mockStorage,
		MaxBlobSize:      cfg.BlobSizeLimit.Int64(),
		CompressionLevel: cfg.CompressionLevel,
		AgeRecipients:    cfg.AgeRecipients,
		EnableProgress:   false,
	})

	// Create a snapshot record
	snapshotID := "test-snapshot-001"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:             snapshotID,
			Hostname:       "test-host",
			VaultikVersion: "test-version",
			StartedAt:      time.Now(),
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	require.NoError(t, err)

	// Run the backup scan
	result, err := scanner.Scan(ctx, "/home/user", snapshotID)
	require.NoError(t, err)

	// Verify scan results
	// The scanner counts both files and directories, so we have:
	// 4 files + 5 directories (/home, /home/user, /home/user/documents, /home/user/pictures, /home/user/code)
	assert.GreaterOrEqual(t, result.FilesScanned, 4, "Should scan at least 4 files")
	assert.Greater(t, result.BytesScanned, int64(0), "Should scan some bytes")
	assert.Greater(t, result.ChunksCreated, 0, "Should create chunks")
	assert.Greater(t, result.BlobsCreated, 0, "Should create blobs")

	// Verify storage operations
	calls := mockStorage.GetCalls()
	t.Logf("Storage operations performed: %v", calls)

	// Should have uploaded at least one blob
	blobUploads := 0
	for _, call := range calls {
		if len(call) > 4 && call[:4] == "Put:" {
			if len(call) > 10 && call[4:10] == "blobs/" {
				blobUploads++
			}
		}
	}
	assert.Greater(t, blobUploads, 0, "Should upload at least one blob")

	// Verify files in database
	files, err := repos.Files.ListByPrefix(ctx, "/home/user")
	require.NoError(t, err)
	// Count only regular files (not directories)
	regularFiles := 0
	for _, f := range files {
		if f.Mode&0x80000000 == 0 { // Check if regular file (not directory)
			regularFiles++
		}
	}
	assert.Equal(t, 4, regularFiles, "Should have 4 regular files in database")

	// Verify chunks were created by checking a specific file
	fileChunks, err := repos.FileChunks.GetByPath(ctx, "/home/user/documents/file1.txt")
	require.NoError(t, err)
	assert.Greater(t, len(fileChunks), 0, "Should have chunks for file1.txt")

	// Verify blobs were uploaded to storage
	assert.Greater(t, mockStorage.GetStorageSize(), 0, "Should have blobs in storage")

	// Complete the snapshot - just verify we got results
	// In a real integration test, we'd update the snapshot record

	// Create snapshot manager to test metadata export
	snapshotManager := &snapshot.SnapshotManager{}
	snapshotManager.SetFilesystem(fs)

	// Note: We can't fully test snapshot metadata export without a proper S3 client mock
	// that implements all required methods. This would require refactoring the S3 client
	// interface to be more testable.

	t.Logf("Backup completed successfully:")
	t.Logf("  Files scanned: %d", result.FilesScanned)
	t.Logf("  Bytes scanned: %d", result.BytesScanned)
	t.Logf("  Chunks created: %d", result.ChunksCreated)
	t.Logf("  Blobs created: %d", result.BlobsCreated)
	t.Logf("  Storage size: %d objects", mockStorage.GetStorageSize())
}

// TestBackupAndVerify tests backing up files and verifying the blobs
func TestBackupAndVerify(t *testing.T) {
	// Initialize logger
	log.Initialize(log.Config{})

	// Create in-memory filesystem
	fs := afero.NewMemMapFs()

	// Create test files
	testContent := "This is a test file with some content that should be backed up"
	err := fs.MkdirAll("/data", 0755)
	require.NoError(t, err)
	err = afero.WriteFile(fs, "/data/test.txt", []byte(testContent), 0644)
	require.NoError(t, err)

	// Create mock storage
	mockStorage := NewMockStorer()

	// Create test database
	ctx := context.Background()
	db, err := database.New(ctx, ":memory:")
	require.NoError(t, err)
	defer func() {
		if err := db.Close(); err != nil {
			t.Errorf("failed to close database: %v", err)
		}
	}()

	repos := database.NewRepositories(db)

	// Create scanner
	scanner := snapshot.NewScanner(snapshot.ScannerConfig{
		FS:               fs,
		ChunkSize:        int64(1024 * 16), // 16KB chunks
		Repositories:     repos,
		Storage:          mockStorage,
		MaxBlobSize:      int64(1024 * 1024), // 1MB blobs
		CompressionLevel: 3,
		AgeRecipients:    []string{"age1ezrjmfpwsc95svdg0y54mums3zevgzu0x0ecq2f7tp8a05gl0sjq9q9wjg"}, // Test public key
	})

	// Create a snapshot
	snapshotID := "test-snapshot-001"
	err = repos.WithTx(ctx, func(ctx context.Context, tx *sql.Tx) error {
		snapshot := &database.Snapshot{
			ID:             snapshotID,
			Hostname:       "test-host",
			VaultikVersion: "test-version",
			StartedAt:      time.Now(),
		}
		return repos.Snapshots.Create(ctx, tx, snapshot)
	})
	require.NoError(t, err)

	// Run the backup
	result, err := scanner.Scan(ctx, "/data", snapshotID)
	require.NoError(t, err)

	// Verify backup created blobs
	assert.Greater(t, result.BlobsCreated, 0, "Should create at least one blob")
	assert.Equal(t, mockStorage.GetStorageSize(), result.BlobsCreated, "Storage should have the blobs")

	// Verify we can retrieve the blob from storage
	objects, err := mockStorage.List(ctx, "blobs/")
	require.NoError(t, err)
	assert.Len(t, objects, result.BlobsCreated, "Should have correct number of blobs in storage")

	// Get the first blob and verify it exists
	if len(objects) > 0 {
		blobKey := objects[0]
		t.Logf("Verifying blob: %s", blobKey)

		// Get blob info
		blobInfo, err := mockStorage.Stat(ctx, blobKey)
		require.NoError(t, err)
		assert.Greater(t, blobInfo.Size, int64(0), "Blob should have content")

		// Get blob content
		reader, err := mockStorage.Get(ctx, blobKey)
		require.NoError(t, err)
		defer func() { _ = reader.Close() }()

		// Verify blob data is encrypted (should not contain plaintext)
		blobData, err := io.ReadAll(reader)
		require.NoError(t, err)
		assert.NotContains(t, string(blobData), testContent, "Blob should be encrypted")
		assert.Greater(t, len(blobData), 0, "Blob should have data")
	}

	t.Logf("Backup and verify test completed successfully")
}
169
internal/vaultik/prune.go
Normal file
169
internal/vaultik/prune.go
Normal file
@@ -0,0 +1,169 @@
```go
package vaultik

import (
	"fmt"
	"strings"

	"git.eeqj.de/sneak/vaultik/internal/log"
	"github.com/dustin/go-humanize"
)

// PruneOptions contains options for the prune command
type PruneOptions struct {
	Force bool
}

// PruneBlobs removes unreferenced blobs from storage
func (v *Vaultik) PruneBlobs(opts *PruneOptions) error {
	log.Info("Starting prune operation")

	// Get all remote snapshots and their manifests
	allBlobsReferenced := make(map[string]bool)
	manifestCount := 0

	// List all snapshots in storage
	log.Info("Listing remote snapshots")
	objectCh := v.Storage.ListStream(v.ctx, "metadata/")

	var snapshotIDs []string
	for object := range objectCh {
		if object.Err != nil {
			return fmt.Errorf("listing remote snapshots: %w", object.Err)
		}

		// Extract snapshot ID from paths like metadata/hostname-20240115-143052Z/
		parts := strings.Split(object.Key, "/")
		if len(parts) >= 2 && parts[0] == "metadata" && parts[1] != "" {
			// Check if this is a directory by looking for trailing slash
			if strings.HasSuffix(object.Key, "/") || strings.Contains(object.Key, "/manifest.json.zst") {
				snapshotID := parts[1]
				// Only add unique snapshot IDs
				found := false
				for _, id := range snapshotIDs {
					if id == snapshotID {
						found = true
						break
					}
				}
				if !found {
					snapshotIDs = append(snapshotIDs, snapshotID)
				}
			}
		}
	}

	log.Info("Found manifests in remote storage", "count", len(snapshotIDs))

	// Download and parse each manifest to get referenced blobs
	for _, snapshotID := range snapshotIDs {
		log.Debug("Processing manifest", "snapshot_id", snapshotID)

		manifest, err := v.downloadManifest(snapshotID)
		if err != nil {
			log.Error("Failed to download manifest", "snapshot_id", snapshotID, "error", err)
			continue
		}

		// Add all blobs from this manifest to our referenced set
		for _, blob := range manifest.Blobs {
			allBlobsReferenced[blob.Hash] = true
		}
		manifestCount++
	}

	log.Info("Processed manifests", "count", manifestCount, "unique_blobs_referenced", len(allBlobsReferenced))

	// List all blobs in storage
	log.Info("Listing all blobs in storage")
	allBlobs := make(map[string]int64) // hash -> size
	blobObjectCh := v.Storage.ListStream(v.ctx, "blobs/")

	for object := range blobObjectCh {
		if object.Err != nil {
			return fmt.Errorf("listing blobs: %w", object.Err)
		}

		// Extract hash from path like blobs/ab/cd/abcdef123456...
		parts := strings.Split(object.Key, "/")
		if len(parts) == 4 && parts[0] == "blobs" {
			hash := parts[3]
			allBlobs[hash] = object.Size
		}
	}

	log.Info("Found blobs in storage", "count", len(allBlobs))

	// Find unreferenced blobs
	var unreferencedBlobs []string
	var totalSize int64
	for hash, size := range allBlobs {
		if !allBlobsReferenced[hash] {
			unreferencedBlobs = append(unreferencedBlobs, hash)
			totalSize += size
		}
	}

	if len(unreferencedBlobs) == 0 {
		log.Info("No unreferenced blobs found")
		fmt.Println("No unreferenced blobs to remove.")
		return nil
	}

	// Show what will be deleted
	log.Info("Found unreferenced blobs", "count", len(unreferencedBlobs), "total_size", humanize.Bytes(uint64(totalSize)))
	fmt.Printf("Found %d unreferenced blob(s) totaling %s\n", len(unreferencedBlobs), humanize.Bytes(uint64(totalSize)))

	// Confirm unless --force is used
	if !opts.Force {
		fmt.Printf("\nDelete %d unreferenced blob(s)? [y/N] ", len(unreferencedBlobs))
		var confirm string
		if _, err := fmt.Scanln(&confirm); err != nil {
			// Treat EOF or error as "no"
			fmt.Println("Cancelled")
			return nil
		}
		if strings.ToLower(confirm) != "y" {
			fmt.Println("Cancelled")
			return nil
		}
	}

	// Delete unreferenced blobs
	log.Info("Deleting unreferenced blobs")
	deletedCount := 0
	deletedSize := int64(0)

	for i, hash := range unreferencedBlobs {
		blobPath := fmt.Sprintf("blobs/%s/%s/%s", hash[:2], hash[2:4], hash)

		if err := v.Storage.Delete(v.ctx, blobPath); err != nil {
			log.Error("Failed to delete blob", "hash", hash, "error", err)
			continue
		}

		deletedCount++
		deletedSize += allBlobs[hash]

		// Progress update every 100 blobs
		if (i+1)%100 == 0 || i == len(unreferencedBlobs)-1 {
			log.Info("Deletion progress",
				"deleted", i+1,
				"total", len(unreferencedBlobs),
				"percent", fmt.Sprintf("%.1f%%", float64(i+1)/float64(len(unreferencedBlobs))*100),
			)
		}
	}

	log.Info("Prune complete",
		"deleted_count", deletedCount,
		"deleted_size", humanize.Bytes(uint64(deletedSize)),
		"failed", len(unreferencedBlobs)-deletedCount,
	)

	fmt.Printf("\nDeleted %d blob(s) totaling %s\n", deletedCount, humanize.Bytes(uint64(deletedSize)))
	if deletedCount < len(unreferencedBlobs) {
		fmt.Printf("Failed to delete %d blob(s)\n", len(unreferencedBlobs)-deletedCount)
	}

	return nil
}
```
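The two-level sharding of blob keys (`blobs/ab/cd/<hash>`) used by the delete loop above, and the inverse `len(parts) == 4` extraction done during listing, can be exercised in isolation. This is a minimal sketch; `blobKey` and `hashFromKey` are illustrative helper names, not functions from the vaultik codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// blobKey builds the sharded object key for a blob hash: the first
// two and next two hex characters become directory levels, keeping
// any single listing prefix small.
func blobKey(hash string) string {
	return fmt.Sprintf("blobs/%s/%s/%s", hash[:2], hash[2:4], hash)
}

// hashFromKey recovers the hash from a sharded key, mirroring the
// len(parts) == 4 check used when listing blobs.
func hashFromKey(key string) (string, bool) {
	parts := strings.Split(key, "/")
	if len(parts) == 4 && parts[0] == "blobs" {
		return parts[3], true
	}
	return "", false
}

func main() {
	key := blobKey("abcdef123456")
	fmt.Println(key) // blobs/ab/cd/abcdef123456
	h, ok := hashFromKey(key)
	fmt.Println(h, ok)
}
```

Round-tripping a key through both helpers returns the original hash, which is why the prune pass can treat object listings and manifest hashes as the same namespace.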
701
internal/vaultik/snapshot.go
Normal file
@@ -0,0 +1,701 @@
```go
package vaultik

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"text/tabwriter"
	"time"

	"git.eeqj.de/sneak/vaultik/internal/database"
	"git.eeqj.de/sneak/vaultik/internal/log"
	"git.eeqj.de/sneak/vaultik/internal/snapshot"
	"github.com/dustin/go-humanize"
)

// SnapshotCreateOptions contains options for the snapshot create command
type SnapshotCreateOptions struct {
	Daemon bool
	Cron   bool
	Prune  bool
}

// CreateSnapshot executes the snapshot creation operation
func (v *Vaultik) CreateSnapshot(opts *SnapshotCreateOptions) error {
	snapshotStartTime := time.Now()

	log.Info("Starting snapshot creation",
		"version", v.Globals.Version,
		"commit", v.Globals.Commit,
		"index_path", v.Config.IndexPath,
	)

	// Clean up incomplete snapshots FIRST, before any scanning.
	// This is critical for data safety - see CleanupIncompleteSnapshots for details.
	hostname := v.Config.Hostname
	if hostname == "" {
		hostname, _ = os.Hostname()
	}

	// CRITICAL: This MUST succeed. If we fail to clean up incomplete snapshots,
	// the deduplication logic will think files from the incomplete snapshot were
	// already backed up and skip them, resulting in data loss.
	if err := v.SnapshotManager.CleanupIncompleteSnapshots(v.ctx, hostname); err != nil {
		return fmt.Errorf("cleanup incomplete snapshots: %w", err)
	}

	if opts.Daemon {
		log.Info("Running in daemon mode")
		// TODO: Implement daemon mode with inotify
		return fmt.Errorf("daemon mode not yet implemented")
	}

	// Resolve source directories to absolute paths
	resolvedDirs := make([]string, 0, len(v.Config.SourceDirs))
	for _, dir := range v.Config.SourceDirs {
		absPath, err := filepath.Abs(dir)
		if err != nil {
			return fmt.Errorf("failed to resolve absolute path for %s: %w", dir, err)
		}

		// Resolve symlinks
		resolvedPath, err := filepath.EvalSymlinks(absPath)
		if err != nil {
			// If the path doesn't exist yet, use the absolute path
			if os.IsNotExist(err) {
				resolvedPath = absPath
			} else {
				return fmt.Errorf("failed to resolve symlinks for %s: %w", absPath, err)
			}
		}

		resolvedDirs = append(resolvedDirs, resolvedPath)
	}

	// Create scanner with progress enabled (unless in cron mode)
	scanner := v.ScannerFactory(snapshot.ScannerParams{
		EnableProgress: !opts.Cron,
		Fs:             v.Fs,
	})

	// Statistics tracking
	totalFiles := 0
	totalBytes := int64(0)
	totalChunks := 0
	totalBlobs := 0
	totalBytesSkipped := int64(0)
	totalFilesSkipped := 0
	totalFilesDeleted := 0
	totalBytesDeleted := int64(0)
	totalBytesUploaded := int64(0)
	totalBlobsUploaded := 0
	uploadDuration := time.Duration(0)

	// Create a new snapshot at the beginning
	snapshotID, err := v.SnapshotManager.CreateSnapshot(v.ctx, hostname, v.Globals.Version, v.Globals.Commit)
	if err != nil {
		return fmt.Errorf("creating snapshot: %w", err)
	}
	log.Info("Beginning snapshot", "snapshot_id", snapshotID)
	_, _ = fmt.Fprintf(v.Stdout, "Beginning snapshot: %s\n", snapshotID)

	for i, dir := range resolvedDirs {
		// Check if context is cancelled
		select {
		case <-v.ctx.Done():
			log.Info("Snapshot creation cancelled")
			return v.ctx.Err()
		default:
		}

		log.Info("Scanning directory", "path", dir)
		_, _ = fmt.Fprintf(v.Stdout, "Beginning directory scan (%d/%d): %s\n", i+1, len(resolvedDirs), dir)
		result, err := scanner.Scan(v.ctx, dir, snapshotID)
		if err != nil {
			return fmt.Errorf("failed to scan %s: %w", dir, err)
		}

		totalFiles += result.FilesScanned
		totalBytes += result.BytesScanned
		totalChunks += result.ChunksCreated
		totalBlobs += result.BlobsCreated
		totalFilesSkipped += result.FilesSkipped
		totalBytesSkipped += result.BytesSkipped
		totalFilesDeleted += result.FilesDeleted
		totalBytesDeleted += result.BytesDeleted

		log.Info("Directory scan complete",
			"path", dir,
			"files", result.FilesScanned,
			"files_skipped", result.FilesSkipped,
			"bytes", result.BytesScanned,
			"bytes_skipped", result.BytesSkipped,
			"chunks", result.ChunksCreated,
			"blobs", result.BlobsCreated,
			"duration", result.EndTime.Sub(result.StartTime))

		// No per-directory summary here - the scanner already prints its own summary
	}

	// Get upload statistics from scanner progress if available
	if s := scanner.GetProgress(); s != nil {
		stats := s.GetStats()
		totalBytesUploaded = stats.BytesUploaded.Load()
		totalBlobsUploaded = int(stats.BlobsUploaded.Load())
		uploadDuration = time.Duration(stats.UploadDurationMs.Load()) * time.Millisecond
	}

	// Update snapshot statistics with extended fields
	extStats := snapshot.ExtendedBackupStats{
		BackupStats: snapshot.BackupStats{
			FilesScanned:  totalFiles,
			BytesScanned:  totalBytes,
			ChunksCreated: totalChunks,
			BlobsCreated:  totalBlobs,
			BytesUploaded: totalBytesUploaded,
		},
		BlobUncompressedSize: 0, // Will be set from database query below
		CompressionLevel:     v.Config.CompressionLevel,
		UploadDurationMs:     uploadDuration.Milliseconds(),
	}

	if err := v.SnapshotManager.UpdateSnapshotStatsExtended(v.ctx, snapshotID, extStats); err != nil {
		return fmt.Errorf("updating snapshot stats: %w", err)
	}

	// Mark snapshot as complete
	if err := v.SnapshotManager.CompleteSnapshot(v.ctx, snapshotID); err != nil {
		return fmt.Errorf("completing snapshot: %w", err)
	}

	// Export snapshot metadata without closing the database.
	// The export function should handle its own database connection.
	if err := v.SnapshotManager.ExportSnapshotMetadata(v.ctx, v.Config.IndexPath, snapshotID); err != nil {
		return fmt.Errorf("exporting snapshot metadata: %w", err)
	}

	// Calculate final statistics
	snapshotDuration := time.Since(snapshotStartTime)
	totalFilesChanged := totalFiles - totalFilesSkipped
	totalBytesChanged := totalBytes
	totalBytesAll := totalBytes + totalBytesSkipped

	// Calculate upload speed
	var avgUploadSpeed string
	if totalBytesUploaded > 0 && uploadDuration > 0 {
		bytesPerSec := float64(totalBytesUploaded) / uploadDuration.Seconds()
		bitsPerSec := bytesPerSec * 8
		if bitsPerSec >= 1e9 {
			avgUploadSpeed = fmt.Sprintf("%.1f Gbit/s", bitsPerSec/1e9)
		} else if bitsPerSec >= 1e6 {
			avgUploadSpeed = fmt.Sprintf("%.0f Mbit/s", bitsPerSec/1e6)
		} else if bitsPerSec >= 1e3 {
			avgUploadSpeed = fmt.Sprintf("%.0f Kbit/s", bitsPerSec/1e3)
		} else {
			avgUploadSpeed = fmt.Sprintf("%.0f bit/s", bitsPerSec)
		}
	} else {
		avgUploadSpeed = "N/A"
	}

	// Get total blob sizes from database
	totalBlobSizeCompressed := int64(0)
	totalBlobSizeUncompressed := int64(0)
	if blobHashes, err := v.Repositories.Snapshots.GetBlobHashes(v.ctx, snapshotID); err == nil {
		for _, hash := range blobHashes {
			if blob, err := v.Repositories.Blobs.GetByHash(v.ctx, hash); err == nil && blob != nil {
				totalBlobSizeCompressed += blob.CompressedSize
				totalBlobSizeUncompressed += blob.UncompressedSize
			}
		}
	}

	// Calculate compression ratio
	var compressionRatio float64
	if totalBlobSizeUncompressed > 0 {
		compressionRatio = float64(totalBlobSizeCompressed) / float64(totalBlobSizeUncompressed)
	} else {
		compressionRatio = 1.0
	}

	// Print comprehensive summary
	_, _ = fmt.Fprintf(v.Stdout, "=== Snapshot Complete ===\n")
	_, _ = fmt.Fprintf(v.Stdout, "ID: %s\n", snapshotID)
	_, _ = fmt.Fprintf(v.Stdout, "Files: %s examined, %s to process, %s unchanged",
		formatNumber(totalFiles),
		formatNumber(totalFilesChanged),
		formatNumber(totalFilesSkipped))
	if totalFilesDeleted > 0 {
		_, _ = fmt.Fprintf(v.Stdout, ", %s deleted", formatNumber(totalFilesDeleted))
	}
	_, _ = fmt.Fprintln(v.Stdout)
	_, _ = fmt.Fprintf(v.Stdout, "Data: %s total (%s to process)",
		humanize.Bytes(uint64(totalBytesAll)),
		humanize.Bytes(uint64(totalBytesChanged)))
	if totalBytesDeleted > 0 {
		_, _ = fmt.Fprintf(v.Stdout, ", %s deleted", humanize.Bytes(uint64(totalBytesDeleted)))
	}
	_, _ = fmt.Fprintln(v.Stdout)
	if totalBlobsUploaded > 0 {
		_, _ = fmt.Fprintf(v.Stdout, "Storage: %s compressed from %s (%.2fx)\n",
			humanize.Bytes(uint64(totalBlobSizeCompressed)),
			humanize.Bytes(uint64(totalBlobSizeUncompressed)),
			compressionRatio)
		_, _ = fmt.Fprintf(v.Stdout, "Upload: %d blobs, %s in %s (%s)\n",
			totalBlobsUploaded,
			humanize.Bytes(uint64(totalBytesUploaded)),
			formatDuration(uploadDuration),
			avgUploadSpeed)
	}
	_, _ = fmt.Fprintf(v.Stdout, "Duration: %s\n", formatDuration(snapshotDuration))

	if opts.Prune {
		log.Info("Pruning enabled - will delete old snapshots after snapshot")
		// TODO: Implement pruning
	}

	return nil
}

// ListSnapshots lists all snapshots
func (v *Vaultik) ListSnapshots(jsonOutput bool) error {
	// Get all remote snapshots
	remoteSnapshots := make(map[string]bool)
	objectCh := v.Storage.ListStream(v.ctx, "metadata/")

	for object := range objectCh {
		if object.Err != nil {
			return fmt.Errorf("listing remote snapshots: %w", object.Err)
		}

		// Extract snapshot ID from paths like metadata/hostname-20240115-143052Z/
		parts := strings.Split(object.Key, "/")
		if len(parts) >= 2 && parts[0] == "metadata" && parts[1] != "" {
			remoteSnapshots[parts[1]] = true
		}
	}

	// Get all local snapshots
	localSnapshots, err := v.Repositories.Snapshots.ListRecent(v.ctx, 10000)
	if err != nil {
		return fmt.Errorf("listing local snapshots: %w", err)
	}

	// Build a map of local snapshots for quick lookup
	localSnapshotMap := make(map[string]*database.Snapshot)
	for _, s := range localSnapshots {
		localSnapshotMap[s.ID] = s
	}

	// Remove local snapshots that don't exist remotely
	for _, snapshot := range localSnapshots {
		if !remoteSnapshots[snapshot.ID] {
			log.Info("Removing local snapshot not found in remote", "snapshot_id", snapshot.ID)

			// Delete related records first to avoid foreign key constraints
			if err := v.Repositories.Snapshots.DeleteSnapshotFiles(v.ctx, snapshot.ID); err != nil {
				log.Error("Failed to delete snapshot files", "snapshot_id", snapshot.ID, "error", err)
			}
			if err := v.Repositories.Snapshots.DeleteSnapshotBlobs(v.ctx, snapshot.ID); err != nil {
				log.Error("Failed to delete snapshot blobs", "snapshot_id", snapshot.ID, "error", err)
			}
			if err := v.Repositories.Snapshots.DeleteSnapshotUploads(v.ctx, snapshot.ID); err != nil {
				log.Error("Failed to delete snapshot uploads", "snapshot_id", snapshot.ID, "error", err)
			}

			// Now delete the snapshot itself
			if err := v.Repositories.Snapshots.Delete(v.ctx, snapshot.ID); err != nil {
				log.Error("Failed to delete local snapshot", "snapshot_id", snapshot.ID, "error", err)
			} else {
				log.Info("Deleted local snapshot not found in remote", "snapshot_id", snapshot.ID)
				delete(localSnapshotMap, snapshot.ID)
			}
		}
	}

	// Build final snapshot list
	snapshots := make([]SnapshotInfo, 0, len(remoteSnapshots))

	for snapshotID := range remoteSnapshots {
		// Check if we have this snapshot locally
		if localSnap, exists := localSnapshotMap[snapshotID]; exists && localSnap.CompletedAt != nil {
			// Get total compressed size of all blobs referenced by this snapshot
			totalSize, err := v.Repositories.Snapshots.GetSnapshotTotalCompressedSize(v.ctx, snapshotID)
			if err != nil {
				log.Warn("Failed to get total compressed size", "id", snapshotID, "error", err)
				// Fall back to stored blob size
				totalSize = localSnap.BlobSize
			}

			snapshots = append(snapshots, SnapshotInfo{
				ID:             localSnap.ID,
				Timestamp:      localSnap.StartedAt,
				CompressedSize: totalSize,
			})
		} else {
			// Remote snapshot not in local DB - fetch manifest to get size
			timestamp, err := parseSnapshotTimestamp(snapshotID)
			if err != nil {
				log.Warn("Failed to parse snapshot timestamp", "id", snapshotID, "error", err)
				continue
			}

			// Try to download manifest to get size
			totalSize, err := v.getManifestSize(snapshotID)
			if err != nil {
				return fmt.Errorf("failed to get manifest size for %s: %w", snapshotID, err)
			}

			snapshots = append(snapshots, SnapshotInfo{
				ID:             snapshotID,
				Timestamp:      timestamp,
				CompressedSize: totalSize,
			})
		}
	}

	// Sort by timestamp (newest first)
	sort.Slice(snapshots, func(i, j int) bool {
		return snapshots[i].Timestamp.After(snapshots[j].Timestamp)
	})

	if jsonOutput {
		// JSON output
		encoder := json.NewEncoder(os.Stdout)
		encoder.SetIndent("", "  ")
		return encoder.Encode(snapshots)
	}

	// Table output
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 3, ' ', 0)
	if _, err := fmt.Fprintln(w, "SNAPSHOT ID\tTIMESTAMP\tCOMPRESSED SIZE"); err != nil {
		return err
	}
	if _, err := fmt.Fprintln(w, "───────────\t─────────\t───────────────"); err != nil {
		return err
	}

	for _, snap := range snapshots {
		if _, err := fmt.Fprintf(w, "%s\t%s\t%s\n",
			snap.ID,
			snap.Timestamp.Format("2006-01-02 15:04:05"),
			formatBytes(snap.CompressedSize)); err != nil {
			return err
		}
	}

	return w.Flush()
}

// PurgeSnapshots removes old snapshots based on criteria
func (v *Vaultik) PurgeSnapshots(keepLatest bool, olderThan string, force bool) error {
	// Sync with remote first
	if err := v.syncWithRemote(); err != nil {
		return fmt.Errorf("syncing with remote: %w", err)
	}

	// Get snapshots from local database
	dbSnapshots, err := v.Repositories.Snapshots.ListRecent(v.ctx, 10000)
	if err != nil {
		return fmt.Errorf("listing snapshots: %w", err)
	}

	// Convert to SnapshotInfo format, only including completed snapshots
	snapshots := make([]SnapshotInfo, 0, len(dbSnapshots))
	for _, s := range dbSnapshots {
		if s.CompletedAt != nil {
			snapshots = append(snapshots, SnapshotInfo{
				ID:             s.ID,
				Timestamp:      s.StartedAt,
				CompressedSize: s.BlobSize,
			})
		}
	}

	// Sort by timestamp (newest first)
	sort.Slice(snapshots, func(i, j int) bool {
		return snapshots[i].Timestamp.After(snapshots[j].Timestamp)
	})

	var toDelete []SnapshotInfo

	if keepLatest {
		// Keep only the most recent snapshot
		if len(snapshots) > 1 {
			toDelete = snapshots[1:]
		}
	} else if olderThan != "" {
		// Parse duration
		duration, err := parseDuration(olderThan)
		if err != nil {
			return fmt.Errorf("invalid duration: %w", err)
		}

		cutoff := time.Now().UTC().Add(-duration)
		for _, snap := range snapshots {
			if snap.Timestamp.Before(cutoff) {
				toDelete = append(toDelete, snap)
			}
		}
	}

	if len(toDelete) == 0 {
		fmt.Println("No snapshots to delete")
		return nil
	}

	// Show what will be deleted
	fmt.Printf("The following snapshots will be deleted:\n\n")
	for _, snap := range toDelete {
		fmt.Printf("  %s (%s, %s)\n",
			snap.ID,
			snap.Timestamp.Format("2006-01-02 15:04:05"),
			formatBytes(snap.CompressedSize))
	}

	// Confirm unless --force is used
	if !force {
		fmt.Printf("\nDelete %d snapshot(s)? [y/N] ", len(toDelete))
		var confirm string
		if _, err := fmt.Scanln(&confirm); err != nil {
			// Treat EOF or error as "no"
			fmt.Println("Cancelled")
			return nil
		}
		if strings.ToLower(confirm) != "y" {
			fmt.Println("Cancelled")
			return nil
		}
	} else {
		fmt.Printf("\nDeleting %d snapshot(s) (--force specified)\n", len(toDelete))
	}

	// Delete snapshots
	for _, snap := range toDelete {
		log.Info("Deleting snapshot", "id", snap.ID)
		if err := v.deleteSnapshot(snap.ID); err != nil {
			return fmt.Errorf("deleting snapshot %s: %w", snap.ID, err)
		}
	}

	fmt.Printf("Deleted %d snapshot(s)\n", len(toDelete))

	// Note: Run 'vaultik prune' separately to clean up unreferenced blobs
	fmt.Println("\nNote: Run 'vaultik prune' to clean up unreferenced blobs.")

	return nil
}

// VerifySnapshot checks snapshot integrity
func (v *Vaultik) VerifySnapshot(snapshotID string, deep bool) error {
	// Parse snapshot ID to extract timestamp
	parts := strings.Split(snapshotID, "-")
	var snapshotTime time.Time
	if len(parts) >= 3 {
		// Format: hostname-YYYYMMDD-HHMMSSZ
		dateStr := parts[len(parts)-2]
		timeStr := parts[len(parts)-1]
		if len(dateStr) == 8 && len(timeStr) == 7 && strings.HasSuffix(timeStr, "Z") {
			timeStr = timeStr[:6] // Remove Z
			timestamp, err := time.Parse("20060102150405", dateStr+timeStr)
			if err == nil {
				snapshotTime = timestamp
			}
		}
	}

	fmt.Printf("Verifying snapshot %s\n", snapshotID)
	if !snapshotTime.IsZero() {
		fmt.Printf("Snapshot time: %s\n", snapshotTime.Format("2006-01-02 15:04:05 MST"))
	}
	fmt.Println()

	// Download and parse manifest
	manifest, err := v.downloadManifest(snapshotID)
	if err != nil {
		return fmt.Errorf("downloading manifest: %w", err)
	}

	fmt.Printf("Snapshot information:\n")
	fmt.Printf("  Blob count: %d\n", manifest.BlobCount)
	fmt.Printf("  Total size: %s\n", humanize.Bytes(uint64(manifest.TotalCompressedSize)))
	if manifest.Timestamp != "" {
		if t, err := time.Parse(time.RFC3339, manifest.Timestamp); err == nil {
			fmt.Printf("  Created: %s\n", t.Format("2006-01-02 15:04:05 MST"))
		}
	}
	fmt.Println()

	// Check each blob exists
	fmt.Printf("Checking blob existence...\n")
	missing := 0
	verified := 0
	missingSize := int64(0)

	for _, blob := range manifest.Blobs {
		blobPath := fmt.Sprintf("blobs/%s/%s/%s", blob.Hash[:2], blob.Hash[2:4], blob.Hash)

		if deep {
			// Download and verify hash
			// TODO: Implement deep verification
			fmt.Printf("Deep verification not yet implemented\n")
			return nil
		} else {
			// Just check existence
			_, err := v.Storage.Stat(v.ctx, blobPath)
			if err != nil {
				fmt.Printf("  Missing: %s (%s)\n", blob.Hash, humanize.Bytes(uint64(blob.CompressedSize)))
				missing++
				missingSize += blob.CompressedSize
			} else {
				verified++
			}
		}
	}

	fmt.Printf("\nVerification complete:\n")
	fmt.Printf("  Verified: %d blobs (%s)\n", verified,
		humanize.Bytes(uint64(manifest.TotalCompressedSize-missingSize)))
	if missing > 0 {
		fmt.Printf("  Missing: %d blobs (%s)\n", missing, humanize.Bytes(uint64(missingSize)))
	} else {
		fmt.Printf("  Missing: 0 blobs\n")
	}
	fmt.Printf("  Status: ")
	if missing > 0 {
		fmt.Printf("FAILED - %d blobs are missing\n", missing)
		return fmt.Errorf("%d blobs are missing", missing)
	} else {
		fmt.Printf("OK - All blobs verified\n")
	}

	return nil
}

// Helper methods that were previously on SnapshotApp

func (v *Vaultik) getManifestSize(snapshotID string) (int64, error) {
	manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)

	reader, err := v.Storage.Get(v.ctx, manifestPath)
	if err != nil {
		return 0, fmt.Errorf("downloading manifest: %w", err)
	}
	defer func() { _ = reader.Close() }()

	manifest, err := snapshot.DecodeManifest(reader)
	if err != nil {
		return 0, fmt.Errorf("decoding manifest: %w", err)
	}

	return manifest.TotalCompressedSize, nil
}

func (v *Vaultik) downloadManifest(snapshotID string) (*snapshot.Manifest, error) {
	manifestPath := fmt.Sprintf("metadata/%s/manifest.json.zst", snapshotID)

	reader, err := v.Storage.Get(v.ctx, manifestPath)
	if err != nil {
		return nil, err
	}
	defer func() { _ = reader.Close() }()

	manifest, err := snapshot.DecodeManifest(reader)
	if err != nil {
		return nil, fmt.Errorf("decoding manifest: %w", err)
	}

	return manifest, nil
}

func (v *Vaultik) deleteSnapshot(snapshotID string) error {
	// First, delete from storage.
	// List all objects under metadata/{snapshotID}/
	prefix := fmt.Sprintf("metadata/%s/", snapshotID)
	objectCh := v.Storage.ListStream(v.ctx, prefix)

	var objectsToDelete []string
	for object := range objectCh {
		if object.Err != nil {
			return fmt.Errorf("listing objects: %w", object.Err)
		}
		objectsToDelete = append(objectsToDelete, object.Key)
	}

	// Delete all objects
	for _, key := range objectsToDelete {
		if err := v.Storage.Delete(v.ctx, key); err != nil {
			return fmt.Errorf("removing %s: %w", key, err)
		}
	}

	// Then, delete from local database.
	// Delete related records first to avoid foreign key constraints.
	if err := v.Repositories.Snapshots.DeleteSnapshotFiles(v.ctx, snapshotID); err != nil {
		log.Error("Failed to delete snapshot files", "snapshot_id", snapshotID, "error", err)
	}
	if err := v.Repositories.Snapshots.DeleteSnapshotBlobs(v.ctx, snapshotID); err != nil {
		log.Error("Failed to delete snapshot blobs", "snapshot_id", snapshotID, "error", err)
	}
	if err := v.Repositories.Snapshots.DeleteSnapshotUploads(v.ctx, snapshotID); err != nil {
		log.Error("Failed to delete snapshot uploads", "snapshot_id", snapshotID, "error", err)
	}

	// Now delete the snapshot itself
	if err := v.Repositories.Snapshots.Delete(v.ctx, snapshotID); err != nil {
		return fmt.Errorf("deleting snapshot from database: %w", err)
	}

	return nil
}

func (v *Vaultik) syncWithRemote() error {
	log.Info("Syncing with remote snapshots")

	// Get all remote snapshot IDs
	remoteSnapshots := make(map[string]bool)
	objectCh := v.Storage.ListStream(v.ctx, "metadata/")

	for object := range objectCh {
		if object.Err != nil {
			return fmt.Errorf("listing remote snapshots: %w", object.Err)
		}

		// Extract snapshot ID from paths like metadata/hostname-20240115-143052Z/
		parts := strings.Split(object.Key, "/")
		if len(parts) >= 2 && parts[0] == "metadata" && parts[1] != "" {
			remoteSnapshots[parts[1]] = true
		}
	}

	log.Debug("Found remote snapshots", "count", len(remoteSnapshots))

	// Get all local snapshots (use a high limit to get all)
	localSnapshots, err := v.Repositories.Snapshots.ListRecent(v.ctx, 10000)
	if err != nil {
		return fmt.Errorf("listing local snapshots: %w", err)
	}

	// Remove local snapshots that don't exist remotely
	removedCount := 0
	for _, snapshot := range localSnapshots {
		if !remoteSnapshots[snapshot.ID] {
			log.Info("Removing local snapshot not found in remote", "snapshot_id", snapshot.ID)
			if err := v.Repositories.Snapshots.Delete(v.ctx, snapshot.ID); err != nil {
				log.Error("Failed to delete local snapshot", "snapshot_id", snapshot.ID, "error", err)
			} else {
				removedCount++
			}
		}
	}

	if removedCount > 0 {
		log.Info("Removed local snapshots not found in remote", "count", removedCount)
	}

	return nil
}
```
Some files were not shown because too many files have changed in this diff.