# Vaultik 1.0 TODO

A linear list of tasks to complete before the 1.0 release.
## Rclone Storage Backend (Complete)
Add rclone as a storage backend via Go library import, allowing vaultik to use any of rclone's 70+ supported cloud storage providers.
Configuration:

    storage_url: "rclone://myremote/path/to/backups"

The user must have rclone configured separately (via `rclone config`).
Implementation Steps:
- Add rclone dependency to `go.mod`
- Create `internal/storage/rclone.go` implementing the `Storer` interface (sketched below):
  - `NewRcloneStorer(remote, path)` - init with `configfile.Install()` and `fs.NewFs()`
  - `Put`/`PutWithProgress` - use `operations.Rcat()`
  - `Get` - use `fs.NewObject()` then `obj.Open()`
  - `Stat` - use `fs.NewObject()` for size/metadata
  - `Delete` - use `obj.Remove()`
  - `List`/`ListStream` - use `operations.ListFn()`
  - `Info` - return remote name
- Update `internal/storage/url.go` - parse `rclone://remote/path` URLs
- Update `internal/storage/module.go` - add rclone case to `storerFromURL()`
- Test with a real rclone remote
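A minimal sketch of the storer under a few assumptions: the `Storer` interface matches the methods listed above, the constructor gains a `ctx` parameter (modern `fs.NewFs` takes one), and `operations.Rcat` has the recent six-argument signature (older rclone releases lack the metadata argument). `Stat`, `List`/`ListStream`, and error mapping are elided:

```go
package storage

import (
	"context"
	"fmt"
	"io"
	"time"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configfile"
	"github.com/rclone/rclone/fs/operations"
)

// RcloneStorer adapts an rclone remote to the Storer interface.
type RcloneStorer struct {
	remote string
	f      fs.Fs
}

// NewRcloneStorer opens "remote:path" using the user's existing rclone config.
func NewRcloneStorer(ctx context.Context, remote, path string) (*RcloneStorer, error) {
	configfile.Install() // load the user's rclone config file
	f, err := fs.NewFs(ctx, fmt.Sprintf("%s:%s", remote, path))
	if err != nil {
		return nil, err
	}
	return &RcloneStorer{remote: remote, f: f}, nil
}

// Put streams r into the named remote object without staging it on disk.
func (s *RcloneStorer) Put(ctx context.Context, name string, r io.ReadCloser) error {
	_, err := operations.Rcat(ctx, s.f, name, r, time.Now(), nil)
	return err
}

// Get opens the named remote object for reading.
func (s *RcloneStorer) Get(ctx context.Context, name string) (io.ReadCloser, error) {
	obj, err := s.f.NewObject(ctx, name)
	if err != nil {
		return nil, err
	}
	return obj.Open(ctx)
}

// Delete removes the named remote object.
func (s *RcloneStorer) Delete(ctx context.Context, name string) error {
	obj, err := s.f.NewObject(ctx, name)
	if err != nil {
		return err
	}
	return obj.Remove(ctx)
}

// Info identifies the backend.
func (s *RcloneStorer) Info() string { return "rclone:" + s.remote }
```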
Error Mapping:
- `fs.ErrorObjectNotFound` → `ErrNotFound`
- `fs.ErrorDirNotFound` → `ErrNotFound`
- `fs.ErrorNotFoundInConfigFile` → `ErrRemoteNotFound` (new)
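These are sentinel error values in rclone's `fs` package, so `errors.Is` works directly. A sketch, assuming `ErrNotFound` and the new `ErrRemoteNotFound` are defined in the storage package:

```go
import (
	"errors"

	"github.com/rclone/rclone/fs"
)

// mapRcloneError translates rclone errors into vaultik storage errors so
// callers never see backend-specific error values.
func mapRcloneError(err error) error {
	switch {
	case errors.Is(err, fs.ErrorObjectNotFound),
		errors.Is(err, fs.ErrorDirNotFound):
		return ErrNotFound
	case errors.Is(err, fs.ErrorNotFoundInConfigFile):
		return ErrRemoteNotFound
	default:
		return err
	}
}
```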
## CLI Polish (Priority)
- Improve error messages throughout
- Ensure all errors include actionable context
- Add suggestions for common issues (e.g., "did you set VAULTIK_AGE_SECRET_KEY?")
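As one illustration of the pattern, a hypothetical helper (`loadAgeKey` is not an existing function) whose error names the missing input and suggests the fix:

```go
import (
	"fmt"
	"os"
)

// loadAgeKey reads the age secret key from the environment. On failure the
// error says exactly what was missing and how to correct it.
func loadAgeKey() (string, error) {
	key := os.Getenv("VAULTIK_AGE_SECRET_KEY")
	if key == "" {
		return "", fmt.Errorf("age secret key not found: did you set VAULTIK_AGE_SECRET_KEY?")
	}
	return key, nil
}
```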
## Security (Priority)
- Audit encryption implementation
  - Verify age encryption is used correctly
  - Ensure no plaintext leaks in logs or errors
  - Verify blob hashes are computed correctly
- Secure memory handling for secrets
  - Clear S3 credentials from memory after client init (see the sketch below)
  - Document that `age_secret_key` is env-var only (already implemented)
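Go offers no hard guarantee that secrets leave memory (the GC may copy slices, and strings cannot be zeroed at all), so clearing is best-effort hygiene; holding credentials as `[]byte` rather than `string` at least makes a wipe possible. A sketch, where `newS3Client` stands in for the real client constructor:

```go
// zero overwrites a secret in place. Best effort only: Go's GC may already
// have copied the backing array.
func zero(b []byte) {
	for i := range b {
		b[i] = 0
	}
}

// Usage: wipe the raw credential bytes once the client holds its own copy.
//	client, err := newS3Client(accessKey, secretKey)
//	zero(secretKey)
```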
## Testing
- Write integration tests for the restore command
- Write an end-to-end integration test (sketched after this list)
  - Create a backup
  - Verify the backup
  - Restore the backup
  - Compare restored files to the originals
- Add tests for edge cases
  - Empty directories
  - Symlinks
  - Special characters in filenames
  - Very large files (multi-GB)
  - Many small files (100k+)
- Add tests for error conditions
  - Network failures during upload
  - Disk full during restore
  - Corrupted blobs
  - Missing blobs
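A round-trip skeleton for the end-to-end test. The subcommand names (`backup`, `restore`), the `--target` flag, and the binary being on `PATH` are all assumptions to adjust against the real CLI:

```go
package e2e

import (
	"bytes"
	"os"
	"os/exec"
	"path/filepath"
	"testing"
)

// run invokes the vaultik binary and fails the test on any error.
func run(t *testing.T, args ...string) {
	t.Helper()
	cmd := exec.Command("vaultik", args...)
	cmd.Env = os.Environ() // VAULTIK_AGE_SECRET_KEY must be set by the harness
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("vaultik %v: %v\n%s", args, err, out)
	}
}

func TestBackupRestoreRoundTrip(t *testing.T) {
	src, dst := t.TempDir(), t.TempDir()
	want := []byte("hello vaultik")
	if err := os.WriteFile(filepath.Join(src, "file.txt"), want, 0o644); err != nil {
		t.Fatal(err)
	}

	run(t, "backup", src)              // create backup
	run(t, "restore", "--target", dst) // restore into a fresh directory

	got, err := os.ReadFile(filepath.Join(dst, "file.txt"))
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(got, want) {
		t.Fatalf("restored content mismatch: got %q, want %q", got, want)
	}
}
```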
## Performance
- Profile and optimize restore performance
  - Parallel blob downloads (sketched after this list)
  - Streaming decompression/decryption
  - Efficient chunk reassembly
- Add a bandwidth limiting option
  - `--bwlimit` flag for upload/download speed limiting
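For parallel blob downloads, a bounded-concurrency sketch using `golang.org/x/sync/errgroup`; `fetchBlob` is a hypothetical stand-in for the download + decrypt + decompress pipeline:

```go
import (
	"context"

	"golang.org/x/sync/errgroup"
)

// downloadBlobs fetches blobs with at most 8 in flight; the first error
// cancels the remaining work via the shared context.
func downloadBlobs(ctx context.Context, hashes []string,
	fetchBlob func(context.Context, string) error) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(8)
	for _, h := range hashes {
		h := h // capture loop variable (needed before Go 1.22)
		g.Go(func() error { return fetchBlob(ctx, h) })
	}
	return g.Wait()
}
```

For `--bwlimit`, one option is wrapping the upload/download streams in a token-bucket-limited reader built on `golang.org/x/time/rate`.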
## Documentation
- Add a man page or `--help` improvements
  - Detailed help for each command
  - Examples in help output
## Final Polish
- Ensure the version is set correctly in releases
- Create a release process
  - Binary releases for supported platforms
  - Checksums for binaries
  - Release notes template
- Final code review
  - Remove debug statements
  - Ensure consistent code style
- Tag and release v1.0.0
## Post-1.0 (Daemon Mode)
- Implement an inotify file watcher for Linux
  - Watch source directories for changes
  - Track dirty paths in memory
- Implement an FSEvents watcher for macOS
  - Watch source directories for changes
  - Track dirty paths in memory
- Implement a backup scheduler in daemon mode
  - Respect the `backup_interval` config
  - Trigger a backup when dirty paths exist and the interval has elapsed
  - Implement `full_scan_interval` for periodic full scans
- Add proper signal handling for the daemon (sketched after this list)
  - Graceful shutdown on SIGTERM/SIGINT
  - Complete the in-progress backup before exit
- Write tests for daemon mode
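A graceful-shutdown sketch for the daemon loop using `signal.NotifyContext`; `runBackup` and the fixed one-minute interval are placeholders for the real scheduler:

```go
package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

// runBackup is a placeholder for the real backup pass.
func runBackup(ctx context.Context) error { return nil }

func main() {
	// ctx is cancelled on SIGTERM/SIGINT; stop restores default handling.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	ticker := time.NewTicker(time.Minute) // placeholder for backup_interval
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			log.Println("signal received: exiting after current backup")
			return
		case <-ticker.C:
			// Run with context.Background() so an in-flight backup is not
			// cancelled mid-upload; only the scheduling loop observes ctx.
			if err := runBackup(context.Background()); err != nil {
				log.Printf("backup failed: %v", err)
			}
		}
	}
}
```

Because the select statement only checks `ctx.Done()` between iterations, a backup that is already running finishes before the loop exits, which matches the "complete in-progress backup before exit" requirement.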