26 Commits

Author SHA1 Message Date
180f32902b fix: patch flaws and refactor routes
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-22 05:55:24 +01:00
89b4d3f4e6 chore: use scratch
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-22 04:37:33 +01:00
577c4b67f6 feat: implement sequential chunk reading and decryption
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-22 04:37:20 +01:00
5c13d24736 chore: update deps
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-22 04:10:46 +01:00
297db0effa chore: update readme
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-22 04:10:35 +01:00
f0336b21b8 feat: show version on website
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-19 01:31:28 +01:00
2bcf339408 refactor: db location
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-19 00:44:03 +01:00
2df37e9002 fix: relax chunk limits, support proxies, optimize reads
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-19 00:33:09 +01:00
722dbaa6aa feat: implement encrypted chunked storage and convergent encryption
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 23:39:53 +01:00
2d6a3ab216 fix(web): use web crypto for upload id's
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 22:30:20 +01:00
d18ef48bd4 perf(storage)!: optimize cleanup with secondary index
BREAKING CHANGE: This change requires a fresh database. Existing
databases will lack the index, and the cleanup routine will not function
correctly for pre-existing files.

Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 22:10:07 +01:00
e18be18029 fix(download): enforce integrity check using db metadata
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 21:54:08 +01:00
a69e5a52a3 perf: implement zero-copy merge for chunked uploads
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 21:45:41 +01:00
8b638275b8 fix: unhandled errors
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 21:19:42 +01:00
73ee7a9a14 refactor: embed web files
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 20:53:56 +01:00
954aec6d8e feat: replace fs scans with bbolt for fast, persistent metadata management
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 20:27:33 +01:00
5a3846266e feat: unit tests
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 19:53:29 +01:00
a115c49195 fix: add blank favicon
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 19:47:21 +01:00
00e5c95fe3 refactor: split handlers.go and centralize config
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-18 19:25:35 +01:00
aca7267301 refactor: internals
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-17 22:58:38 +01:00
5bc9497fa0 fix: enforce max file size
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-16 04:16:17 +01:00
956dff48eb fix: web responsivity
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-16 04:03:38 +01:00
d7ba7f63c6 fix: remove goreleaser changelog requirement
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-16 03:24:18 +01:00
fc129b7e9f fix: install media-types in docker
2026-01-16 03:18:43 +01:00
2d1b2aac48 chore: remove old trash
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-16 03:02:47 +01:00
39ea3ba48d docs: update readme
Signed-off-by: skidoodle <contact@albert.lol>
2026-01-16 02:50:15 +01:00
34 changed files with 2376 additions and 611 deletions
-1
View File
@@ -1,3 +1,2 @@
storage/*
# Added by goreleaser init:
dist/
+5 -7
View File
@@ -4,6 +4,9 @@ before:
hooks:
- go mod tidy
snapshot:
version_template: "{{ .Version }}"
builds:
- env:
- CGO_ENABLED=0
@@ -13,7 +16,8 @@ builds:
- amd64
- arm64
ldflags:
- -s -w -X main.version={{.Version}} -X main.commit={{.Commit}} -X main.date={{.Date}}
- -s -w
- -X github.com/skidoodle/safebin/internal/app.Version={{.Version}}
flags:
- -trimpath
@@ -26,9 +30,7 @@ archives:
{{- else }}{{ .Arch }}{{ end }}
formats: ["tar.gz"]
files:
- web/**/*
- README.md
- CHANGELOG.md
dockers:
- image_templates:
@@ -38,8 +40,6 @@ dockers:
goos: linux
goarch: amd64
dockerfile: Dockerfile.release
extra_files:
- web
build_flag_templates:
- "--platform=linux/amd64"
- "--label=org.opencontainers.image.title={{ .ProjectName }}"
@@ -52,8 +52,6 @@ dockers:
goos: linux
goarch: arm64
dockerfile: Dockerfile.release
extra_files:
- web
build_flag_templates:
- "--platform=linux/arm64"
- "--label=org.opencontainers.image.title={{ .ProjectName }}"
-44
View File
@@ -1,44 +0,0 @@
# Changelog
## [3.0.0](https://github.com/skidoodle/safebin/compare/v2.0.0...v3.0.0) (2026-01-16)
### ⚠ BREAKING CHANGES
* Docker volume paths and environment variables have been updated. The internal storage path in the container has changed from `/home/appuser/storage` to `/app/storage`. Existing deployments must update their volume mappings and environment variable names to maintain persistence.
### Code Refactoring
* relocate core logic to internal package and modernize project structure ([43be383](https://github.com/skidoodle/safebin/commit/43be383fdbfb0263036284b8beb0ce3c646db87c))
## [2.0.0](https://github.com/skidoodle/safebin/compare/v1.1.0...v2.0.0) (2026-01-16)
### ⚠ BREAKING CHANGES
* The encryption scheme and URL structure have been completely redesigned. Links generated with previous versions of safebin are no longer compatible and cannot be decrypted by this version.
### Features
* overhaul encryption to zero-knowledge at rest and modernize UI ([599347e](https://github.com/skidoodle/safebin/commit/599347e867444288fa58f8e358269121c5d32e36))
## [1.1.0](https://github.com/skidoodle/safebin/compare/v1.0.1...v1.1.0) (2026-01-14)
### Features
* implement chunked uploads and environment-based configuration ([1ccc80a](https://github.com/skidoodle/safebin/commit/1ccc80ad4e5b949a8f1d1f3a8b3b4e8c4d2e1353))
## [1.0.1](https://github.com/skidoodle/safebin/compare/v1.0.0...v1.0.1) (2026-01-14)
### Bug Fixes
* better dockerfile ([c1ecbe5](https://github.com/skidoodle/safebin/commit/c1ecbe567a24eb4e755f19fee68422025f3b15b2))
## 1.0.0 (2026-01-13)
### Features
* add automated release and docker workflow ([e40e6d0](https://github.com/skidoodle/safebin/commit/e40e6d01afd0067bba5d0cf4a9b1ff3d7122259f))
+24 -20
View File
@@ -1,38 +1,42 @@
FROM --platform=$BUILDPLATFORM golang:1.25.5 AS builder
FROM --platform=$BUILDPLATFORM golang:1.25.6-alpine AS builder
WORKDIR /src
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
ARG VERSION=dev
RUN --mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build \
-ldflags="-s -w" \
-ldflags="-s -w -X github.com/skidoodle/safebin/internal/app.Version=$VERSION" \
-trimpath \
-o /app/safebin .
-o /bin/safebin .
FROM debian:trixie-slim
FROM alpine:latest AS sys-context
RUN apk add --no-cache ca-certificates mailcap
RUN echo "appuser:x:10001:10001:appuser:/:/sbin/nologin" > /etc/passwd_app \
&& echo "appuser:x:10001:appuser" > /etc/group_app
RUN mkdir -p /app/storage
LABEL org.opencontainers.image.source="https://github.com/skidoodle/safebin"
LABEL org.opencontainers.image.description="Minimalist, self-hosted file storage with Zero-Knowledge at Rest encryption."
LABEL org.opencontainers.image.licenses="GPL-2.0-only"
FROM scratch
COPY --from=sys-context /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=sys-context /etc/mime.types /etc/mime.types
COPY --from=sys-context /etc/passwd_app /etc/passwd
COPY --from=sys-context /etc/group_app /etc/group
COPY --from=builder /bin/safebin /app/safebin
COPY --from=sys-context --chown=10001:10001 /app/storage /app/storage
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -m -u 10001 -s /bin/bash appuser
WORKDIR /app
COPY --from=builder /app/safebin .
COPY --from=builder /app/web ./web
RUN mkdir -p /app/storage && chown 10001:10001 /app/storage
VOLUME ["/app/storage"]
USER 10001
VOLUME ["/app/storage"]
EXPOSE 8080
ENV SAFEBIN_HOST=0.0.0.0 \
SAFEBIN_PORT=8080 \
SAFEBIN_STORAGE=/app/storage
ENTRYPOINT ["/app/safebin"]
+18 -12
View File
@@ -1,19 +1,25 @@
FROM debian:trixie-slim
FROM alpine:latest AS sys-context
RUN apk add --no-cache ca-certificates mailcap
RUN echo "appuser:x:10001:10001:appuser:/:/sbin/nologin" > /etc/passwd_app \
&& echo "appuser:x:10001:appuser" > /etc/group_app
RUN mkdir -p /app/storage
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
FROM scratch
COPY --from=sys-context /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=sys-context /etc/mime.types /etc/mime.types
COPY --from=sys-context /etc/passwd_app /etc/passwd
COPY --from=sys-context /etc/group_app /etc/group
COPY safebin /app/safebin
COPY --from=sys-context --chown=10001:10001 /app/storage /app/storage
RUN useradd -m -u 10001 -s /bin/bash appuser
WORKDIR /app
COPY safebin .
COPY web ./web
RUN mkdir -p /app/storage && chown 10001:10001 /app/storage
VOLUME ["/app/storage"]
USER 10001
VOLUME ["/app/storage"]
EXPOSE 8080
ENV SAFEBIN_HOST=0.0.0.0 \
SAFEBIN_PORT=8080 \
SAFEBIN_STORAGE=/app/storage
ENTRYPOINT ["/app/safebin"]
+75 -70
View File
@@ -1,97 +1,102 @@
# safebin
`safebin` is a minimalist, self-hosted file storage service with **Zero-Knowledge at Rest** encryption.
[![Go Version](https://img.shields.io/badge/Go-1.25+-00ADD8?style=flat-square&logo=go)](https://go.dev/)
[![License](https://img.shields.io/badge/License-GPLv2-blue.svg?style=flat-square)](LICENSE)
[![Docker Image](https://img.shields.io/badge/Docker-ghcr.io%2Fskidoodle%2Fsafebin-blue?style=flat-square&logo=docker)](https://github.com/skidoodle/safebin/pkgs/container/safebin)
## Features
**safebin** is a minimalist, self-hosted file storage service designed for efficiency and privacy. It utilizes **Convergent Encryption** to provide secure storage at rest while automatically deduplicating identical files to save disk space.
- **Server-Side Encryption**: Files are encrypted using AES-256-GCM before touching the disk.
- **Log-Safe Keys**: The decryption key is stored in the URL fragment (`#`). Since fragments are never sent to the server, the key never appears in your HTTP access logs.
- **Integrity**: Uses GCM (Galois/Counter Mode) to ensure files cannot be tampered with while stored.
- **Deterministic**: Identical files result in the same ID, allowing for storage deduplication.
## 📖 Architecture & Security Model
## Usage
Safebin is designed to be **Host-Proof at Rest**. While it is not a client-side E2EE solution, it ensures that the server cannot access stored data without the specific link generated at upload time.
You can interact with the service via the web interface or through the command line.
### How it Works
1. **Upload**: The server receives the file stream and calculates a SHA-256 hash of the content.
2. **Key Generation**: This hash becomes the encryption key (Convergent Encryption).
3. **Encryption**: The file is encrypted using **AES-128-GCM** and written to disk.
4. **Deduplication**: Because the key is derived from the content, identical files generate the same ID. The server detects this and stores only one physical copy, regardless of how many times it is uploaded.
5. **Zero-Knowledge Storage**: The server saves the file metadata (ID, size, expiry) but **discards the encryption key**.
6. **Link Generation**: The key is encoded into the URL fragment returned to the user.
### Uploading a file
> **Security Note**: If the server's database or physical storage is seized, the files are mathematically inaccessible. However, because encryption occurs on the server, the process does have access to the plaintext in memory during the brief window of upload and download.
```bash
curl -F 'file=@archive.zip' https://bin.example.com
```
## ✨ Features
The server will return a URL containing the file ID and the decryption key:
`https://bin.example.com/vS6_1_8pS-Y_8-8_...`
- **Convergent Encryption & Deduplication**: Files are addressed by their content. Uploading the same file twice results in a single storage entry, significantly reducing disk usage.
- **Tamper-Proof Storage**: Uses Galois/Counter Mode (GCM) to ensure data integrity. Modified files will fail decryption.
- **Volatile Keys**: Decryption keys reside only in the generated URLs, not in the database.
- **Smart Retention**: A cubic scaling algorithm prioritizes keeping small files (snippets, logs) for a long time, while large binaries expire quickly.
- **Chunked Uploads**: Robust handling of large files via the web interface using 8MB chunks.
### Downloading a file
## 🚀 Deployment
Simply open the link in a browser or use `curl`:
```bash
curl https://bin.example.com/vS6_1_8pS-Y_8-8_... > archive.zip
```
## Configuration
`safebin` is configured via command-line flags:
| Flag | Description | Default |
| :--- | :--- | :--- |
| `-h` | Bind address for the server. | `0.0.0.0` |
| `-p` | Port to listen on. | `8080` |
| `-s` | Directory where encrypted files are stored. | `./storage` |
| `-m` | Maximum file size in mb. | `512` |
## Running Locally
### With Docker
```bash
git clone https://github.com/skidoodle/safebin
cd safebin
docker compose -f compose.dev.yaml up --build
```
### Without Docker
Requires Go 1.25 or higher.
```bash
git clone https://github.com/skidoodle/safebin
cd safebin
go build -o safebin .
./safebin -p 8080 -s ./data
```
## Deploying
### Docker Compose
The easiest way to deploy is using the provided `compose.yaml`.
### Docker Compose (Recommended)
```yaml
services:
safebin:
image: ghcr.io/skidoodle/safebin:main
image: ghcr.io/skidoodle/safebin:latest
container_name: safebin
restart: unless-stopped
ports:
- 8080:8080
- "8080:8080"
environment:
- SAFEBIN_HOST=0.0.0.0
- SAFEBIN_PORT=8080
- SAFEBIN_STORAGE=/app/storage
- SAFEBIN_MAX_MB=512
volumes:
- data:/app/storage
- safebin_data:/app/storage
volumes:
data:
safebin_data:
```
## Retention Policy
### Manual Installation
The server runs a cleanup task every hour. Retention is calculated using a cubic scaling formula to balance disk usage:
- **Small files (< 1MB)**: Up to 365 days.
- **Large files (512MB)**: 24 hours.
Requires Go 1.25 or higher.
This ensures that the server doesn't run out of disk space due to large binary blobs while allowing small text files or images to persist for longer periods.
```bash
# Build the binary
go build -o safebin .
# Run the server
./safebin -p 8080 -s ./data -m 1024
```
## ⚙️ Configuration
Configuration is handled via environment variables or command-line flags. Flags take precedence over environment variables.
| Flag | Environment Variable | Description | Default |
| :--- | :--- | :--- | :--- |
| `-h` | `SAFEBIN_HOST` | Interface/Bind address. | `0.0.0.0` |
| `-p` | `SAFEBIN_PORT` | Port to listen on. | `8080` |
| `-s` | `SAFEBIN_STORAGE` | Directory for database and files. | `./storage` |
| `-m` | `SAFEBIN_MAX_MB` | Maximum allowed file size in MB. | `512` |
## 💻 Usage
### Web Interface
Navigate to `http://localhost:8080`. Drag and drop files to upload. The browser handles chunking automatically.
### CLI (curl)
Safebin is optimized for terminal usage. You can upload files directly via `curl`:
```bash
# Upload a file
curl -F 'file=@screenshot.png' https://bin.example.com
# Response
https://bin.example.com/0iEZGtW-ikVdu...png
```
## ⏳ Retention Policy
To keep storage manageable, Safebin runs a cleanup task every hour. File lifetime is determined by size using a cubic curve:
* **Small Files (< 1MB)**: Retained for **365 days**.
* **Medium Files (~50% Max Size)**: Retained for ~30 days.
* **Large Files (Max Size)**: Retained for **24 hours**.
* **Incomplete Uploads**: Purged after **4 hours**.
## 📄 License
This project is licensed under the [GNU General Public License v2.0](LICENSE).
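The README's "How it Works" steps above can be sketched in Go. Note that `crypto.DeriveKey` and `crypto.GetID` are not shown in this diff, so the hashing details below (truncating a SHA-256 digest to the 16-byte `KeyLength`, and hashing key plus extension for the storage ID) are assumptions for illustration only, not the project's actual implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// deriveKey is a hypothetical stand-in for crypto.DeriveKey: the convergent
// key is assumed to be the first 16 bytes of the SHA-256 hash of the
// plaintext, so identical content always yields the identical key.
func deriveKey(content []byte) []byte {
	sum := sha256.Sum256(content)
	return sum[:16]
}

// getID is a hypothetical stand-in for crypto.GetID: the storage ID is
// assumed to be derived from the key and the file extension, letting the
// server locate the ciphertext without ever persisting the key itself.
func getID(key []byte, ext string) string {
	sum := sha256.Sum256(append(append([]byte{}, key...), ext...))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	k1 := deriveKey([]byte("hello world"))
	k2 := deriveKey([]byte("hello world"))
	k3 := deriveKey([]byte("different content"))
	// Identical content converges to one ID (deduplication);
	// different content lands on a different ID.
	fmt.Println(getID(k1, ".txt") == getID(k2, ".txt"))
	fmt.Println(getID(k1, ".txt") == getID(k3, ".txt"))
}
```

This also illustrates the trade-off stated in the security note: anyone holding the plaintext can recompute the key, which is inherent to convergent encryption.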
+1 -1
View File
@@ -1,6 +1,6 @@
services:
safebin:
image: ghcr.io/skidoodle/safebin:main
image: ghcr.io/skidoodle/safebin:latest
container_name: safebin
restart: unless-stopped
ports:
+5 -1
View File
@@ -1,3 +1,7 @@
module github.com/skidoodle/safebin
go 1.25.5
go 1.25.6
require go.etcd.io/bbolt v1.4.3
require golang.org/x/sys v0.40.0 // indirect
+14
View File
@@ -0,0 +1,14 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
go.etcd.io/bbolt v1.4.3 h1:dEadXpI6G79deX5prL3QRNP6JB8UxVkqo4UPnHaNXJo=
go.etcd.io/bbolt v1.4.3/go.mod h1:tKQlpPaYCVFctUIgFKFnAlvbmB3tpy1vkTnDWohtc0E=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+71 -22
View File
@@ -4,9 +4,47 @@ import (
"flag"
"fmt"
"html/template"
"io/fs"
"log/slog"
"os"
"strconv"
"time"
"go.etcd.io/bbolt"
)
var (
Version = "dev"
)
const (
DefaultHost = "0.0.0.0"
DefaultPort = 8080
DefaultStorage = "./storage"
DefaultMaxMB = 512
ServerTimeout = 10 * time.Minute
ShutdownTimeout = 10 * time.Second
UploadChunkSize = 8 << 20
MinChunkSize = 1 << 20
MaxRequestOverhead = 10 << 20
PermUserRWX = 0o700
MegaByte = 1 << 20
ChunkSafetyMargin = 2
SlugLength = 22
KeyLength = 16
CleanupInterval = 1 * time.Hour
TempExpiry = 4 * time.Hour
MinRetention = 24 * time.Hour
MaxRetention = 365 * 24 * time.Hour
DBDirName = "db"
DBFileName = "safebin.db"
DBBucketName = "files"
DBBucketIndexName = "expiry_index"
TempDirName = "tmp"
)
type Config struct {
@@ -19,40 +57,51 @@ type App struct {
Conf Config
Tmpl *template.Template
Logger *slog.Logger
DB *bbolt.DB
Assets fs.FS
}
func LoadConfig() Config {
h := getEnv("SAFEBIN_HOST", "0.0.0.0")
p := getEnvInt("SAFEBIN_PORT", 8080)
s := getEnv("SAFEBIN_STORAGE", "./storage")
mDefault := int64(getEnvInt("SAFEBIN_MAX_MB", 512))
hostEnv := getEnv("SAFEBIN_HOST", DefaultHost)
portEnv := getEnvInt("SAFEBIN_PORT", DefaultPort)
storageEnv := getEnv("SAFEBIN_STORAGE", DefaultStorage)
maxMBEnv := int64(getEnvInt("SAFEBIN_MAX_MB", DefaultMaxMB))
var m int64
flag.StringVar(&h, "h", h, "Bind address")
flag.IntVar(&p, "p", p, "Port")
flag.StringVar(&s, "s", s, "Storage directory")
flag.Int64Var(&m, "m", mDefault, "Max file size in MB")
var host string
var port int
var storage string
var maxMB int64
flag.StringVar(&host, "h", hostEnv, "Bind address")
flag.IntVar(&port, "p", portEnv, "Port")
flag.StringVar(&storage, "s", storageEnv, "Storage directory")
flag.Int64Var(&maxMB, "m", maxMBEnv, "Max file size in MB")
flag.Parse()
return Config{Addr: fmt.Sprintf("%s:%d", h, p), StorageDir: s, MaxMB: m}
}
func getEnv(k, f string) string {
if v, ok := os.LookupEnv(k); ok {
return v
return Config{
Addr: fmt.Sprintf("%s:%d", host, port),
StorageDir: storage,
MaxMB: maxMB,
}
return f
}
func getEnvInt(k string, f int) int {
if v, ok := os.LookupEnv(k); ok {
if i, err := strconv.Atoi(v); err == nil {
func getEnv(key, fallback string) string {
if value, ok := os.LookupEnv(key); ok {
return value
}
return fallback
}
func getEnvInt(key string, fallback int) int {
if value, ok := os.LookupEnv(key); ok {
i, err := strconv.Atoi(value)
if err == nil {
return i
}
}
return f
return fallback
}
func ParseTemplates() *template.Template {
return template.Must(template.ParseGlob("./web/templates/*.html"))
func ParseTemplates(fsys fs.FS) *template.Template {
return template.Must(template.ParseFS(fsys, "*.html"))
}
+37
View File
@@ -0,0 +1,37 @@
package app
import (
"testing"
)
func TestGetEnv(t *testing.T) {
key := "SAFEBIN_TEST_KEY"
val := "somevalue"
if got := getEnv(key, "default"); got != "default" {
t.Errorf("Expected default, got %s", got)
}
t.Setenv(key, val)
if got := getEnv(key, "default"); got != val {
t.Errorf("Expected %s, got %s", val, got)
}
}
func TestGetEnvInt(t *testing.T) {
key := "SAFEBIN_TEST_INT"
if got := getEnvInt(key, 8080); got != 8080 {
t.Errorf("Expected default 8080, got %d", got)
}
t.Setenv(key, "9090")
if got := getEnvInt(key, 8080); got != 9090 {
t.Errorf("Expected 9090, got %d", got)
}
t.Setenv(key, "notanumber")
if got := getEnvInt(key, 8080); got != 8080 {
t.Errorf("Expected fallback on invalid input, got %d", got)
}
}
+46
View File
@@ -0,0 +1,46 @@
package app
import (
"os"
"path/filepath"
"time"
"go.etcd.io/bbolt"
)
type FileMeta struct {
ID string `json:"id"`
Size int64 `json:"size"`
CreatedAt time.Time `json:"created_at"`
ExpiresAt time.Time `json:"expires_at"`
}
func InitDB(storageDir string) (*bbolt.DB, error) {
dbDir := filepath.Join(storageDir, DBDirName)
if err := os.MkdirAll(dbDir, PermUserRWX); err != nil {
return nil, err
}
path := filepath.Join(dbDir, DBFileName)
db, err := bbolt.Open(path, 0600, &bbolt.Options{Timeout: 1 * time.Second})
if err != nil {
return nil, err
}
err = db.Update(func(tx *bbolt.Tx) error {
if _, err := tx.CreateBucketIfNotExists([]byte(DBBucketName)); err != nil {
return err
}
if _, err := tx.CreateBucketIfNotExists([]byte(DBBucketIndexName)); err != nil {
return err
}
return nil
})
if err != nil {
_ = db.Close()
return nil, err
}
return db, nil
}
+104
View File
@@ -0,0 +1,104 @@
package app
import (
"encoding/json"
"os"
"path/filepath"
"testing"
"time"
"go.etcd.io/bbolt"
)
func TestInitDB(t *testing.T) {
tmpDir := t.TempDir()
db, err := InitDB(tmpDir)
if err != nil {
t.Fatalf("InitDB failed: %v", err)
}
defer func() {
if err := db.Close(); err != nil {
t.Errorf("Failed to close DB: %v", err)
}
}()
dbPath := filepath.Join(tmpDir, DBDirName, DBFileName)
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
t.Error("Database file was not created")
}
err = db.View(func(tx *bbolt.Tx) error {
if b := tx.Bucket([]byte(DBBucketName)); b == nil {
t.Errorf("Bucket '%s' was not created", DBBucketName)
}
if b := tx.Bucket([]byte(DBBucketIndexName)); b == nil {
t.Errorf("Bucket '%s' was not created", DBBucketIndexName)
}
return nil
})
if err != nil {
t.Errorf("View failed: %v", err)
}
}
func TestDB_MetadataLifecycle(t *testing.T) {
tmpDir := t.TempDir()
db, err := InitDB(tmpDir)
if err != nil {
t.Fatal(err)
}
defer func() {
if err := db.Close(); err != nil {
t.Errorf("Failed to close DB: %v", err)
}
}()
app := &App{
Conf: Config{StorageDir: tmpDir, MaxMB: 100},
DB: db,
}
fileID := "test-file-id"
fileSize := int64(1024)
if err := app.RegisterFile(fileID, fileSize); err != nil {
t.Fatalf("RegisterFile failed: %v", err)
}
err = db.View(func(tx *bbolt.Tx) error {
b := tx.Bucket([]byte(DBBucketName))
data := b.Get([]byte(fileID))
if data == nil {
t.Fatal("Metadata not found in DB")
}
var meta FileMeta
if err := json.Unmarshal(data, &meta); err != nil {
t.Fatalf("Failed to unmarshal meta: %v", err)
}
if meta.ID != fileID {
t.Errorf("Want ID %s, got %s", fileID, meta.ID)
}
if meta.Size != fileSize {
t.Errorf("Want Size %d, got %d", fileSize, meta.Size)
}
if meta.ExpiresAt.Before(time.Now()) {
t.Error("Expiration time is in the past")
}
bIndex := tx.Bucket([]byte(DBBucketIndexName))
indexKey := []byte(meta.ExpiresAt.Format(time.RFC3339) + "_" + fileID)
if val := bIndex.Get(indexKey); val == nil {
t.Error("Index entry not found")
} else if string(val) != fileID {
t.Errorf("Index value mismatch: want %s, got %s", fileID, string(val))
}
return nil
})
if err != nil {
t.Error(err)
}
}
+107
View File
@@ -0,0 +1,107 @@
package app
import (
"encoding/base64"
"encoding/json"
"fmt"
"mime"
"net/http"
"os"
"path/filepath"
"github.com/skidoodle/safebin/internal/crypto"
"go.etcd.io/bbolt"
)
func (app *App) HandleGetFile(writer http.ResponseWriter, request *http.Request) {
slug := request.PathValue("slug")
if len(slug) < SlugLength {
app.SendError(writer, request, http.StatusBadRequest)
return
}
keyBase64 := slug[:SlugLength]
ext := slug[SlugLength:]
key, err := base64.RawURLEncoding.DecodeString(keyBase64)
if err != nil || len(key) != KeyLength {
app.SendError(writer, request, http.StatusUnauthorized)
return
}
id := crypto.GetID(key, ext)
var meta FileMeta
err = app.DB.View(func(tx *bbolt.Tx) error {
b := tx.Bucket([]byte(DBBucketName))
if b == nil {
return fmt.Errorf("bucket not found")
}
data := b.Get([]byte(id))
if data == nil {
return fmt.Errorf("file not found")
}
return json.Unmarshal(data, &meta)
})
if err != nil {
app.SendError(writer, request, http.StatusNotFound)
return
}
path := filepath.Join(app.Conf.StorageDir, id)
info, err := os.Stat(path)
if err != nil {
app.SendError(writer, request, http.StatusNotFound)
return
}
if info.Size() != meta.Size {
app.Logger.Error("Integrity check failed: disk size mismatch",
"id", id,
"disk_bytes", info.Size(),
"expected_bytes", meta.Size,
)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
file, err := os.Open(path)
if err != nil {
app.Logger.Error("Failed to open file", "path", path, "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
defer func() {
if closeErr := file.Close(); closeErr != nil {
app.Logger.Error("Failed to close file", "err", closeErr)
}
}()
streamer, err := crypto.NewGCMStreamer(key)
if err != nil {
app.Logger.Error("Failed to create crypto streamer", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
decryptor := crypto.NewDecryptor(file, streamer.AEAD, info.Size())
contentType := mime.TypeByExtension(ext)
if contentType == "" {
contentType = "application/octet-stream"
}
csp := "default-src 'none'; img-src 'self' data:; media-src 'self' data:; " +
"style-src 'unsafe-inline'; sandbox allow-forms allow-scripts allow-downloads allow-same-origin"
writer.Header().Set("Content-Type", contentType)
writer.Header().Set("Content-Security-Policy", csp)
writer.Header().Set("X-Content-Type-Options", "nosniff")
writer.Header().Set("Content-Disposition", fmt.Sprintf("inline; filename=%q", slug))
http.ServeContent(writer, request, slug, info.ModTime(), decryptor)
}
-174
View File
@@ -1,174 +0,0 @@
package app
import (
"encoding/base64"
"fmt"
"io"
"mime"
"net/http"
"os"
"path/filepath"
"regexp"
"strconv"
"github.com/skidoodle/safebin/internal/crypto"
)
var reUploadID = regexp.MustCompile(`^[a-zA-Z0-9]{10,50}$`)
func (app *App) HandleHome(w http.ResponseWriter, r *http.Request) {
err := app.Tmpl.ExecuteTemplate(w, "base", map[string]any{
"MaxMB": app.Conf.MaxMB,
"Host": r.Host,
})
if err != nil {
app.Logger.Error("Template error", "err", err)
}
}
func (app *App) HandleUpload(w http.ResponseWriter, r *http.Request) {
limit := (app.Conf.MaxMB << 20) + (1 << 20)
r.Body = http.MaxBytesReader(w, r.Body, limit)
file, header, err := r.FormFile("file")
if err != nil {
app.SendError(w, r, http.StatusBadRequest)
return
}
defer file.Close()
tmpPath := filepath.Join(app.Conf.StorageDir, "tmp", fmt.Sprintf("up_%d", os.Getpid()))
tmp, _ := os.Create(tmpPath)
defer os.Remove(tmpPath)
defer tmp.Close()
if _, err := io.Copy(tmp, file); err != nil {
app.SendError(w, r, http.StatusRequestEntityTooLarge)
return
}
app.FinalizeFile(w, r, tmp, header.Filename)
}
func (app *App) HandleChunk(w http.ResponseWriter, r *http.Request) {
uid := r.FormValue("upload_id")
idx, _ := strconv.Atoi(r.FormValue("index"))
if !reUploadID.MatchString(uid) || idx > 1000 {
app.SendError(w, r, http.StatusBadRequest)
return
}
file, _, err := r.FormFile("chunk")
if err != nil {
return
}
defer file.Close()
dir := filepath.Join(app.Conf.StorageDir, "tmp", uid)
os.MkdirAll(dir, 0700)
dest, _ := os.Create(filepath.Join(dir, strconv.Itoa(idx)))
defer dest.Close()
io.Copy(dest, file)
}
func (app *App) HandleFinish(w http.ResponseWriter, r *http.Request) {
uid := r.FormValue("upload_id")
total, _ := strconv.Atoi(r.FormValue("total"))
if !reUploadID.MatchString(uid) || total > 1000 {
app.SendError(w, r, http.StatusBadRequest)
return
}
tmpPath := filepath.Join(app.Conf.StorageDir, "tmp", "m_"+uid)
merged, _ := os.Create(tmpPath)
defer os.Remove(tmpPath)
defer merged.Close()
for i := range total {
partPath := filepath.Join(app.Conf.StorageDir, "tmp", uid, strconv.Itoa(i))
part, err := os.Open(partPath)
if err != nil {
continue
}
io.Copy(merged, part)
part.Close()
}
app.FinalizeFile(w, r, merged, r.FormValue("filename"))
os.RemoveAll(filepath.Join(app.Conf.StorageDir, "tmp", uid))
}
func (app *App) HandleGetFile(w http.ResponseWriter, r *http.Request) {
slug := r.PathValue("slug")
if len(slug) < 22 {
app.SendError(w, r, http.StatusBadRequest)
return
}
keyBase64 := slug[:22]
ext := slug[22:]
key, err := base64.RawURLEncoding.DecodeString(keyBase64)
if err != nil || len(key) != 16 {
app.SendError(w, r, http.StatusUnauthorized)
return
}
id := crypto.GetID(key, ext)
path := filepath.Join(app.Conf.StorageDir, id)
info, err := os.Stat(path)
if err != nil {
app.SendError(w, r, http.StatusNotFound)
return
}
f, _ := os.Open(path)
defer f.Close()
streamer, _ := crypto.NewGCMStreamer(key)
decryptor := crypto.NewDecryptor(f, streamer.AEAD, info.Size())
contentType := mime.TypeByExtension(ext)
if contentType == "" {
contentType = "application/octet-stream"
}
w.Header().Set("Content-Type", contentType)
w.Header().Set("Content-Security-Policy", "default-src 'none'; img-src 'self' data:; media-src 'self' data:; style-src 'unsafe-inline'; sandbox allow-forms allow-scripts allow-downloads allow-same-origin")
w.Header().Set("X-Content-Type-Options", "nosniff")
w.Header().Set("Content-Disposition", fmt.Sprintf("inline; filename=%q", slug))
http.ServeContent(w, r, slug, info.ModTime(), decryptor)
}
func (app *App) FinalizeFile(w http.ResponseWriter, r *http.Request, src *os.File, filename string) {
src.Seek(0, 0)
key, _ := crypto.DeriveKey(src)
ext := filepath.Ext(filename)
id := crypto.GetID(key, ext)
src.Seek(0, 0)
finalPath := filepath.Join(app.Conf.StorageDir, id)
if _, err := os.Stat(finalPath); err == nil {
app.RespondWithLink(w, r, key, filename)
return
}
out, _ := os.Create(finalPath + ".tmp")
streamer, _ := crypto.NewGCMStreamer(key)
if err := streamer.EncryptStream(out, src); err != nil {
out.Close()
os.Remove(finalPath + ".tmp")
app.SendError(w, r, http.StatusInternalServerError)
return
}
out.Close()
os.Rename(finalPath+".tmp", finalPath)
app.RespondWithLink(w, r, key, filename)
}
+52
View File
@@ -0,0 +1,52 @@
package app
import (
"testing"
"time"
)
func TestCalculateRetention(t *testing.T) {
maxMB := int64(100)
tests := []struct {
name string
fileSize int64
wantMin time.Duration
wantMax time.Duration
}{
{
name: "Tiny file (Max retention)",
fileSize: 1024,
wantMin: MaxRetention - time.Hour,
wantMax: MaxRetention,
},
{
name: "Max size file (Min retention)",
fileSize: 100 * MegaByte,
wantMin: MinRetention,
wantMax: MinRetention + time.Minute,
},
{
name: "Half size file (Somewhere in between)",
fileSize: 50 * MegaByte,
wantMin: 24 * time.Hour,
wantMax: MaxRetention,
},
{
name: "Oversized file (Min retention)",
fileSize: 200 * MegaByte,
wantMin: MinRetention,
wantMax: MinRetention + time.Minute,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := CalculateRetention(tc.fileSize, maxMB)
if got < tc.wantMin || got > tc.wantMax {
t.Errorf("Retention for size %d: got %v, want between %v and %v",
tc.fileSize, got, tc.wantMin, tc.wantMax)
}
})
}
}
+75 -20
View File
@@ -5,14 +5,13 @@ import (
"fmt"
"net/http"
"path/filepath"
"strings"
)
func (app *App) Routes() *http.ServeMux {
mux := http.NewServeMux()
fs := http.FileServer(http.Dir("./web/static"))
mux.Handle("GET /static/", http.StripPrefix("/static/", fs))
mux.Handle("GET /static/", http.StripPrefix("/static/", app.handleStatic()))
mux.HandleFunc("GET /{$}", app.HandleHome)
mux.HandleFunc("POST /{$}", app.HandleUpload)
mux.HandleFunc("POST /upload/chunk", app.HandleChunk)
@@ -22,37 +21,93 @@ func (app *App) Routes() *http.ServeMux {
return mux
}
func (app *App) RespondWithLink(w http.ResponseWriter, r *http.Request, key []byte, originalName string) {
func (app *App) handleStatic() http.Handler {
fs := http.FileServer(http.FS(app.Assets))
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "" || strings.HasSuffix(r.URL.Path, "/") || strings.HasSuffix(r.URL.Path, ".html") {
http.NotFound(w, r)
return
}
fs.ServeHTTP(w, r)
})
}
func (app *App) HandleHome(writer http.ResponseWriter, request *http.Request) {
err := app.Tmpl.ExecuteTemplate(writer, "layout", map[string]any{
"MaxMB": app.Conf.MaxMB,
"Host": request.Host,
"Version": Version,
})
if err != nil {
app.Logger.Error("Template error", "err", err)
}
}
func (app *App) RespondWithLink(writer http.ResponseWriter, request *http.Request, key []byte, originalName string) {
keySlug := base64.RawURLEncoding.EncodeToString(key)
ext := filepath.Ext(originalName)
link := fmt.Sprintf("%s/%s%s", r.Host, keySlug, ext)
const unsafeChars = "\"<> \\/:;?@[]^`{}|~"
safeExt := strings.Map(func(r rune) rune {
if strings.ContainsRune(unsafeChars, r) {
return -1
}
return r
}, ext)
if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
fmt.Fprintf(w, `
<div style="text-align: left;">
<div class="dim" style="margin-bottom: 8px;">Upload Complete:</div>
link := fmt.Sprintf("%s/%s%s", request.Host, keySlug, safeExt)
if request.Header.Get("X-Requested-With") == "XMLHttpRequest" {
html := `
<div class="result-container">
<div class="dim result-label">Upload Complete:</div>
<div class="copy-box">
<input type="text" value="%s" id="share-url" readonly onclick="this.select()">
<button onclick="copyToClipboard(this)">Copy</button>
</div>
<button class="reset-btn" onclick="resetUI()">Upload another</button>
</div>`, link)
<div class="reset-wrapper">
<button class="reset-btn" onclick="resetUI()">Upload another</button>
</div>
</div>`
if _, err := fmt.Fprintf(writer, html, link); err != nil {
app.Logger.Error("Failed to write response", "err", err)
}
return
}
scheme := "https"
if r.TLS == nil {
scheme = "http"
scheme := request.Header.Get("X-Forwarded-Proto")
if scheme == "" {
scheme = "https"
if request.TLS == nil {
scheme = "http"
}
}
if _, err := fmt.Fprintf(writer, "%s://%s\n", scheme, link); err != nil {
app.Logger.Error("Failed to write response", "err", err)
}
fmt.Fprintf(w, "%s://%s\n", scheme, link)
}
func (app *App) SendError(w http.ResponseWriter, r *http.Request, code int) {
if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
w.WriteHeader(code)
fmt.Fprintf(w, `<div class="error-text">Error %d</div><button class="reset-btn" onclick="resetUI()">Try again</button>`, code)
func (app *App) SendError(writer http.ResponseWriter, request *http.Request, code int) {
if request.Header.Get("X-Requested-With") == "XMLHttpRequest" {
writer.WriteHeader(code)
html := `
<div class="result-container">
<div class="error-text">Error %d</div>
<div class="reset-wrapper">
<button class="reset-btn" onclick="resetUI()">Try again</button>
</div>
</div>`
if _, err := fmt.Fprintf(writer, html, code); err != nil {
app.Logger.Error("Failed to write error response", "err", err)
}
return
}
http.Error(w, http.StatusText(code), code)
http.Error(writer, http.StatusText(code), code)
}
+339
@@ -0,0 +1,339 @@
package app
import (
"bytes"
"encoding/base64"
"fmt"
"io"
"log/slog"
"mime/multipart"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"
"github.com/skidoodle/safebin/internal/crypto"
)
func setupTestApp(t *testing.T) (*App, string) {
storageDir := t.TempDir()
if err := os.MkdirAll(filepath.Join(storageDir, TempDirName), 0700); err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
webDir := filepath.Join(storageDir, "web")
if err := os.MkdirAll(webDir, 0700); err != nil {
t.Fatalf("Failed to create web dir: %v", err)
}
if err := os.WriteFile(filepath.Join(webDir, "layout.html"), []byte(`{{define "layout"}}{{template "content" .}}{{end}}`), 0600); err != nil {
t.Fatalf("Failed to write layout.html: %v", err)
}
if err := os.WriteFile(filepath.Join(webDir, "home.html"), []byte(`{{define "content"}}OK{{end}}`), 0600); err != nil {
t.Fatalf("Failed to write home.html: %v", err)
}
testFS := os.DirFS(webDir)
tmpl := ParseTemplates(testFS)
db, err := InitDB(storageDir)
if err != nil {
t.Fatalf("Failed to init db: %v", err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Errorf("Failed to close DB: %v", err)
}
})
app := &App{
Conf: Config{
StorageDir: storageDir,
MaxMB: 10,
},
Logger: discardLogger(),
Tmpl: tmpl,
Assets: testFS,
DB: db,
}
return app, storageDir
}
func discardLogger() *slog.Logger {
return slog.New(slog.NewTextHandler(io.Discard, nil))
}
func TestIntegration_StandardUploadAndDownload(t *testing.T) {
app, _ := setupTestApp(t)
server := httptest.NewServer(app.Routes())
defer server.Close()
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
part, err := writer.CreateFormFile("file", "test.txt")
if err != nil {
t.Fatalf("CreateFormFile failed: %v", err)
}
content := []byte("Hello Safebin")
if _, err := part.Write(content); err != nil {
t.Fatalf("Write part failed: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("Writer close failed: %v", err)
}
req, _ := http.NewRequest("POST", server.URL+"/", body)
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("Upload request failed: %v", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
t.Errorf("Failed to close response body: %v", err)
}
}()
if resp.StatusCode != http.StatusOK {
t.Fatalf("Upload failed status: %d", resp.StatusCode)
}
respBytes, _ := io.ReadAll(resp.Body)
respStr := string(respBytes)
parts := strings.Split(strings.TrimSpace(respStr), "/")
slugWithExt := parts[len(parts)-1]
downloadURL := fmt.Sprintf("%s/%s", server.URL, slugWithExt)
resp, err = http.Get(downloadURL)
if err != nil {
t.Fatalf("Download request failed: %v", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
t.Errorf("Failed to close download response body: %v", err)
}
}()
if resp.StatusCode != http.StatusOK {
t.Fatalf("Download failed status: %d", resp.StatusCode)
}
downloadedContent, _ := io.ReadAll(resp.Body)
if !bytes.Equal(content, downloadedContent) {
t.Errorf("Content mismatch. Want %s, got %s", content, downloadedContent)
}
}
func TestIntegration_ChunkedUpload(t *testing.T) {
app, _ := setupTestApp(t)
server := httptest.NewServer(app.Routes())
defer server.Close()
uploadID := "testchunkid123"
content := []byte("Chunk1Content-Chunk2Content")
chunk1 := content[:13]
chunk2 := content[13:]
uploadChunk(t, server.URL, uploadID, 0, chunk1)
uploadChunk(t, server.URL, uploadID, 1, chunk2)
finishURL := fmt.Sprintf("%s/upload/finish", server.URL)
form := map[string]string{
"upload_id": uploadID,
"total": "2",
"filename": "chunked.txt",
}
resp := postForm(t, finishURL, form)
defer func() {
if err := resp.Body.Close(); err != nil {
t.Errorf("Failed to close finish response body: %v", err)
}
}()
if resp.StatusCode != http.StatusOK {
t.Fatalf("Finish failed: %d", resp.StatusCode)
}
respBytes, _ := io.ReadAll(resp.Body)
respStr := string(respBytes)
parts := strings.Split(strings.TrimSpace(respStr), "/")
slugWithExt := parts[len(parts)-1]
downloadURL := fmt.Sprintf("%s/%s", server.URL, slugWithExt)
dlResp, err := http.Get(downloadURL)
if err != nil {
t.Fatalf("Download request failed: %v", err)
}
dlBytes, _ := io.ReadAll(dlResp.Body)
if err := dlResp.Body.Close(); err != nil {
t.Errorf("Failed to close download response body: %v", err)
}
if !bytes.Equal(content, dlBytes) {
t.Errorf("Chunked reassembly failed. Want %s, got %s", content, dlBytes)
}
}
func TestIntegration_ChunkedUpload_VerifyEncryption(t *testing.T) {
app, storageDir := setupTestApp(t)
server := httptest.NewServer(app.Routes())
defer server.Close()
uploadID := "securechunk123"
plaintext := []byte("This is a secret message that should be encrypted")
uploadChunk(t, server.URL, uploadID, 0, plaintext)
chunkPath := filepath.Join(storageDir, TempDirName, uploadID, "0")
encryptedData, err := os.ReadFile(chunkPath)
if err != nil {
t.Fatalf("Failed to read chunk file: %v", err)
}
if bytes.Contains(encryptedData, plaintext) {
t.Fatal("Chunk file contains plaintext data!")
}
if len(encryptedData) <= crypto.KeySize {
t.Fatalf("Chunk file too small: %d bytes", len(encryptedData))
}
key := encryptedData[:crypto.KeySize]
ciphertext := encryptedData[crypto.KeySize:]
streamer, err := crypto.NewGCMStreamer(key)
if err != nil {
t.Fatalf("Failed to create streamer: %v", err)
}
r := bytes.NewReader(ciphertext)
d := crypto.NewDecryptor(r, streamer.AEAD, int64(len(ciphertext)))
decrypted, err := io.ReadAll(d)
if err != nil {
t.Fatalf("Failed to decrypt chunk: %v", err)
}
if !bytes.Equal(decrypted, plaintext) {
t.Errorf("Decrypted data mismatch.\nWant: %s\nGot: %s", plaintext, decrypted)
}
}
func TestIntegration_Upload_VerifyEncryption(t *testing.T) {
app, storageDir := setupTestApp(t)
server := httptest.NewServer(app.Routes())
defer server.Close()
plaintext := []byte("Sensitive Data For Full Upload")
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
part, err := writer.CreateFormFile("file", "secret.txt")
if err != nil {
t.Fatalf("CreateFormFile failed: %v", err)
}
if _, err := part.Write(plaintext); err != nil {
t.Fatalf("Write failed: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("Writer close failed: %v", err)
}
req, _ := http.NewRequest("POST", server.URL+"/", body)
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
t.Errorf("Failed to close response body: %v", err)
}
}()
respBytes, _ := io.ReadAll(resp.Body)
slug := filepath.Base(strings.TrimSpace(string(respBytes)))
if len(slug) < SlugLength {
t.Fatalf("Invalid slug: %s", slug)
}
keyBase64 := slug[:SlugLength]
key, _ := base64.RawURLEncoding.DecodeString(keyBase64)
ext := filepath.Ext("secret.txt")
id := crypto.GetID(key, ext)
finalPath := filepath.Join(storageDir, id)
finalData, err := os.ReadFile(finalPath)
if err != nil {
t.Fatalf("Failed to read final file: %v", err)
}
if bytes.Contains(finalData, plaintext) {
t.Fatal("Final file contains plaintext!")
}
streamer, _ := crypto.NewGCMStreamer(key)
d := crypto.NewDecryptor(bytes.NewReader(finalData), streamer.AEAD, int64(len(finalData)))
decrypted, _ := io.ReadAll(d)
if !bytes.Equal(decrypted, plaintext) {
t.Error("Final file decryption failed")
}
}
func uploadChunk(t *testing.T, baseURL, uid string, idx int, data []byte) {
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
if err := writer.WriteField("upload_id", uid); err != nil {
t.Fatalf("WriteField upload_id failed: %v", err)
}
if err := writer.WriteField("index", fmt.Sprintf("%d", idx)); err != nil {
t.Fatalf("WriteField index failed: %v", err)
}
part, err := writer.CreateFormFile("chunk", "blob")
if err != nil {
t.Fatalf("CreateFormFile failed: %v", err)
}
if _, err := part.Write(data); err != nil {
t.Fatalf("Write part failed: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("Writer close failed: %v", err)
}
req, _ := http.NewRequest("POST", baseURL+"/upload/chunk", body)
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("Chunk %d upload failed: %v", idx, err)
}
if resp.StatusCode != http.StatusOK {
t.Fatalf("Chunk %d upload failed: status %d", idx, resp.StatusCode)
}
if err := resp.Body.Close(); err != nil {
t.Errorf("Failed to close chunk response body: %v", err)
}
}
func postForm(t *testing.T, url string, fields map[string]string) *http.Response {
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
for k, v := range fields {
if err := writer.WriteField(k, v); err != nil {
t.Fatalf("WriteField %s failed: %v", k, err)
}
}
if err := writer.Close(); err != nil {
t.Fatalf("Writer close failed: %v", err)
}
req, _ := http.NewRequest("POST", url, body)
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("Post form failed: %v", err)
}
return resp
}
+291 -20
@@ -2,49 +2,320 @@ package app
import (
"context"
"crypto/rand"
"encoding/json"
"fmt"
"io"
"math"
"os"
"path/filepath"
"strconv"
"time"
"github.com/skidoodle/safebin/internal/crypto"
"go.etcd.io/bbolt"
)
func (app *App) StartCleanupTask(ctx context.Context) {
ticker := time.NewTicker(1 * time.Hour)
ticker := time.NewTicker(CleanupInterval)
for {
select {
case <-ctx.Done():
ticker.Stop()
return
case <-ticker.C:
app.CleanDir(app.Conf.StorageDir, false)
app.CleanDir(filepath.Join(app.Conf.StorageDir, "tmp"), true)
app.CleanStorage()
app.CleanTemp(filepath.Join(app.Conf.StorageDir, TempDirName))
}
}
}
func (app *App) CleanDir(path string, isTmp bool) {
entries, _ := os.ReadDir(path)
func (app *App) saveChunk(uid string, idx int, src io.Reader) error {
dir := filepath.Join(app.Conf.StorageDir, TempDirName, uid)
if err := os.MkdirAll(dir, PermUserRWX); err != nil {
return fmt.Errorf("create chunk dir: %w", err)
}
dest, err := os.Create(filepath.Join(dir, strconv.Itoa(idx)))
if err != nil {
return fmt.Errorf("create chunk file: %w", err)
}
defer func() {
if closeErr := dest.Close(); closeErr != nil {
app.Logger.Error("Failed to close chunk dest", "err", closeErr)
}
}()
key := make([]byte, crypto.KeySize)
if _, err := rand.Read(key); err != nil {
return fmt.Errorf("generate chunk key: %w", err)
}
if _, err := dest.Write(key); err != nil {
return fmt.Errorf("write chunk key: %w", err)
}
streamer, err := crypto.NewGCMStreamer(key)
if err != nil {
return fmt.Errorf("create streamer: %w", err)
}
if err := streamer.EncryptStream(dest, src); err != nil {
return fmt.Errorf("encrypt chunk: %w", err)
}
return nil
}
func (app *App) openChunkDecryptor(uid string, idx int) (io.ReadCloser, error) {
partPath := filepath.Join(app.Conf.StorageDir, TempDirName, uid, strconv.Itoa(idx))
f, err := os.Open(partPath)
if err != nil {
return nil, fmt.Errorf("open chunk %d: %w", idx, err)
}
key := make([]byte, crypto.KeySize)
if _, err := io.ReadFull(f, key); err != nil {
_ = f.Close()
return nil, fmt.Errorf("read chunk key %d: %w", idx, err)
}
info, err := f.Stat()
if err != nil {
_ = f.Close()
return nil, fmt.Errorf("stat chunk %d: %w", idx, err)
}
bodySize := info.Size() - int64(crypto.KeySize)
if bodySize < 0 {
_ = f.Close()
return nil, fmt.Errorf("invalid chunk size %d", idx)
}
bodyReader := io.NewSectionReader(f, int64(crypto.KeySize), bodySize)
streamer, err := crypto.NewGCMStreamer(key)
if err != nil {
_ = f.Close()
return nil, fmt.Errorf("create streamer %d: %w", idx, err)
}
decryptor := crypto.NewDecryptor(bodyReader, streamer.AEAD, bodySize)
return &chunkReadCloser{Decryptor: decryptor, f: f}, nil
}
type chunkReadCloser struct {
*crypto.Decryptor
f *os.File
}
func (c *chunkReadCloser) Close() error {
return c.f.Close()
}
type SequentialChunkReader struct {
app *App
uid string
total int
currentIdx int
currentRC io.ReadCloser
}
func (s *SequentialChunkReader) Read(p []byte) (n int, err error) {
if s.currentRC == nil {
if s.currentIdx >= s.total {
return 0, io.EOF
}
rc, err := s.app.openChunkDecryptor(s.uid, s.currentIdx)
if err != nil {
return 0, err
}
s.currentRC = rc
}
n, err = s.currentRC.Read(p)
if err == io.EOF {
_ = s.currentRC.Close()
s.currentRC = nil
s.currentIdx++
if n > 0 {
return n, nil
}
return s.Read(p)
}
return n, err
}
func (s *SequentialChunkReader) Close() error {
if s.currentRC != nil {
return s.currentRC.Close()
}
return nil
}
func (app *App) encryptAndSave(src io.Reader, key []byte, finalPath string) error {
out, err := os.Create(finalPath + ".tmp")
if err != nil {
return fmt.Errorf("create final file: %w", err)
}
var closed bool
defer func() {
if !closed {
if closeErr := out.Close(); closeErr != nil {
app.Logger.Error("Failed to close final file", "err", closeErr)
}
}
if removeErr := os.Remove(finalPath + ".tmp"); removeErr != nil && !os.IsNotExist(removeErr) {
app.Logger.Error("Failed to remove temp final file", "err", removeErr)
}
}()
streamer, err := crypto.NewGCMStreamer(key)
if err != nil {
return fmt.Errorf("create streamer: %w", err)
}
if err := streamer.EncryptStream(out, src); err != nil {
return fmt.Errorf("encrypt stream: %w", err)
}
if err := out.Close(); err != nil {
return fmt.Errorf("close final file: %w", err)
}
closed = true
if err := os.Rename(finalPath+".tmp", finalPath); err != nil {
return fmt.Errorf("rename final file: %w", err)
}
return nil
}
func (app *App) RegisterFile(id string, size int64) error {
retention := CalculateRetention(size, app.Conf.MaxMB)
meta := FileMeta{
ID: id,
Size: size,
CreatedAt: time.Now(),
ExpiresAt: time.Now().Add(retention),
}
return app.DB.Update(func(tx *bbolt.Tx) error {
bFiles := tx.Bucket([]byte(DBBucketName))
bIndex := tx.Bucket([]byte(DBBucketIndexName))
data, err := json.Marshal(meta)
if err != nil {
return err
}
if err := bFiles.Put([]byte(id), data); err != nil {
return err
}
indexKey := []byte(meta.ExpiresAt.Format(time.RFC3339) + "_" + id)
return bIndex.Put(indexKey, []byte(id))
})
}
func (app *App) CleanStorage() {
now := time.Now().Format(time.RFC3339)
var toDeleteIDs []string
var toDeleteKeys []string
err := app.DB.View(func(tx *bbolt.Tx) error {
bIndex := tx.Bucket([]byte(DBBucketIndexName))
if bIndex == nil {
return nil
}
c := bIndex.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
if string(k) > now {
break
}
toDeleteKeys = append(toDeleteKeys, string(k))
toDeleteIDs = append(toDeleteIDs, string(v))
}
return nil
})
if err != nil {
app.Logger.Error("Failed to view DB for cleanup", "err", err)
return
}
if len(toDeleteIDs) == 0 {
return
}
err = app.DB.Update(func(tx *bbolt.Tx) error {
bFiles := tx.Bucket([]byte(DBBucketName))
bIndex := tx.Bucket([]byte(DBBucketIndexName))
for i, id := range toDeleteIDs {
path := filepath.Join(app.Conf.StorageDir, id)
if err := os.RemoveAll(path); err != nil {
app.Logger.Error("Failed to remove expired file", "path", id, "err", err)
}
if err := bFiles.Delete([]byte(id)); err != nil {
app.Logger.Error("Failed to delete metadata", "id", id, "err", err)
}
if err := bIndex.Delete([]byte(toDeleteKeys[i])); err != nil {
app.Logger.Error("Failed to delete index", "key", toDeleteKeys[i], "err", err)
}
}
return nil
})
if err != nil {
app.Logger.Error("Failed to update DB during cleanup", "err", err)
}
}
func (app *App) CleanTemp(path string) {
entries, err := os.ReadDir(path)
if err != nil {
app.Logger.Error("Failed to read temp dir", "err", err)
return
}
for _, entry := range entries {
info, _ := entry.Info()
expiry := 4 * time.Hour
if !isTmp {
expiry = CalculateRetention(info.Size(), app.Conf.MaxMB)
info, err := entry.Info()
if err != nil {
continue
}
if time.Since(info.ModTime()) > expiry {
os.RemoveAll(filepath.Join(path, entry.Name()))
if time.Since(info.ModTime()) > TempExpiry {
if err := os.RemoveAll(filepath.Join(path, entry.Name())); err != nil {
app.Logger.Error("Failed to remove expired temp file", "path", entry.Name(), "err", err)
}
}
}
}
func CalculateRetention(fileSize int64, maxMB int64) time.Duration {
const (
minAge = 24 * time.Hour
maxAge = 365 * 24 * time.Hour
)
ratio := math.Max(0, math.Min(1, float64(fileSize)/float64(maxMB<<20)))
retention := float64(maxAge) * math.Pow(1.0-ratio, 3)
if retention < float64(minAge) {
return minAge
func CalculateRetention(fileSize, maxMB int64) time.Duration {
ratio := math.Max(0, math.Min(1, float64(fileSize)/float64(maxMB*MegaByte)))
invRatio := 1.0 - ratio
retention := float64(MaxRetention) * (invRatio * invRatio * invRatio)
if retention < float64(MinRetention) {
return MinRetention
}
return time.Duration(retention)
}
+213
@@ -0,0 +1,213 @@
package app
import (
"bytes"
"crypto/rand"
"encoding/json"
"io"
"os"
"path/filepath"
"testing"
"time"
"github.com/skidoodle/safebin/internal/crypto"
"go.etcd.io/bbolt"
)
func TestCleanup_AbandonedChunks(t *testing.T) {
tmpDir := t.TempDir()
tmpStorage := filepath.Join(tmpDir, TempDirName)
if err := os.MkdirAll(tmpStorage, 0700); err != nil {
t.Fatalf("MkdirAll failed: %v", err)
}
db, err := InitDB(tmpDir)
if err != nil {
t.Fatalf("InitDB failed: %v", err)
}
defer func() {
if err := db.Close(); err != nil {
t.Errorf("Failed to close DB: %v", err)
}
}()
app := &App{
Conf: Config{StorageDir: tmpDir},
Logger: discardLogger(),
DB: db,
}
chunkDir := filepath.Join(tmpStorage, "some_upload_id")
if err := os.MkdirAll(chunkDir, 0700); err != nil {
t.Fatalf("MkdirAll chunkDir failed: %v", err)
}
if err := os.WriteFile(filepath.Join(chunkDir, "0"), []byte("chunk data"), 0600); err != nil {
t.Fatalf("WriteFile chunk failed: %v", err)
}
oldTime := time.Now().Add(-TempExpiry - time.Hour)
if err := os.Chtimes(chunkDir, oldTime, oldTime); err != nil {
t.Fatalf("Chtimes failed: %v", err)
}
app.CleanTemp(tmpStorage)
if _, err := os.Stat(chunkDir); !os.IsNotExist(err) {
t.Error("Cleanup failed to remove abandoned chunk directory")
}
}
func TestCleanup_ExpiredStorage(t *testing.T) {
storageDir := t.TempDir()
db, err := InitDB(storageDir)
if err != nil {
t.Fatalf("InitDB failed: %v", err)
}
defer func() {
if err := db.Close(); err != nil {
t.Errorf("Failed to close DB: %v", err)
}
}()
app := &App{
Conf: Config{
StorageDir: storageDir,
MaxMB: 100,
},
Logger: discardLogger(),
DB: db,
}
filename := "large_file_id"
path := filepath.Join(storageDir, filename)
f, err := os.Create(path)
if err != nil {
t.Fatalf("Create file failed: %v", err)
}
if err := f.Truncate(100 * MegaByte); err != nil {
t.Fatalf("Truncate failed: %v", err)
}
if err := f.Close(); err != nil {
t.Fatalf("Close file failed: %v", err)
}
expiredMeta := FileMeta{
ID: filename,
Size: 100 * MegaByte,
CreatedAt: time.Now().Add(-MinRetention - 2*time.Hour),
ExpiresAt: time.Now().Add(-time.Hour),
}
if err := app.DB.Update(func(tx *bbolt.Tx) error {
bFiles := tx.Bucket([]byte(DBBucketName))
bIndex := tx.Bucket([]byte(DBBucketIndexName))
data, _ := json.Marshal(expiredMeta)
if err := bFiles.Put([]byte(filename), data); err != nil {
return err
}
indexKey := []byte(expiredMeta.ExpiresAt.Format(time.RFC3339) + "_" + filename)
return bIndex.Put(indexKey, []byte(filename))
}); err != nil {
t.Fatalf("DB Update failed: %v", err)
}
app.CleanStorage()
if _, err := os.Stat(path); !os.IsNotExist(err) {
t.Error("Cleanup failed to remove expired large file")
}
if err := app.DB.View(func(tx *bbolt.Tx) error {
bFiles := tx.Bucket([]byte(DBBucketName))
if v := bFiles.Get([]byte(filename)); v != nil {
t.Error("Cleanup failed to remove metadata")
}
bIndex := tx.Bucket([]byte(DBBucketIndexName))
indexKey := []byte(expiredMeta.ExpiresAt.Format(time.RFC3339) + "_" + filename)
if v := bIndex.Get(indexKey); v != nil {
t.Error("Cleanup failed to remove index entry")
}
return nil
}); err != nil {
t.Fatalf("DB View failed: %v", err)
}
}
func TestSaveChunk_EncryptsData(t *testing.T) {
tmpDir := t.TempDir()
app := &App{
Conf: Config{StorageDir: tmpDir},
Logger: discardLogger(),
}
uid := "test-encrypt-chunk"
plaintext := make([]byte, 1024)
if _, err := rand.Read(plaintext); err != nil {
t.Fatal(err)
}
if err := app.saveChunk(uid, 0, bytes.NewReader(plaintext)); err != nil {
t.Fatalf("saveChunk failed: %v", err)
}
path := filepath.Join(tmpDir, TempDirName, uid, "0")
fileData, err := os.ReadFile(path)
if err != nil {
t.Fatalf("ReadFile failed: %v", err)
}
if bytes.Equal(fileData, plaintext) {
t.Fatal("Chunk stored as plaintext!")
}
if bytes.Contains(fileData, plaintext) {
t.Fatal("Chunk contains plaintext!")
}
expectedSize := crypto.KeySize + len(plaintext) + 16
if len(fileData) != expectedSize {
t.Errorf("Unexpected file size. Want %d, got %d", expectedSize, len(fileData))
}
}
func TestSequentialChunkReader_RestoresData(t *testing.T) {
tmpDir := t.TempDir()
app := &App{
Conf: Config{StorageDir: tmpDir},
Logger: discardLogger(),
}
uid := "test-restore"
data1 := []byte("chunk one data")
data2 := []byte("chunk two data")
if err := app.saveChunk(uid, 0, bytes.NewReader(data1)); err != nil {
t.Fatal(err)
}
if err := app.saveChunk(uid, 1, bytes.NewReader(data2)); err != nil {
t.Fatal(err)
}
reader := &SequentialChunkReader{
app: app,
uid: uid,
total: 2,
}
defer func() {
if err := reader.Close(); err != nil {
t.Errorf("Failed to close reader: %v", err)
}
}()
restored, err := io.ReadAll(reader)
if err != nil {
t.Fatalf("ReadAll failed: %v", err)
}
expected := append(data1, data2...)
if !bytes.Equal(restored, expected) {
t.Errorf("Restored data mismatch.\nWant: %s\nGot: %s", expected, restored)
}
}
+269
@@ -0,0 +1,269 @@
package app
import (
"crypto/rand"
"crypto/sha256"
"errors"
"io"
"net/http"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
"github.com/skidoodle/safebin/internal/crypto"
)
var reUploadID = regexp.MustCompile(`^[a-zA-Z0-9]{10,50}$`)
func (app *App) HandleUpload(writer http.ResponseWriter, request *http.Request) {
limit := (app.Conf.MaxMB * MegaByte) + MegaByte
request.Body = http.MaxBytesReader(writer, request.Body, limit)
mr, err := request.MultipartReader()
if err != nil {
app.SendError(writer, request, http.StatusBadRequest)
return
}
var filename string
var partReader io.Reader
for {
part, err := mr.NextPart()
if err == io.EOF {
break
}
if err != nil {
app.SendError(writer, request, http.StatusBadRequest)
return
}
if part.FormName() == "file" {
filename = part.FileName()
partReader = part
break
}
}
if partReader == nil {
app.SendError(writer, request, http.StatusBadRequest)
return
}
tmp, err := os.CreateTemp(filepath.Join(app.Conf.StorageDir, TempDirName), "up_*")
if err != nil {
app.Logger.Error("Failed to create temp file", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
tmpPath := tmp.Name()
defer func() {
_ = tmp.Close()
if removeErr := os.Remove(tmpPath); removeErr != nil && !os.IsNotExist(removeErr) {
app.Logger.Error("Failed to remove temp file", "err", removeErr)
}
}()
ephemeralKey := make([]byte, crypto.KeySize)
if _, err := rand.Read(ephemeralKey); err != nil {
app.Logger.Error("Failed to generate ephemeral key", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
pr, pw := io.Pipe()
hasher := sha256.New()
errChan := make(chan error, 1)
go func() {
_, err := io.Copy(io.MultiWriter(hasher, pw), partReader)
_ = pw.CloseWithError(err)
errChan <- err
}()
streamer, err := crypto.NewGCMStreamer(ephemeralKey)
if err != nil {
_ = pr.Close()
app.Logger.Error("Failed to create streamer", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
if err := streamer.EncryptStream(tmp, pr); err != nil {
_ = pr.Close()
if strings.Contains(err.Error(), "request body too large") {
app.SendError(writer, request, http.StatusRequestEntityTooLarge)
return
}
app.Logger.Error("Failed to encrypt stream", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
if err := <-errChan; err != nil {
if errors.Is(err, http.ErrMissingBoundary) || strings.Contains(err.Error(), "request body too large") {
app.SendError(writer, request, http.StatusRequestEntityTooLarge)
} else {
app.Logger.Error("Failed to read/hash upload", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
}
return
}
convergentKey := hasher.Sum(nil)[:crypto.KeySize]
if _, err := tmp.Seek(0, 0); err != nil {
app.Logger.Error("Seek failed", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
info, err := tmp.Stat()
if err != nil {
app.Logger.Error("Failed to stat temp file", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
decryptor := crypto.NewDecryptor(tmp, streamer.AEAD, info.Size())
app.finalizeUpload(writer, request, decryptor, convergentKey, filename)
}
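HandleUpload's two-pass dance (encrypt to a temp file under an ephemeral key while hashing, then re-encrypt under the derived key) exists because the convergent key is the SHA-256 of the plaintext, and the storage ID is a hash of key plus extension — so identical uploads converge on the same file. A self-contained sketch of that identity (getID mirrors GetID from internal/crypto; convergentKey is an illustrative helper):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// convergentKey mirrors the handler: the key is the SHA-256 of the
// plaintext, truncated to the AES key size (crypto.KeySize = 16).
func convergentKey(plaintext []byte) []byte {
	sum := sha256.Sum256(plaintext)
	return sum[:16]
}

// getID mirrors crypto.GetID: hash key+extension, keep 9 bytes
// (crypto.IDSize), base64url-encode without padding.
func getID(key []byte, ext string) string {
	h := sha256.New()
	h.Write(key)
	h.Write([]byte(ext))
	return base64.RawURLEncoding.EncodeToString(h.Sum(nil)[:9])
}

func main() {
	a := getID(convergentKey([]byte("same bytes")), ".txt")
	b := getID(convergentKey([]byte("same bytes")), ".txt")
	c := getID(convergentKey([]byte("same bytes")), ".png")
	fmt.Println(a == b, a == c)
}
```

This determinism is what makes the os.Stat short-circuit in finalizeUpload act as deduplication: a second upload of the same bytes with the same extension lands on an existing finalPath.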
func (app *App) HandleChunk(writer http.ResponseWriter, request *http.Request) {
const MaxChunkBody = UploadChunkSize + (1 << 20)
request.Body = http.MaxBytesReader(writer, request.Body, MaxChunkBody)
uid := request.FormValue("upload_id")
idx, err := strconv.Atoi(request.FormValue("index"))
if err != nil {
app.SendError(writer, request, http.StatusBadRequest)
return
}
maxChunks := int((app.Conf.MaxMB*MegaByte)/MinChunkSize) + ChunkSafetyMargin
if !reUploadID.MatchString(uid) || idx > maxChunks || idx < 0 {
app.SendError(writer, request, http.StatusBadRequest)
return
}
file, _, err := request.FormFile("chunk")
if err != nil {
if strings.Contains(err.Error(), "request body too large") {
app.SendError(writer, request, http.StatusRequestEntityTooLarge)
return
}
app.SendError(writer, request, http.StatusBadRequest)
return
}
defer func() {
if closeErr := file.Close(); closeErr != nil {
app.Logger.Error("Failed to close chunk file", "err", closeErr)
}
}()
if err := app.saveChunk(uid, idx, file); err != nil {
app.Logger.Error("Failed to save chunk", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
}
}
func (app *App) HandleFinish(writer http.ResponseWriter, request *http.Request) {
uid := request.FormValue("upload_id")
total, err := strconv.Atoi(request.FormValue("total"))
if err != nil {
app.SendError(writer, request, http.StatusBadRequest)
return
}
maxChunks := int((app.Conf.MaxMB*MegaByte)/MinChunkSize) + ChunkSafetyMargin
if !reUploadID.MatchString(uid) || total > maxChunks || total <= 0 {
app.SendError(writer, request, http.StatusBadRequest)
return
}
defer func() {
if err := os.RemoveAll(filepath.Join(app.Conf.StorageDir, TempDirName, uid)); err != nil {
app.Logger.Error("Failed to remove chunk dir", "err", err)
}
}()
var totalSize int64
for i := range total {
info, err := os.Stat(filepath.Join(app.Conf.StorageDir, TempDirName, uid, strconv.Itoa(i)))
if err != nil {
app.Logger.Error("Missing chunk", "index", i, "err", err)
app.SendError(writer, request, http.StatusBadRequest)
return
}
chunkContentSize := info.Size() - crypto.KeySize
if chunkContentSize < 0 {
app.SendError(writer, request, http.StatusBadRequest)
return
}
totalSize += chunkContentSize
}
if totalSize > (app.Conf.MaxMB * MegaByte) {
app.Logger.Warn("Upload exceeded quota", "uid", uid, "size", totalSize)
app.SendError(writer, request, http.StatusRequestEntityTooLarge)
return
}
hasher := sha256.New()
for i := range total {
rc, err := app.openChunkDecryptor(uid, i)
if err != nil {
app.Logger.Error("Failed to open chunk for hashing", "index", i, "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
if _, err := io.Copy(hasher, rc); err != nil {
_ = rc.Close()
app.Logger.Error("Failed to hash chunk", "index", i, "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
_ = rc.Close()
}
convergentKey := hasher.Sum(nil)[:crypto.KeySize]
multiSrc := &SequentialChunkReader{
app: app,
uid: uid,
total: total,
}
defer func() {
if err := multiSrc.Close(); err != nil {
app.Logger.Error("Failed to close sequential reader", "uid", uid, "err", err)
}
}()
app.finalizeUpload(writer, request, multiSrc, convergentKey, request.FormValue("filename"))
}
func (app *App) finalizeUpload(writer http.ResponseWriter, request *http.Request, src io.Reader, key []byte, filename string) {
ext := filepath.Ext(filename)
id := crypto.GetID(key, ext)
finalPath := filepath.Join(app.Conf.StorageDir, id)
if info, err := os.Stat(finalPath); err == nil {
if err := app.RegisterFile(id, info.Size()); err != nil {
app.Logger.Error("Failed to update metadata for existing file", "err", err)
}
app.RespondWithLink(writer, request, key, filename)
return
}
if err := app.encryptAndSave(src, key, finalPath); err != nil {
app.Logger.Error("Encryption failed", "err", err)
app.SendError(writer, request, http.StatusInternalServerError)
return
}
if info, err := os.Stat(finalPath); err == nil {
if err := app.RegisterFile(id, info.Size()); err != nil {
app.Logger.Error("Failed to save metadata", "err", err)
}
} else {
app.Logger.Error("Failed to stat new file", "err", err)
}
app.RespondWithLink(writer, request, key, filename)
}
+37 -21
@@ -6,27 +6,34 @@ import (
"crypto/sha256"
"encoding/base64"
"encoding/binary"
"errors"
"fmt"
"io"
)
const (
GCMChunkSize = 64 * 1024
NonceSize = 12
KeySize = 16
IDSize = 9
)
-func DeriveKey(r io.Reader) ([]byte, error) {
-h := sha256.New()
-if _, err := io.Copy(h, r); err != nil {
-return nil, err
+func DeriveKey(reader io.Reader) ([]byte, error) {
+hasher := sha256.New()
+if _, err := io.Copy(hasher, reader); err != nil {
+return nil, fmt.Errorf("failed to copy to hasher: %w", err)
}
-return h.Sum(nil)[:16], nil
+return hasher.Sum(nil)[:KeySize], nil
}
func GetID(key []byte, ext string) string {
-h := sha256.New()
-h.Write(key)
-h.Write([]byte(ext))
-return base64.RawURLEncoding.EncodeToString(h.Sum(nil)[:9])
+hasher := sha256.New()
+hasher.Write(key)
+hasher.Write([]byte(ext))
+return base64.RawURLEncoding.EncodeToString(hasher.Sum(nil)[:IDSize])
}
type GCMStreamer struct {
@@ -34,37 +41,46 @@ type GCMStreamer struct {
}
func NewGCMStreamer(key []byte) (*GCMStreamer, error) {
-b, err := aes.NewCipher(key)
+block, err := aes.NewCipher(key)
if err != nil {
-return nil, err
+return nil, fmt.Errorf("failed to create cipher: %w", err)
}
-g, err := cipher.NewGCM(b)
+gcm, err := cipher.NewGCM(block)
if err != nil {
-return nil, err
+return nil, fmt.Errorf("failed to create GCM: %w", err)
}
-return &GCMStreamer{AEAD: g}, nil
+return &GCMStreamer{AEAD: gcm}, nil
}
func (g *GCMStreamer) EncryptStream(dst io.Writer, src io.Reader) error {
buf := make([]byte, GCMChunkSize)
-var chunkIdx uint64 = 0
+var chunkIdx uint64
for {
-n, err := io.ReadFull(src, buf)
-if n > 0 {
+bytesRead, err := io.ReadFull(src, buf)
+if bytesRead > 0 {
nonce := make([]byte, NonceSize)
binary.BigEndian.PutUint64(nonce[4:], chunkIdx)
-ciphertext := g.AEAD.Seal(nil, nonce, buf[:n], nil)
+ciphertext := g.AEAD.Seal(nil, nonce, buf[:bytesRead], nil)
if _, werr := dst.Write(ciphertext); werr != nil {
-return werr
+return fmt.Errorf("failed to write ciphertext: %w", werr)
}
chunkIdx++
}
-if err == io.EOF || err == io.ErrUnexpectedEOF {
+if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
break
}
if err != nil {
-return err
+return fmt.Errorf("failed to read source: %w", err)
}
}
return nil
}
+152
@@ -0,0 +1,152 @@
package crypto_test
import (
"bytes"
"crypto/rand"
"io"
"testing"
"github.com/skidoodle/safebin/internal/crypto"
)
func TestDeriveKey(t *testing.T) {
data := []byte("some random file content")
reader := bytes.NewReader(data)
key1, err := crypto.DeriveKey(reader)
if err != nil {
t.Fatalf("DeriveKey failed: %v", err)
}
if len(key1) != 16 {
t.Errorf("Expected key length 16, got %d", len(key1))
}
if _, err := reader.Seek(0, 0); err != nil {
t.Fatalf("Seek failed: %v", err)
}
key2, err := crypto.DeriveKey(reader)
if err != nil {
t.Fatalf("DeriveKey failed second time: %v", err)
}
if !bytes.Equal(key1, key2) {
t.Error("DeriveKey is not deterministic")
}
}
func TestGetID(t *testing.T) {
key := make([]byte, 16)
ext := ".txt"
id1 := crypto.GetID(key, ext)
id2 := crypto.GetID(key, ext)
if id1 != id2 {
t.Error("GetID is not deterministic")
}
if len(id1) == 0 {
t.Error("GetID returned empty string")
}
}
func TestEncryptDecryptStream(t *testing.T) {
payloadSize := (64 * 1024) * 3
payload := make([]byte, payloadSize)
if _, err := rand.Read(payload); err != nil {
t.Fatalf("rand.Read payload failed: %v", err)
}
key := make([]byte, 16)
if _, err := rand.Read(key); err != nil {
t.Fatalf("rand.Read key failed: %v", err)
}
var encryptedBuf bytes.Buffer
streamer, err := crypto.NewGCMStreamer(key)
if err != nil {
t.Fatalf("Failed to create streamer: %v", err)
}
if err := streamer.EncryptStream(&encryptedBuf, bytes.NewReader(payload)); err != nil {
t.Fatalf("EncryptStream failed: %v", err)
}
encryptedReader := bytes.NewReader(encryptedBuf.Bytes())
decryptor := crypto.NewDecryptor(encryptedReader, streamer.AEAD, int64(encryptedBuf.Len()))
decrypted := make([]byte, payloadSize)
n, err := io.ReadFull(decryptor, decrypted)
if err != nil {
t.Fatalf("ReadFull failed: %v", err)
}
if n != payloadSize {
t.Errorf("Expected %d bytes, got %d", payloadSize, n)
}
if !bytes.Equal(payload, decrypted) {
t.Error("Decrypted content does not match original payload")
}
}
func TestDecryptorSeeking(t *testing.T) {
chunkSize := 64 * 1024
payload := make([]byte, chunkSize*4)
for i := range len(payload) {
payload[i] = byte(i % 255)
}
key := make([]byte, 16)
if _, err := rand.Read(key); err != nil {
t.Fatalf("rand.Read key failed: %v", err)
}
var encryptedBuf bytes.Buffer
streamer, _ := crypto.NewGCMStreamer(key)
if err := streamer.EncryptStream(&encryptedBuf, bytes.NewReader(payload)); err != nil {
t.Fatalf("EncryptStream failed: %v", err)
}
r := bytes.NewReader(encryptedBuf.Bytes())
d := crypto.NewDecryptor(r, streamer.AEAD, int64(encryptedBuf.Len()))
tests := []struct {
name string
offset int64
whence int
read int
}{
{"Start of file", 0, io.SeekStart, 100},
{"Middle of chunk 1", 1000, io.SeekStart, 100},
{"Start of chunk 2", int64(chunkSize), io.SeekStart, 100},
{"Middle of chunk 2", int64(chunkSize) + 50, io.SeekStart, 100},
{"Near end", int64(len(payload)) - 10, io.SeekStart, 10},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
pos, err := d.Seek(tc.offset, tc.whence)
if err != nil {
t.Fatalf("Seek failed: %v", err)
}
if pos != tc.offset {
t.Errorf("Expected pos %d, got %d", tc.offset, pos)
}
buf := make([]byte, tc.read)
n, err := io.ReadFull(d, buf)
if err != nil {
t.Fatalf("Read failed: %v", err)
}
if n != tc.read {
t.Errorf("Expected %d bytes, got %d", tc.read, n)
}
expected := payload[tc.offset : tc.offset+int64(tc.read)]
if !bytes.Equal(buf, expected) {
t.Errorf("Data mismatch at offset %d", tc.offset)
}
})
}
}
+49 -24
@@ -4,34 +4,43 @@ import (
"crypto/cipher"
"encoding/binary"
"errors"
"fmt"
"io"
)
+var ErrInvalidWhence = errors.New("invalid whence")
+var ErrNegativeBias = errors.New("negative bias")
type Decryptor struct {
-rs io.ReadSeeker
-aead cipher.AEAD
-size int64
-offset int64
+readSeeker io.ReadSeeker
+aead cipher.AEAD
+size int64
+offset int64
+phyOffset int64
}
-func NewDecryptor(rs io.ReadSeeker, aead cipher.AEAD, encryptedSize int64) *Decryptor {
+func NewDecryptor(readSeeker io.ReadSeeker, aead cipher.AEAD, encryptedSize int64) *Decryptor {
overhead := int64(aead.Overhead())
-fullBlocks := encryptedSize / (GCMChunkSize + overhead)
-remainder := encryptedSize % (GCMChunkSize + overhead)
+chunkWithOverhead := int64(GCMChunkSize) + overhead
-plainSize := (fullBlocks * GCMChunkSize)
+fullBlocks := encryptedSize / chunkWithOverhead
+remainder := encryptedSize % chunkWithOverhead
+plainSize := fullBlocks * GCMChunkSize
if remainder > overhead {
plainSize += (remainder - overhead)
}
return &Decryptor{
-rs: rs,
-aead: aead,
-size: plainSize,
+readSeeker: readSeeker,
+aead: aead,
+size: plainSize,
+offset: 0,
+phyOffset: -1,
}
}
-func (d *Decryptor) Read(p []byte) (int, error) {
+func (d *Decryptor) Read(buf []byte) (int, error) {
if d.offset >= d.size {
return 0, io.EOF
}
@@ -40,25 +49,37 @@ func (d *Decryptor) Read(p []byte) (int, error) {
overhang := d.offset % GCMChunkSize
overhead := int64(d.aead.Overhead())
-actualChunkSize := int64(GCMChunkSize + overhead)
+actualChunkSize := int64(GCMChunkSize) + overhead
-_, err := d.rs.Seek(chunkIdx*actualChunkSize, io.SeekStart)
-if err != nil {
-return 0, err
+targetOffset := chunkIdx * actualChunkSize
+if d.phyOffset != targetOffset {
+if _, err := d.readSeeker.Seek(targetOffset, io.SeekStart); err != nil {
+return 0, fmt.Errorf("failed to seek: %w", err)
}
+d.phyOffset = targetOffset
}
encrypted := make([]byte, actualChunkSize)
-n, err := io.ReadFull(d.rs, encrypted)
-if err != nil && err != io.ErrUnexpectedEOF {
-return 0, err
+bytesRead, err := io.ReadFull(d.readSeeker, encrypted)
+if bytesRead > 0 {
+d.phyOffset += int64(bytesRead)
+}
+if err != nil && !errors.Is(err, io.ErrUnexpectedEOF) {
+return 0, fmt.Errorf("failed to read encrypted data: %w", err)
}
nonce := make([]byte, NonceSize)
+if chunkIdx < 0 {
+return 0, fmt.Errorf("invalid chunk index")
+}
binary.BigEndian.PutUint64(nonce[4:], uint64(chunkIdx))
-plaintext, err := d.aead.Open(nil, nonce, encrypted[:n], nil)
+plaintext, err := d.aead.Open(nil, nonce, encrypted[:bytesRead], nil)
if err != nil {
-return 0, err
+return 0, fmt.Errorf("failed to decrypt: %w", err)
}
if overhang >= int64(len(plaintext)) {
@@ -66,7 +87,7 @@ func (d *Decryptor) Read(p []byte) (int, error) {
}
available := plaintext[overhang:]
-nCopied := copy(p, available)
+nCopied := copy(buf, available)
d.offset += int64(nCopied)
return nCopied, nil
@@ -74,6 +95,7 @@ func (d *Decryptor) Read(p []byte) (int, error) {
func (d *Decryptor) Seek(offset int64, whence int) (int64, error) {
var abs int64
switch whence {
case io.SeekStart:
abs = offset
@@ -82,11 +104,14 @@ func (d *Decryptor) Seek(offset int64, whence int) (int64, error) {
case io.SeekEnd:
abs = d.size + offset
default:
-return 0, errors.New("invalid whence")
+return 0, ErrInvalidWhence
}
if abs < 0 {
-return 0, errors.New("negative bias")
+return 0, ErrNegativeBias
}
d.offset = abs
return abs, nil
}
+31 -8
@@ -2,35 +2,54 @@ package main
import (
"context"
"errors"
"fmt"
"log/slog"
"net/http"
"os"
"os/signal"
"path/filepath"
"syscall"
"time"
"github.com/skidoodle/safebin/internal/app"
"github.com/skidoodle/safebin/web"
)
func main() {
cfg := app.LoadConfig()
-logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelDebug}))
+logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
+Level: slog.LevelDebug,
+AddSource: true,
+}))
logger.Info("Initializing Safebin Server",
"storage_dir", cfg.StorageDir,
"max_file_size", fmt.Sprintf("%dMB", cfg.MaxMB),
)
-if err := os.MkdirAll(fmt.Sprintf("%s/tmp", cfg.StorageDir), 0700); err != nil {
+tmpDir := filepath.Join(cfg.StorageDir, app.TempDirName)
+if err := os.MkdirAll(tmpDir, app.PermUserRWX); err != nil {
logger.Error("Failed to initialize storage directory", "err", err)
os.Exit(1)
}
db, err := app.InitDB(cfg.StorageDir)
if err != nil {
logger.Error("Failed to initialize database", "err", err)
os.Exit(1)
}
defer func() {
if err := db.Close(); err != nil {
logger.Error("Failed to close database", "err", err)
}
}()
application := &app.App{
Conf: cfg,
Logger: logger,
-Tmpl: app.ParseTemplates(),
+Tmpl: app.ParseTemplates(web.Assets),
Assets: web.Assets,
DB: db,
}
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
@@ -41,13 +60,15 @@ func main() {
srv := &http.Server{
Addr: cfg.Addr,
Handler: application.Routes(),
-ReadTimeout: 10 * time.Minute,
-WriteTimeout: 10 * time.Minute,
+ReadTimeout: app.ServerTimeout,
+WriteTimeout: app.ServerTimeout,
+IdleTimeout: app.ServerTimeout,
}
go func() {
application.Logger.Info("Server is ready and listening", "addr", cfg.Addr)
-if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
+if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
application.Logger.Error("Server failed to start", "err", err)
os.Exit(1)
}
@@ -56,10 +77,12 @@ func main() {
<-ctx.Done()
application.Logger.Info("Shutting down gracefully...")
-shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+shutdownCtx, cancel := context.WithTimeout(context.Background(), app.ShutdownTimeout)
defer cancel()
if err := srv.Shutdown(shutdownCtx); err != nil {
application.Logger.Error("Forced shutdown", "err", err)
}
application.Logger.Info("Server stopped")
}
+30 -7
@@ -4,7 +4,7 @@ const fileInput = $("file-input");
if (dropZone) {
dropZone.onclick = () => {
-if ($("idle-state").style.display !== "none") fileInput.click();
+if (!$("idle-state").classList.contains("hidden")) fileInput.click();
};
fileInput.onchange = () => {
@@ -32,10 +32,25 @@ if (dropZone) {
}
async function handleUpload(file) {
-$("idle-state").style.display = "none";
-$("busy-state").style.display = "block";
+const maxMB = parseInt(dropZone.dataset.maxMb);
+if (file.size > maxMB * 1024 * 1024) {
+$("idle-state").classList.add("hidden");
+$("result-state").classList.remove("hidden");
+$("result-state").innerHTML = `
+<div class="result-container">
+<div class="error-text">File too large (Max ${maxMB}MB)</div>
+<div class="reset-wrapper">
+<button class="reset-btn" onclick="resetUI()">Try again</button>
+</div>
+</div>`;
+return;
+}
-const uploadID = Math.random().toString(36).substring(2, 15);
+$("idle-state").classList.add("hidden");
+$("busy-state").classList.remove("hidden");
+$("p-bar-container").classList.add("visible");
+const uploadID = Array.from(window.crypto.getRandomValues(new Uint8Array(16)), (b) => b.toString(16).padStart(2, "0")).join("");
const chunkSize = 1024 * 1024 * 8;
const total = Math.ceil(file.size / chunkSize);
@@ -61,11 +76,19 @@ async function handleUpload(file) {
headers: { "X-Requested-With": "XMLHttpRequest" },
});
-$("busy-state").style.display = "none";
+$("busy-state").classList.add("hidden");
+$("result-state").classList.remove("hidden");
$("result-state").innerHTML = await res.text();
} catch (e) {
-$("busy-state").style.display = "none";
-$("result-state").innerHTML = `<div class="error-text">Upload Failed</div><button class="reset-btn" onclick="resetUI()">Try again</button>`;
+$("busy-state").classList.add("hidden");
+$("result-state").classList.remove("hidden");
+$("result-state").innerHTML = `
+<div class="result-container">
+<div class="error-text">Upload Failed</div>
+<div class="reset-wrapper">
+<button class="reset-btn" onclick="resetUI()">Try again</button>
+</div>
+</div>`;
}
}
+6
@@ -0,0 +1,6 @@
package web
import "embed"
//go:embed *.html *.css *.js *.ico
var Assets embed.FS
BIN
Binary file not shown. (Size: 1.1 KiB)
+16
@@ -0,0 +1,16 @@
{{define "content"}}
<main class="upload-area" id="drop-zone" data-max-mb="{{.MaxMB}}">
<div id="idle-state">
<div class="upload-icon"></div>
<div class="upload-text">Click or drag to upload</div>
<div class="dim">Max size: {{.MaxMB}}MB</div>
</div>
<div id="busy-state" class="hidden">
<div id="status-msg" class="status-text">Uploading...</div>
<div class="progress-bar" id="p-bar-container">
<div class="progress-fill" id="p-fill"></div>
</div>
</div>
<div id="result-state" class="hidden"></div>
</main>
{{end}}
+46
@@ -0,0 +1,46 @@
{{define "layout"}}
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="icon" type="image/vnd.microsoft.icon" href="/static/favicon.ico" />
<title>safebin</title>
<link rel="stylesheet" href="/static/style.css" />
</head>
<body>
<div class="container">
<header class="header">
<div>
<h2 class="header-title">safebin</h2>
<div class="dim">Encrypted Temporary File Storage</div>
</div>
<a href="https://github.com/skidoodle/safebin" class="github-btn" target="_blank" rel="noopener noreferrer">
<svg height="16" width="16" viewBox="0 0 16 16" fill="currentColor">
<path
d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"
></path>
</svg>
<span>GitHub</span>
</a>
</header>
{{template "content" .}}
<section class="cli-section">
<div class="dim cli-label">CLI Usage</div>
<pre class="cli-pre">curl -F file=@yourfile {{.Host}}</pre>
</section>
<footer class="footer">
<div class="dim">
{{if eq .Version "dev"}}
<a href="https://github.com/skidoodle/safebin" target="_blank" rel="noopener noreferrer">dev</a>
{{else}}
<a href="https://github.com/skidoodle/safebin/releases/tag/v{{.Version}}" target="_blank" rel="noopener noreferrer">v{{.Version}}</a>
{{end}}
</div>
</footer>
</div>
<input type="file" id="file-input" class="hidden" />
<script src="/static/app.js"></script>
</body>
</html>
{{end}}
-110
@@ -1,110 +0,0 @@
:root {
--bg: #0d1117;
--fg: #adbac7;
--accent: #4493f8;
--border: #30363d;
--success: #3fb950;
--header-white: #f0f6fc;
}
body {
background: var(--bg);
color: var(--fg);
font-family: -apple-system, system-ui, sans-serif;
margin: 0;
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
}
.container {
width: 100%;
max-width: 600px;
padding: 20px;
}
.header {
margin-bottom: 30px;
border-left: 3px solid var(--accent);
padding-left: 16px;
}
.upload-area {
border: 2px dashed var(--border);
border-radius: 12px;
padding: 60px 20px;
text-align: center;
cursor: pointer;
background: #161b22;
transition: 0.2s;
}
.upload-area:hover,
.upload-area.dragover {
border-color: var(--accent);
background: #1c2128;
}
.progress-bar {
height: 6px;
background: var(--border);
border-radius: 10px;
margin: 25px 0;
overflow: hidden;
display: none;
}
.progress-fill {
height: 100%;
background: var(--accent);
width: 0%;
transition: width 0.3s;
}
.copy-box {
display: flex;
margin-top: 20px;
gap: 8px;
}
input[type="text"] {
flex: 1;
background: #0d1117;
border: 1px solid var(--border);
color: var(--success);
padding: 12px;
border-radius: 6px;
font-family: monospace;
outline: none;
}
button {
background: var(--accent);
color: white;
border: none;
padding: 10px 20px;
border-radius: 6px;
cursor: pointer;
font-weight: 600;
}
.reset-btn {
background: transparent;
color: var(--fg);
text-decoration: underline;
margin-top: 20px;
border: none;
cursor: pointer;
opacity: 0.7;
}
.dim {
color: #768390;
font-size: 13px;
}
.error-text {
color: #f85149;
margin-bottom: 10px;
}
+263
@@ -0,0 +1,263 @@
:root {
--bg: #0d1117;
--fg: #adbac7;
--accent: #4493f8;
--border: #30363d;
--success: #3fb950;
--header-white: #f0f6fc;
}
body {
background: var(--bg);
color: var(--fg);
font-family: -apple-system, system-ui, sans-serif;
margin: 0;
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
}
.container {
width: 100%;
max-width: 800px;
padding: 20px;
}
.header {
margin-bottom: 30px;
border-left: 3px solid var(--accent);
padding-left: 16px;
display: flex;
justify-content: space-between;
align-items: center;
}
.header-title {
margin: 0;
color: var(--header-white);
}
.upload-area {
border: 2px dashed var(--border);
border-radius: 12px;
padding: 20px;
text-align: center;
cursor: pointer;
background: #161b22;
transition:
border-color 0.2s,
background 0.2s;
height: 220px;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
box-sizing: border-box;
overflow: hidden;
}
.upload-area:hover,
.upload-area.dragover {
border-color: var(--accent);
background: #1c2128;
}
.upload-icon {
font-size: 32px;
color: var(--accent);
margin-bottom: 8px;
}
.upload-text {
font-weight: 500;
color: var(--header-white);
}
.progress-bar {
height: 6px;
background: var(--border);
border-radius: 10px;
margin: 25px 0;
overflow: hidden;
display: none;
width: 95%;
}
.progress-bar.visible {
display: block;
}
.progress-fill {
height: 100%;
background: var(--accent);
width: 0%;
transition: width 0.3s;
}
#busy-state {
width: 100%;
display: flex;
flex-direction: column;
align-items: center;
}
#result-state {
width: 100%;
display: flex;
justify-content: center;
}
.result-container {
width: 100%;
max-width: 700px;
display: flex;
flex-direction: column;
padding: 0 20px;
box-sizing: border-box;
}
.result-label {
text-align: left;
margin-bottom: 8px;
}
.copy-box {
display: flex;
gap: 8px;
width: 100%;
}
input[type="text"] {
flex: 1;
background: #0d1117;
border: 1px solid var(--border);
color: var(--success);
padding: 12px;
border-radius: 6px;
font-family: monospace;
font-size: 14px;
outline: none;
min-width: 0;
width: 100%;
}
button {
background: var(--accent);
color: white;
border: none;
padding: 10px 20px;
border-radius: 6px;
cursor: pointer;
font-weight: 600;
white-space: nowrap;
}
.reset-wrapper {
margin-top: 20px;
display: flex;
justify-content: center;
}
.reset-btn {
background: transparent;
color: var(--fg);
text-decoration: underline;
border: none;
cursor: pointer;
opacity: 0.7;
font-size: 14px;
}
.reset-btn:hover {
opacity: 1;
}
.dim {
color: #768390;
font-size: 13px;
}
.error-text {
color: #f85149;
margin-bottom: 10px;
}
.github-btn {
display: flex;
align-items: center;
gap: 8px;
padding: 6px 12px;
background: #21262d;
border: 1px solid var(--border);
border-radius: 6px;
color: var(--header-white);
text-decoration: none;
font-size: 13px;
font-weight: 500;
transition: 0.2s;
}
.github-btn:hover {
background: #30363d;
border-color: #8b949e;
}
.github-btn svg {
opacity: 0.9;
}
.cli-section {
margin-top: 40px;
padding-top: 24px;
border-top: 1px solid var(--border);
}
.cli-label {
text-transform: uppercase;
font-size: 11px;
font-weight: 700;
letter-spacing: 1px;
}
.cli-pre {
background: #161b22;
padding: 16px;
border-radius: 8px;
font-size: 13px;
overflow-x: auto;
border: 1px solid var(--border);
}
.status-text {
font-weight: 500;
}
.hidden {
display: none !important;
}
.footer {
margin-top: 20px;
text-align: center;
opacity: 0.5;
}
.footer a {
color: inherit;
text-decoration: none;
}
.footer a:hover {
text-decoration: underline;
}
@media (max-width: 400px) {
.github-btn span {
display: none;
}
.github-btn {
padding: 6px;
}
}
-31
@@ -1,31 +0,0 @@
{{define "base"}}
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>safebin</title>
<link rel="stylesheet" href="/static/css/style.css" />
</head>
<body>
<div class="container">
<header class="header">
<h2 style="margin: 0; color: var(--header-white)">safebin</h2>
<div class="dim">Encrypted Temporary File Storage</div>
</header>
{{template "content" .}}
<section style="margin-top: 40px; padding-top: 24px; border-top: 1px solid var(--border)">
<div class="dim" style="text-transform: uppercase; font-size: 11px; font-weight: 700; letter-spacing: 1px">CLI Usage</div>
<pre style="background: #161b22; padding: 16px; border-radius: 8px; font-size: 13px; overflow-x: auto; border: 1px solid var(--border)">
curl -F file=@yourfile {{.Host}}</pre
>
</section>
</div>
<input type="file" id="file-input" style="display: none" />
<script src="/static/js/app.js"></script>
</body>
</html>
{{end}}
-18
View File
@@ -1,18 +0,0 @@
{{define "content"}}
<main class="upload-area" id="drop-zone">
<div id="idle-state">
<div style="font-size: 32px; color: var(--accent)"></div>
<div style="font-weight: 500; color: var(--header-white)">Click or drag to upload</div>
<div class="dim">Max size: {{.MaxMB}}MB</div>
</div>
<div id="busy-state" style="display: none">
<div id="status-msg" style="font-weight: 500">Uploading...</div>
<div class="progress-bar" id="p-bar-container" style="display: block">
<div class="progress-fill" id="p-fill"></div>
</div>
</div>
<div id="result-state"></div>
</main>
{{end}}