7 Commits

- `aca7267301` refactor: internals (Signed-off-by: skidoodle <contact@albert.lol>) 2026-01-17 22:58:38 +01:00
- `5bc9497fa0` fix: enforce max file size (Signed-off-by: skidoodle <contact@albert.lol>) 2026-01-16 04:16:17 +01:00
- `956dff48eb` fix: web responsivity (Signed-off-by: skidoodle <contact@albert.lol>) 2026-01-16 04:03:38 +01:00
- `d7ba7f63c6` fix: remove goreleaser changelog requirement (Signed-off-by: skidoodle <contact@albert.lol>) 2026-01-16 03:24:18 +01:00
- `fc129b7e9f` fix: install media-types in docker 2026-01-16 03:18:43 +01:00
- `2d1b2aac48` chore: remove old trash (Signed-off-by: skidoodle <contact@albert.lol>) 2026-01-16 03:02:47 +01:00
- `39ea3ba48d` docs: update readme (Signed-off-by: skidoodle <contact@albert.lol>) 2026-01-16 02:50:15 +01:00
18 changed files with 792 additions and 318 deletions
-1  View File
@@ -28,7 +28,6 @@ archives:
   files:
     - web/**/*
     - README.md
-    - CHANGELOG.md
 dockers:
   - image_templates:
-44  View File
@@ -1,44 +0,0 @@
-# Changelog
-## [3.0.0](https://github.com/skidoodle/safebin/compare/v2.0.0...v3.0.0) (2026-01-16)
-### ⚠ BREAKING CHANGES
-* Docker volume paths and environment variables have been updated. The internal storage path in the container has changed from `/home/appuser/storage` to `/app/storage`. Existing deployments must update their volume mappings and environment variable names to maintain persistence.
-### Code Refactoring
-* relocate core logic to internal package and modernize project structure ([43be383](https://github.com/skidoodle/safebin/commit/43be383fdbfb0263036284b8beb0ce3c646db87c))
-## [2.0.0](https://github.com/skidoodle/safebin/compare/v1.1.0...v2.0.0) (2026-01-16)
-### ⚠ BREAKING CHANGES
-* The encryption scheme and URL structure have been completely redesigned. Links generated with previous versions of safebin are no longer compatible and cannot be decrypted by this version.
-### Features
-* overhaul encryption to zero-knowledge at rest and modernize UI ([599347e](https://github.com/skidoodle/safebin/commit/599347e867444288fa58f8e358269121c5d32e36))
-## [1.1.0](https://github.com/skidoodle/safebin/compare/v1.0.1...v1.1.0) (2026-01-14)
-### Features
-* implement chunked uploads and environment-based configuration ([1ccc80a](https://github.com/skidoodle/safebin/commit/1ccc80ad4e5b949a8f1d1f3a8b3b4e8c4d2e1353))
-## [1.0.1](https://github.com/skidoodle/safebin/compare/v1.0.0...v1.0.1) (2026-01-14)
-### Bug Fixes
-* better dockerfile ([c1ecbe5](https://github.com/skidoodle/safebin/commit/c1ecbe567a24eb4e755f19fee68422025f3b15b2))
-## 1.0.0 (2026-01-13)
-### Features
-* add automated release and docker workflow ([e40e6d0](https://github.com/skidoodle/safebin/commit/e40e6d01afd0067bba5d0cf4a9b1ff3d7122259f))
+2 -1  View File
@@ -1,4 +1,4 @@
-FROM --platform=$BUILDPLATFORM golang:1.25.5 AS builder
+FROM --platform=$BUILDPLATFORM golang:1.25.6 AS builder
 WORKDIR /app
@@ -21,6 +21,7 @@ LABEL org.opencontainers.image.licenses="GPL-2.0-only"
 RUN apt-get update && apt-get install -y --no-install-recommends \
     ca-certificates \
+    media-types \
     && rm -rf /var/lib/apt/lists/*
 RUN useradd -m -u 10001 -s /bin/bash appuser
+1  View File
@@ -2,6 +2,7 @@ FROM debian:trixie-slim
 RUN apt-get update && apt-get install -y --no-install-recommends \
     ca-certificates \
+    media-types \
     && rm -rf /var/lib/apt/lists/*
 RUN useradd -m -u 10001 -s /bin/bash appuser
+37 -52  View File
@@ -4,74 +4,47 @@
 ## Features
-- **Server-Side Encryption**: Files are encrypted using AES-256-GCM before touching the disk.
-- **Log-Safe Keys**: The decryption key is stored in the URL fragment (`#`). Since fragments are never sent to the server, the key never appears in your HTTP access logs.
+- **End-to-End Encryption**: Files are encrypted using AES-128-GCM before being written to disk.
+- **Key-Derived URLs**: The decryption key is part of the URL. The server uses this key to locate and decrypt the file on the fly.
 - **Integrity**: Uses GCM (Galois/Counter Mode) to ensure files cannot be tampered with while stored.
-- **Deterministic**: Identical files result in the same ID, allowing for storage deduplication.
+- **Storage Deduplication**: Identical files result in the same ID, saving disk space.
+- **Chunked Uploads**: Supports large file uploads via the web interface using 8MB chunks.
 ## Usage
-You can interact with the service via the web interface or through the command line.
-### Uploading a file
+### Web Interface
+Simply drag and drop files into the browser. The interface handles chunking and provides a shareable link once the upload is finalized.
+### Command Line (CLI)
+You can upload files directly using `curl`:
 ```bash
-curl -F 'file=@archive.zip' https://bin.example.com
+curl -F 'file=@photo.jpg' https://bin.example.com
 ```
-The server will return a URL containing the file ID and the decryption key:
-`https://bin.example.com/vS6_1_8pS-Y_8-8_...`
+The server will return a direct link:
+`https://bin.example.com/0iEZGtW-ikVdu...jpg`
-### Downloading a file
-Simply open the link in a browser or use `curl`:
-```bash
-curl https://bin.example.com/vS6_1_8pS-Y_8-8_... > archive.zip
-```
 ## Configuration
-`safebin` is configured via command-line flags:
-| Flag | Description | Default |
-| :--- | :--- | :--- |
-| `-h` | Bind address for the server. | `0.0.0.0` |
-| `-p` | Port to listen on. | `8080` |
-| `-s` | Directory where encrypted files are stored. | `./storage` |
-| `-m` | Maximum file size in mb. | `512` |
+`safebin` can be configured via environment variables or command-line flags:
+| Flag | Environment Variable | Description | Default |
+| :--- | :--- | :--- | :--- |
+| `-h` | `SAFEBIN_HOST` | Bind address for the server. | `0.0.0.0` |
+| `-p` | `SAFEBIN_PORT` | Port to listen on. | `8080` |
+| `-s` | `SAFEBIN_STORAGE` | Directory for encrypted storage. | `./storage` |
+| `-m` | `SAFEBIN_MAX_MB` | Maximum file size in MB. | `512` |
-## Running Locally
-### With Docker
-```bash
-git clone https://github.com/skidoodle/safebin
-cd safebin
-docker compose -f compose.dev.yaml up --build
-```
-### Without Docker
-Requires Go 1.25 or higher.
-```bash
-git clone https://github.com/skidoodle/safebin
-cd safebin
-go build -o safebin .
-./safebin -p 8080 -s ./data
-```
-## Deploying
+## Deployment
 ### Docker Compose
-The easiest way to deploy is using the provided `compose.yaml`.
+The easiest way to deploy is using the provided `compose.yaml`:
 ```yaml
 services:
   safebin:
-    image: ghcr.io/skidoodle/safebin:main
+    image: ghcr.io/skidoodle/safebin:latest
     container_name: safebin
     restart: unless-stopped
     ports:
@@ -88,10 +61,22 @@ volumes:
   data:
 ```
+### Manual Build
+Requires Go 1.25 or higher.
+```bash
+go build -o safebin .
+./safebin -p 8080 -s ./data
+```
 ## Retention Policy
-The server runs a cleanup task every hour. Retention is calculated using a cubic scaling formula to balance disk usage:
-- **Small files (< 1MB)**: Up to 365 days.
-- **Large files (512MB)**: 24 hours.
-This ensures that the server doesn't run out of disk space due to large binary blobs while allowing small text files or images to persist for longer periods.
+The server runs a background cleanup task every hour. Retention is calculated using a cubic scaling formula to prioritize small files:
+- **Small files (e.g., < 1MB)**: Kept for up to **365 days**.
+- **Large files (at Max MB)**: Kept for **24 hours**.
+- **Temporary Uploads**: Unfinished chunked uploads are purged after **4 hours**.
+## License
+This project is licensed under the **GNU General Public License v2.0**.
+1 -1  View File
@@ -1,6 +1,6 @@
 services:
   safebin:
-    image: ghcr.io/skidoodle/safebin:main
+    image: ghcr.io/skidoodle/safebin:latest
     container_name: safebin
     restart: unless-stopped
     ports:
+1 -1  View File
@@ -1,3 +1,3 @@
 module github.com/skidoodle/safebin
-go 1.25.5
+go 1.25.6
+38 -20  View File
@@ -21,36 +21,54 @@ type App struct {
     Logger *slog.Logger
 }
+const (
+    defaultHost    = "0.0.0.0"
+    defaultPort    = 8080
+    defaultStorage = "./storage"
+    defaultMaxMB   = 512
+)
 func LoadConfig() Config {
-    h := getEnv("SAFEBIN_HOST", "0.0.0.0")
-    p := getEnvInt("SAFEBIN_PORT", 8080)
-    s := getEnv("SAFEBIN_STORAGE", "./storage")
-    mDefault := int64(getEnvInt("SAFEBIN_MAX_MB", 512))
-    var m int64
-    flag.StringVar(&h, "h", h, "Bind address")
-    flag.IntVar(&p, "p", p, "Port")
-    flag.StringVar(&s, "s", s, "Storage directory")
-    flag.Int64Var(&m, "m", mDefault, "Max file size in MB")
+    hostEnv := getEnv("SAFEBIN_HOST", defaultHost)
+    portEnv := getEnvInt("SAFEBIN_PORT", defaultPort)
+    storageEnv := getEnv("SAFEBIN_STORAGE", defaultStorage)
+    maxMBEnv := int64(getEnvInt("SAFEBIN_MAX_MB", defaultMaxMB))
+    var host string
+    var port int
+    var storage string
+    var maxMB int64
+    flag.StringVar(&host, "h", hostEnv, "Bind address")
+    flag.IntVar(&port, "p", portEnv, "Port")
+    flag.StringVar(&storage, "s", storageEnv, "Storage directory")
+    flag.Int64Var(&maxMB, "m", maxMBEnv, "Max file size in MB")
     flag.Parse()
-    return Config{Addr: fmt.Sprintf("%s:%d", h, p), StorageDir: s, MaxMB: m}
+    return Config{
+        Addr:       fmt.Sprintf("%s:%d", host, port),
+        StorageDir: storage,
+        MaxMB:      maxMB,
+    }
 }
-func getEnv(k, f string) string {
-    if v, ok := os.LookupEnv(k); ok {
-        return v
+func getEnv(key, fallback string) string {
+    if value, ok := os.LookupEnv(key); ok {
+        return value
     }
-    return f
+    return fallback
 }
-func getEnvInt(k string, f int) int {
-    if v, ok := os.LookupEnv(k); ok {
-        if i, err := strconv.Atoi(v); err == nil {
+func getEnvInt(key string, fallback int) int {
+    if value, ok := os.LookupEnv(key); ok {
+        i, err := strconv.Atoi(value)
+        if err == nil {
             return i
         }
     }
-    return f
+    return fallback
 }
 func ParseTemplates() *template.Template {
+318 -75  View File
@@ -2,6 +2,7 @@ package app
 import (
     "encoding/base64"
+    "errors"
     "fmt"
     "io"
     "mime"
@@ -14,106 +15,268 @@ import (
     "github.com/skidoodle/safebin/internal/crypto"
 )
+const (
+    uploadChunkSize    = 8 << 20
+    maxRequestOverhead = 10 << 20
+    permUserRWX        = 0o700
+    slugLength         = 22
+    keyLength          = 16
+    megaByte           = 1 << 20
+    chunkSafetyMargin  = 2
+)
 var reUploadID = regexp.MustCompile(`^[a-zA-Z0-9]{10,50}$`)
-func (app *App) HandleHome(w http.ResponseWriter, r *http.Request) {
-    err := app.Tmpl.ExecuteTemplate(w, "base", map[string]any{
+func (app *App) HandleHome(writer http.ResponseWriter, request *http.Request) {
+    err := app.Tmpl.ExecuteTemplate(writer, "base", map[string]any{
         "MaxMB": app.Conf.MaxMB,
-        "Host":  r.Host,
+        "Host":  request.Host,
     })
     if err != nil {
         app.Logger.Error("Template error", "err", err)
     }
 }
-func (app *App) HandleUpload(w http.ResponseWriter, r *http.Request) {
-    limit := (app.Conf.MaxMB << 20) + (1 << 20)
-    r.Body = http.MaxBytesReader(w, r.Body, limit)
-    file, header, err := r.FormFile("file")
-    if err != nil {
-        app.SendError(w, r, http.StatusBadRequest)
-        return
-    }
-    defer file.Close()
-    tmpPath := filepath.Join(app.Conf.StorageDir, "tmp", fmt.Sprintf("up_%d", os.Getpid()))
-    tmp, _ := os.Create(tmpPath)
-    defer os.Remove(tmpPath)
-    defer tmp.Close()
-    if _, err := io.Copy(tmp, file); err != nil {
-        app.SendError(w, r, http.StatusRequestEntityTooLarge)
-        return
-    }
-    app.FinalizeFile(w, r, tmp, header.Filename)
-}
+func (app *App) HandleUpload(writer http.ResponseWriter, request *http.Request) {
+    limit := (app.Conf.MaxMB * megaByte) + megaByte
+    request.Body = http.MaxBytesReader(writer, request.Body, limit)
+    file, header, err := request.FormFile("file")
+    if err != nil {
+        if err.Error() == "http: request body too large" {
+            app.SendError(writer, request, http.StatusRequestEntityTooLarge)
+            return
+        }
+        app.SendError(writer, request, http.StatusBadRequest)
+        return
+    }
+    defer func() {
+        if closeErr := file.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close upload file", "err", closeErr)
+        }
+    }()
+    tmp, err := os.CreateTemp(filepath.Join(app.Conf.StorageDir, "tmp"), "up_*")
+    if err != nil {
+        app.Logger.Error("Failed to create temp file", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+        return
+    }
+    tmpPath := tmp.Name()
+    defer func() {
+        if removeErr := os.Remove(tmpPath); removeErr != nil && !os.IsNotExist(removeErr) {
+            app.Logger.Error("Failed to remove temp file", "err", removeErr)
+        }
+    }()
+    defer func() {
+        if closeErr := tmp.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close temp file", "err", closeErr)
+        }
+    }()
+    if _, err := io.Copy(tmp, file); err != nil {
+        app.Logger.Error("Failed to write temp file", "err", err)
+        app.SendError(writer, request, http.StatusRequestEntityTooLarge)
+        return
+    }
+    app.FinalizeFile(writer, request, tmp, header.Filename)
+}
-func (app *App) HandleChunk(w http.ResponseWriter, r *http.Request) {
-    uid := r.FormValue("upload_id")
-    idx, _ := strconv.Atoi(r.FormValue("index"))
-    if !reUploadID.MatchString(uid) || idx > 1000 {
-        app.SendError(w, r, http.StatusBadRequest)
-        return
-    }
-    file, _, err := r.FormFile("chunk")
-    if err != nil {
-        return
-    }
-    defer file.Close()
-    dir := filepath.Join(app.Conf.StorageDir, "tmp", uid)
-    os.MkdirAll(dir, 0700)
-    dest, _ := os.Create(filepath.Join(dir, strconv.Itoa(idx)))
-    defer dest.Close()
-    io.Copy(dest, file)
-}
+func (app *App) HandleChunk(writer http.ResponseWriter, request *http.Request) {
+    request.Body = http.MaxBytesReader(writer, request.Body, maxRequestOverhead)
+    uid := request.FormValue("upload_id")
+    idx, err := strconv.Atoi(request.FormValue("index"))
+    if err != nil {
+        app.SendError(writer, request, http.StatusBadRequest)
+        return
+    }
+    maxChunks := int((app.Conf.MaxMB*megaByte)/uploadChunkSize) + chunkSafetyMargin
+    if !reUploadID.MatchString(uid) || idx > maxChunks || idx < 0 {
+        app.SendError(writer, request, http.StatusBadRequest)
+        return
+    }
+    file, _, err := request.FormFile("chunk")
+    if err != nil {
+        if err.Error() == "http: request body too large" {
+            app.SendError(writer, request, http.StatusRequestEntityTooLarge)
+            return
+        }
+        app.SendError(writer, request, http.StatusBadRequest)
+        return
+    }
+    defer func() {
+        if closeErr := file.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close chunk file", "err", closeErr)
+        }
+    }()
+    if err := app.saveChunk(uid, idx, file); err != nil {
+        app.Logger.Error("Failed to save chunk", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+    }
+}
+func (app *App) saveChunk(uid string, idx int, src io.Reader) error {
+    dir := filepath.Join(app.Conf.StorageDir, "tmp", uid)
+    if err := os.MkdirAll(dir, permUserRWX); err != nil {
+        return fmt.Errorf("create chunk dir: %w", err)
+    }
+    dest, err := os.Create(filepath.Join(dir, strconv.Itoa(idx)))
+    if err != nil {
+        return fmt.Errorf("create chunk file: %w", err)
+    }
+    defer func() {
+        if closeErr := dest.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close chunk dest", "err", closeErr)
+        }
+    }()
+    if _, err := io.Copy(dest, src); err != nil {
+        return fmt.Errorf("copy chunk: %w", err)
+    }
+    return nil
+}
-func (app *App) HandleFinish(w http.ResponseWriter, r *http.Request) {
-    uid := r.FormValue("upload_id")
-    total, _ := strconv.Atoi(r.FormValue("total"))
-    if !reUploadID.MatchString(uid) || total > 1000 {
-        app.SendError(w, r, http.StatusBadRequest)
-        return
-    }
-    tmpPath := filepath.Join(app.Conf.StorageDir, "tmp", "m_"+uid)
-    merged, _ := os.Create(tmpPath)
-    defer os.Remove(tmpPath)
-    defer merged.Close()
-    for i := range total {
-        partPath := filepath.Join(app.Conf.StorageDir, "tmp", uid, strconv.Itoa(i))
-        part, err := os.Open(partPath)
-        if err != nil {
-            continue
-        }
-        io.Copy(merged, part)
-        part.Close()
-    }
-    app.FinalizeFile(w, r, merged, r.FormValue("filename"))
-    os.RemoveAll(filepath.Join(app.Conf.StorageDir, "tmp", uid))
-}
+func (app *App) HandleFinish(writer http.ResponseWriter, request *http.Request) {
+    uid := request.FormValue("upload_id")
+    total, err := strconv.Atoi(request.FormValue("total"))
+    if err != nil {
+        app.SendError(writer, request, http.StatusBadRequest)
+        return
+    }
+    maxChunks := int((app.Conf.MaxMB*megaByte)/uploadChunkSize) + chunkSafetyMargin
+    if !reUploadID.MatchString(uid) || total > maxChunks || total <= 0 {
+        app.SendError(writer, request, http.StatusBadRequest)
+        return
+    }
+    mergedPath, err := app.mergeChunks(uid, total)
+    if err != nil {
+        app.Logger.Error("Merge failed", "err", err)
+        if errors.Is(err, io.ErrShortWrite) {
+            app.SendError(writer, request, http.StatusRequestEntityTooLarge)
+        } else {
+            app.SendError(writer, request, http.StatusInternalServerError)
+        }
+        return
+    }
+    defer func() {
+        if removeErr := os.Remove(mergedPath); removeErr != nil && !os.IsNotExist(removeErr) {
+            app.Logger.Error("Failed to remove merged file", "err", removeErr)
+        }
+    }()
+    mergedRead, err := os.Open(mergedPath)
+    if err != nil {
+        app.Logger.Error("Failed to open merged file", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+        return
+    }
+    defer func() {
+        if closeErr := mergedRead.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close merged reader", "err", closeErr)
+        }
+    }()
+    app.FinalizeFile(writer, request, mergedRead, request.FormValue("filename"))
+    if err := os.RemoveAll(filepath.Join(app.Conf.StorageDir, "tmp", uid)); err != nil {
+        app.Logger.Error("Failed to remove chunk dir", "err", err)
+    }
+}
+func (app *App) mergeChunks(uid string, total int) (string, error) {
+    tmpPath := filepath.Join(app.Conf.StorageDir, "tmp", "m_"+uid)
+    merged, err := os.Create(tmpPath)
+    if err != nil {
+        return "", fmt.Errorf("create merge file: %w", err)
+    }
+    defer func() {
+        if closeErr := merged.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close merged file", "err", closeErr)
+        }
+    }()
+    limit := app.Conf.MaxMB * megaByte
+    var written int64
+    for i := range total {
+        partPath := filepath.Join(app.Conf.StorageDir, "tmp", uid, strconv.Itoa(i))
+        part, err := os.Open(partPath)
+        if err != nil {
+            return "", fmt.Errorf("open chunk %d: %w", i, err)
+        }
+        n, err := io.Copy(merged, part)
+        if closeErr := part.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close chunk part", "err", closeErr)
+        }
+        if err != nil {
+            return "", fmt.Errorf("append chunk %d: %w", i, err)
+        }
+        written += n
+        if written > limit {
+            return "", io.ErrShortWrite
+        }
+    }
+    return tmpPath, nil
+}
-func (app *App) HandleGetFile(w http.ResponseWriter, r *http.Request) {
-    slug := r.PathValue("slug")
-    if len(slug) < 22 {
-        app.SendError(w, r, http.StatusBadRequest)
+func (app *App) HandleGetFile(writer http.ResponseWriter, request *http.Request) {
+    slug := request.PathValue("slug")
+    if len(slug) < slugLength {
+        app.SendError(writer, request, http.StatusBadRequest)
         return
     }
-    keyBase64 := slug[:22]
-    ext := slug[22:]
+    keyBase64 := slug[:slugLength]
+    ext := slug[slugLength:]
     key, err := base64.RawURLEncoding.DecodeString(keyBase64)
-    if err != nil || len(key) != 16 {
-        app.SendError(w, r, http.StatusUnauthorized)
+    if err != nil || len(key) != keyLength {
+        app.SendError(writer, request, http.StatusUnauthorized)
         return
     }
@@ -122,53 +285,133 @@ func (app *App) HandleGetFile(w http.ResponseWriter, r *http.Request) {
     info, err := os.Stat(path)
     if err != nil {
-        app.SendError(w, r, http.StatusNotFound)
+        app.SendError(writer, request, http.StatusNotFound)
         return
     }
-    f, _ := os.Open(path)
-    defer f.Close()
-    streamer, _ := crypto.NewGCMStreamer(key)
-    decryptor := crypto.NewDecryptor(f, streamer.AEAD, info.Size())
+    file, err := os.Open(path)
+    if err != nil {
+        app.Logger.Error("Failed to open file", "path", path, "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+        return
+    }
+    defer func() {
+        if closeErr := file.Close(); closeErr != nil {
+            app.Logger.Error("Failed to close file", "err", closeErr)
+        }
+    }()
+    streamer, err := crypto.NewGCMStreamer(key)
+    if err != nil {
+        app.Logger.Error("Failed to create crypto streamer", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+        return
+    }
+    decryptor := crypto.NewDecryptor(file, streamer.AEAD, info.Size())
     contentType := mime.TypeByExtension(ext)
     if contentType == "" {
         contentType = "application/octet-stream"
     }
-    w.Header().Set("Content-Type", contentType)
-    w.Header().Set("Content-Security-Policy", "default-src 'none'; img-src 'self' data:; media-src 'self' data:; style-src 'unsafe-inline'; sandbox allow-forms allow-scripts allow-downloads allow-same-origin")
-    w.Header().Set("X-Content-Type-Options", "nosniff")
-    w.Header().Set("Content-Disposition", fmt.Sprintf("inline; filename=%q", slug))
-    http.ServeContent(w, r, slug, info.ModTime(), decryptor)
+    csp := "default-src 'none'; img-src 'self' data:; media-src 'self' data:; " +
+        "style-src 'unsafe-inline'; sandbox allow-forms allow-scripts allow-downloads allow-same-origin"
+    writer.Header().Set("Content-Type", contentType)
+    writer.Header().Set("Content-Security-Policy", csp)
+    writer.Header().Set("X-Content-Type-Options", "nosniff")
+    writer.Header().Set("Content-Disposition", fmt.Sprintf("inline; filename=%q", slug))
+    http.ServeContent(writer, request, slug, info.ModTime(), decryptor)
 }
-func (app *App) FinalizeFile(w http.ResponseWriter, r *http.Request, src *os.File, filename string) {
-    src.Seek(0, 0)
-    key, _ := crypto.DeriveKey(src)
+func (app *App) FinalizeFile(writer http.ResponseWriter, request *http.Request, src *os.File, filename string) {
+    if _, err := src.Seek(0, 0); err != nil {
+        app.Logger.Error("Seek failed", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+        return
+    }
+    key, err := crypto.DeriveKey(src)
+    if err != nil {
+        app.Logger.Error("Key derivation failed", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+        return
+    }
     ext := filepath.Ext(filename)
     id := crypto.GetID(key, ext)
-    src.Seek(0, 0)
     finalPath := filepath.Join(app.Conf.StorageDir, id)
     if _, err := os.Stat(finalPath); err == nil {
-        app.RespondWithLink(w, r, key, filename)
+        app.RespondWithLink(writer, request, key, filename)
         return
     }
-    out, _ := os.Create(finalPath + ".tmp")
-    streamer, _ := crypto.NewGCMStreamer(key)
-    if err := streamer.EncryptStream(out, src); err != nil {
-        out.Close()
-        os.Remove(finalPath + ".tmp")
-        app.SendError(w, r, http.StatusInternalServerError)
+    if _, err := src.Seek(0, 0); err != nil {
+        app.Logger.Error("Seek failed", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
         return
     }
-    out.Close()
-    os.Rename(finalPath+".tmp", finalPath)
-    app.RespondWithLink(w, r, key, filename)
+    if err := app.encryptAndSave(src, key, finalPath); err != nil {
+        app.Logger.Error("Encryption failed", "err", err)
+        app.SendError(writer, request, http.StatusInternalServerError)
+        return
+    }
+    app.RespondWithLink(writer, request, key, filename)
+}
+func (app *App) encryptAndSave(src io.Reader, key []byte, finalPath string) error {
+    out, err := os.Create(finalPath + ".tmp")
+    if err != nil {
+        return fmt.Errorf("create final file: %w", err)
+    }
+    var closed bool
+    defer func() {
+        if !closed {
+            if closeErr := out.Close(); closeErr != nil {
+                app.Logger.Error("Failed to close final file", "err", closeErr)
+            }
+        }
+        if removeErr := os.Remove(finalPath + ".tmp"); removeErr != nil && !os.IsNotExist(removeErr) {
+            app.Logger.Error("Failed to remove temp final file", "err", removeErr)
+        }
+    }()
+    streamer, err := crypto.NewGCMStreamer(key)
+    if err != nil {
+        return fmt.Errorf("create streamer: %w", err)
+    }
+    if err := streamer.EncryptStream(out, src); err != nil {
+        return fmt.Errorf("encrypt stream: %w", err)
+    }
+    if err := out.Close(); err != nil {
+        return fmt.Errorf("close final file: %w", err)
+    }
+    closed = true
+    if err := os.Rename(finalPath+".tmp", finalPath); err != nil {
+        return fmt.Errorf("rename final file: %w", err)
+    }
+    return nil
 }
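The chunk-count guard introduced above derives its bound from the configured size cap instead of the old hard-coded `1000`: with the 8 MB `uploadChunkSize` and the default 512 MB limit, the ceiling works out to 64 chunks plus a safety margin of 2. A quick sketch of the arithmetic, using the constants from the diff:

```go
package main

import "fmt"

const (
	uploadChunkSize   = 8 << 20 // 8 MB, as in the diff
	megaByte          = 1 << 20
	chunkSafetyMargin = 2
)

// maxChunksFor reproduces the bound used by HandleChunk and HandleFinish.
func maxChunksFor(maxMB int64) int {
	return int((maxMB*megaByte)/uploadChunkSize) + chunkSafetyMargin
}

func main() {
	// 512 MB cap / 8 MB chunks = 64, plus the margin of 2.
	fmt.Println(maxChunksFor(512)) // prints 66
}
```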
+40 -18  View File
@@ -9,10 +9,9 @@ import (
 func (app *App) Routes() *http.ServeMux {
     mux := http.NewServeMux()
-    fs := http.FileServer(http.Dir("./web/static"))
-    mux.Handle("GET /static/", http.StripPrefix("/static/", fs))
+    fileServer := http.FileServer(http.Dir("./web/static"))
+    mux.Handle("GET /static/", http.StripPrefix("/static/", fileServer))
     mux.HandleFunc("GET /{$}", app.HandleHome)
     mux.HandleFunc("POST /{$}", app.HandleUpload)
     mux.HandleFunc("POST /upload/chunk", app.HandleChunk)
@@ -22,37 +21,60 @@ func (app *App) Routes() *http.ServeMux {
     return mux
 }
-func (app *App) RespondWithLink(w http.ResponseWriter, r *http.Request, key []byte, originalName string) {
+func (app *App) RespondWithLink(writer http.ResponseWriter, request *http.Request, key []byte, originalName string) {
     keySlug := base64.RawURLEncoding.EncodeToString(key)
     ext := filepath.Ext(originalName)
-    link := fmt.Sprintf("%s/%s%s", r.Host, keySlug, ext)
-    if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
-        fmt.Fprintf(w, `
-        <div style="text-align: left;">
-            <div class="dim" style="margin-bottom: 8px;">Upload Complete:</div>
-            <div class="copy-box">
-                <input type="text" value="%s" id="share-url" readonly onclick="this.select()">
-                <button onclick="copyToClipboard(this)">Copy</button>
-            </div>
-            <button class="reset-btn" onclick="resetUI()">Upload another</button>`, link)
+    link := fmt.Sprintf("%s/%s%s", request.Host, keySlug, ext)
+    if request.Header.Get("X-Requested-With") == "XMLHttpRequest" {
+        html := `
+        <div class="result-container">
+            <div class="dim result-label">Upload Complete:</div>
+            <div class="copy-box">
+                <input type="text" value="%s" id="share-url" readonly onclick="this.select()">
+                <button onclick="copyToClipboard(this)">Copy</button>
+            </div>
+            <div class="reset-wrapper">
+                <button class="reset-btn" onclick="resetUI()">Upload another</button>
+            </div>
+        </div>`
+        if _, err := fmt.Fprintf(writer, html, link); err != nil {
+            app.Logger.Error("Failed to write response", "err", err)
+        }
         return
     }
     scheme := "https"
-    if r.TLS == nil {
+    if request.TLS == nil {
         scheme = "http"
     }
-    fmt.Fprintf(w, "%s://%s\n", scheme, link)
+    if _, err := fmt.Fprintf(writer, "%s://%s\n", scheme, link); err != nil {
+        app.Logger.Error("Failed to write response", "err", err)
+    }
 }
-func (app *App) SendError(w http.ResponseWriter, r *http.Request, code int) {
-    if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
-        w.WriteHeader(code)
-        fmt.Fprintf(w, `<div class="error-text">Error %d</div><button class="reset-btn" onclick="resetUI()">Try again</button>`, code)
+func (app *App) SendError(writer http.ResponseWriter, request *http.Request, code int) {
+    if request.Header.Get("X-Requested-With") == "XMLHttpRequest" {
+        writer.WriteHeader(code)
+        html := `
+        <div class="result-container">
+            <div class="error-text">Error %d</div>
+            <div class="reset-wrapper">
+                <button class="reset-btn" onclick="resetUI()">Try again</button>
+            </div>
+        </div>`
+        if _, err := fmt.Fprintf(writer, html, code); err != nil {
+            app.Logger.Error("Failed to write error response", "err", err)
+        }
         return
     }
-    http.Error(w, http.StatusText(code), code)
+    http.Error(writer, http.StatusText(code), code)
 }
+59 -20  View File
@@ -8,43 +8,82 @@ import (
     "time"
 )
+const (
+    cleanupInterval = 1 * time.Hour
+    tempExpiry      = 4 * time.Hour
+    minRetention    = 24 * time.Hour
+    maxRetention    = 365 * 24 * time.Hour
+    bytesInMB       = 1 << 20
+)
 func (app *App) StartCleanupTask(ctx context.Context) {
-    ticker := time.NewTicker(1 * time.Hour)
+    ticker := time.NewTicker(cleanupInterval)
     for {
         select {
         case <-ctx.Done():
+            ticker.Stop()
             return
         case <-ticker.C:
-            app.CleanDir(app.Conf.StorageDir, false)
-            app.CleanDir(filepath.Join(app.Conf.StorageDir, "tmp"), true)
+            app.CleanStorage(app.Conf.StorageDir)
+            app.CleanTemp(filepath.Join(app.Conf.StorageDir, "tmp"))
         }
     }
 }
-func (app *App) CleanDir(path string, isTmp bool) {
-    entries, _ := os.ReadDir(path)
-    for _, entry := range entries {
-        info, _ := entry.Info()
-        expiry := 4 * time.Hour
-        if !isTmp {
-            expiry = CalculateRetention(info.Size(), app.Conf.MaxMB)
-        }
-        if time.Since(info.ModTime()) > expiry {
-            os.RemoveAll(filepath.Join(path, entry.Name()))
-        }
-    }
-}
+func (app *App) CleanStorage(path string) {
+    entries, err := os.ReadDir(path)
+    if err != nil {
+        app.Logger.Error("Failed to read storage dir", "err", err)
+        return
+    }
+    for _, entry := range entries {
+        info, err := entry.Info()
+        if err != nil {
+            continue
+        }
+        expiry := CalculateRetention(info.Size(), app.Conf.MaxMB)
+        if time.Since(info.ModTime()) > expiry {
+            if err := os.RemoveAll(filepath.Join(path, entry.Name())); err != nil {
+                app.Logger.Error("Failed to remove expired file", "path", entry.Name(), "err", err)
+            }
+        }
+    }
+}
+func (app *App) CleanTemp(path string) {
+    entries, err := os.ReadDir(path)
+    if err != nil {
+        app.Logger.Error("Failed to read temp dir", "err", err)
+        return
+    }
+    for _, entry := range entries {
+        info, err := entry.Info()
+        if err != nil {
+            continue
+        }
+        if time.Since(info.ModTime()) > tempExpiry {
+            if err := os.RemoveAll(filepath.Join(path, entry.Name())); err != nil {
+                app.Logger.Error("Failed to remove expired temp file", "path", entry.Name(), "err", err)
+            }
+        }
+    }
+}
-func CalculateRetention(fileSize int64, maxMB int64) time.Duration {
-    const (
-        minAge = 24 * time.Hour
-        maxAge = 365 * 24 * time.Hour
-    )
-    ratio := math.Max(0, math.Min(1, float64(fileSize)/float64(maxMB<<20)))
-    retention := float64(maxAge) * math.Pow(1.0-ratio, 3)
-    if retention < float64(minAge) {
-        return minAge
-    }
+func CalculateRetention(fileSize, maxMB int64) time.Duration {
+    ratio := math.Max(0, math.Min(1, float64(fileSize)/float64(maxMB*bytesInMB)))
+    invRatio := 1.0 - ratio
+    retention := float64(maxRetention) * (invRatio * invRatio * invRatio)
+    if retention < float64(minRetention) {
+        return minRetention
+    }
     return time.Duration(retention)
 }
+37 -21
@@ -6,27 +6,34 @@ import (
 	"crypto/sha256"
 	"encoding/base64"
 	"encoding/binary"
+	"errors"
+	"fmt"
 	"io"
 )

 const (
 	GCMChunkSize = 64 * 1024
 	NonceSize    = 12
+	KeySize      = 16
+	IDSize       = 9
 )

-func DeriveKey(r io.Reader) ([]byte, error) {
-	h := sha256.New()
-	if _, err := io.Copy(h, r); err != nil {
-		return nil, err
+func DeriveKey(reader io.Reader) ([]byte, error) {
+	hasher := sha256.New()
+	if _, err := io.Copy(hasher, reader); err != nil {
+		return nil, fmt.Errorf("failed to copy to hasher: %w", err)
 	}
-	return h.Sum(nil)[:16], nil
+	return hasher.Sum(nil)[:KeySize], nil
 }

 func GetID(key []byte, ext string) string {
-	h := sha256.New()
-	h.Write(key)
-	h.Write([]byte(ext))
-	return base64.RawURLEncoding.EncodeToString(h.Sum(nil)[:9])
+	hasher := sha256.New()
+	hasher.Write(key)
+	hasher.Write([]byte(ext))
+	return base64.RawURLEncoding.EncodeToString(hasher.Sum(nil)[:IDSize])
 }
@@ -34,37 +41,46 @@ type GCMStreamer struct {
 }

 func NewGCMStreamer(key []byte) (*GCMStreamer, error) {
-	b, err := aes.NewCipher(key)
+	block, err := aes.NewCipher(key)
 	if err != nil {
-		return nil, err
+		return nil, fmt.Errorf("failed to create cipher: %w", err)
 	}
-	g, err := cipher.NewGCM(b)
+	gcm, err := cipher.NewGCM(block)
 	if err != nil {
-		return nil, err
+		return nil, fmt.Errorf("failed to create GCM: %w", err)
 	}
-	return &GCMStreamer{AEAD: g}, nil
+	return &GCMStreamer{AEAD: gcm}, nil
 }

 func (g *GCMStreamer) EncryptStream(dst io.Writer, src io.Reader) error {
 	buf := make([]byte, GCMChunkSize)
-	var chunkIdx uint64 = 0
+	var chunkIdx uint64
 	for {
-		n, err := io.ReadFull(src, buf)
-		if n > 0 {
+		bytesRead, err := io.ReadFull(src, buf)
+		if bytesRead > 0 {
 			nonce := make([]byte, NonceSize)
 			binary.BigEndian.PutUint64(nonce[4:], chunkIdx)
-			ciphertext := g.AEAD.Seal(nil, nonce, buf[:n], nil)
+			ciphertext := g.AEAD.Seal(nil, nonce, buf[:bytesRead], nil)
 			if _, werr := dst.Write(ciphertext); werr != nil {
-				return werr
+				return fmt.Errorf("failed to write ciphertext: %w", werr)
 			}
 			chunkIdx++
 		}
-		if err == io.EOF || err == io.ErrUnexpectedEOF {
+		if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
 			break
 		}
 		if err != nil {
-			return err
+			return fmt.Errorf("failed to read source: %w", err)
 		}
 	}
	return nil
}
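The chunked Seal loop pairs each 64 KiB chunk with a counter nonce (chunk index in the last 8 bytes of a 12-byte nonce), so any chunk can be decrypted independently. The round trip can be exercised in isolation; this sketch re-implements the loop under the same constants, with local stand-in names rather than the package's API:

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"io"
)

const (
	chunkSize = 64 * 1024
	nonceSize = 12
)

// encryptStream mirrors EncryptStream above: each 64 KiB chunk is sealed
// with a nonce whose last 8 bytes carry the chunk counter.
func encryptStream(aead cipher.AEAD, dst io.Writer, src io.Reader) error {
	buf := make([]byte, chunkSize)
	var chunkIdx uint64
	for {
		n, err := io.ReadFull(src, buf)
		if n > 0 {
			nonce := make([]byte, nonceSize)
			binary.BigEndian.PutUint64(nonce[4:], chunkIdx)
			if _, werr := dst.Write(aead.Seal(nil, nonce, buf[:n], nil)); werr != nil {
				return werr
			}
			chunkIdx++
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

// roundTripFirstChunk encrypts two chunks of data, then decrypts chunk 0
// by regenerating its counter nonce, and reports whether it matches.
func roundTripFirstChunk() bool {
	key := sha256.Sum256([]byte("demo"))
	block, _ := aes.NewCipher(key[:16]) // AES-128, matching KeySize = 16
	aead, _ := cipher.NewGCM(block)

	plain := bytes.Repeat([]byte("safebin "), 10000) // 80 KB: spans two chunks
	var sealed bytes.Buffer
	if err := encryptStream(aead, &sealed, bytes.NewReader(plain)); err != nil {
		return false
	}

	// Chunk 0's nonce is all zeros (counter 0 in the last 8 bytes).
	nonce := make([]byte, nonceSize)
	first := sealed.Bytes()[:chunkSize+aead.Overhead()]
	out, err := aead.Open(nil, nonce, first, nil)
	return err == nil && bytes.Equal(out, plain[:chunkSize])
}

func main() {
	fmt.Println(roundTripFirstChunk())
}
```

Deterministic counter nonces are safe here only because each key is derived from the file contents, so a given key never encrypts two different streams.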
+33 -18
@@ -4,34 +4,41 @@ import (
 	"crypto/cipher"
 	"encoding/binary"
 	"errors"
+	"fmt"
 	"io"
 )

+var ErrInvalidWhence = errors.New("invalid whence")
+var ErrNegativeBias = errors.New("negative bias")
+
 type Decryptor struct {
-	rs     io.ReadSeeker
+	readSeeker io.ReadSeeker
 	aead   cipher.AEAD
 	size   int64
 	offset int64
 }

-func NewDecryptor(rs io.ReadSeeker, aead cipher.AEAD, encryptedSize int64) *Decryptor {
+func NewDecryptor(readSeeker io.ReadSeeker, aead cipher.AEAD, encryptedSize int64) *Decryptor {
 	overhead := int64(aead.Overhead())
-	fullBlocks := encryptedSize / (GCMChunkSize + overhead)
-	remainder := encryptedSize % (GCMChunkSize + overhead)
-	plainSize := (fullBlocks * GCMChunkSize)
+	chunkWithOverhead := int64(GCMChunkSize) + overhead
+	fullBlocks := encryptedSize / chunkWithOverhead
+	remainder := encryptedSize % chunkWithOverhead
+	plainSize := fullBlocks * GCMChunkSize
 	if remainder > overhead {
 		plainSize += (remainder - overhead)
 	}
 	return &Decryptor{
-		rs:     rs,
+		readSeeker: readSeeker,
 		aead:   aead,
 		size:   plainSize,
-		offset: 0,
 	}
 }

-func (d *Decryptor) Read(p []byte) (int, error) {
+func (d *Decryptor) Read(buf []byte) (int, error) {
 	if d.offset >= d.size {
 		return 0, io.EOF
 	}
@@ -40,25 +47,29 @@ func (d *Decryptor) Read(buf []byte) (int, error) {
 	overhang := d.offset % GCMChunkSize
 	overhead := int64(d.aead.Overhead())
-	actualChunkSize := int64(GCMChunkSize + overhead)
+	actualChunkSize := int64(GCMChunkSize) + overhead

-	_, err := d.rs.Seek(chunkIdx*actualChunkSize, io.SeekStart)
+	_, err := d.readSeeker.Seek(chunkIdx*actualChunkSize, io.SeekStart)
 	if err != nil {
-		return 0, err
+		return 0, fmt.Errorf("failed to seek: %w", err)
 	}

 	encrypted := make([]byte, actualChunkSize)
-	n, err := io.ReadFull(d.rs, encrypted)
-	if err != nil && err != io.ErrUnexpectedEOF {
-		return 0, err
+	bytesRead, err := io.ReadFull(d.readSeeker, encrypted)
+	if err != nil && !errors.Is(err, io.ErrUnexpectedEOF) {
+		return 0, fmt.Errorf("failed to read encrypted data: %w", err)
 	}

 	nonce := make([]byte, NonceSize)
+	if chunkIdx < 0 {
+		return 0, fmt.Errorf("invalid chunk index")
+	}
 	binary.BigEndian.PutUint64(nonce[4:], uint64(chunkIdx))
-	plaintext, err := d.aead.Open(nil, nonce, encrypted[:n], nil)
+	plaintext, err := d.aead.Open(nil, nonce, encrypted[:bytesRead], nil)
 	if err != nil {
-		return 0, err
+		return 0, fmt.Errorf("failed to decrypt: %w", err)
 	}

 	if overhang >= int64(len(plaintext)) {
@@ -66,7 +77,7 @@ func (d *Decryptor) Read(buf []byte) (int, error) {
 	}

 	available := plaintext[overhang:]
-	nCopied := copy(p, available)
+	nCopied := copy(buf, available)
 	d.offset += int64(nCopied)

 	return nCopied, nil
@@ -74,6 +85,7 @@ func (d *Decryptor) Read(buf []byte) (int, error) {
 func (d *Decryptor) Seek(offset int64, whence int) (int64, error) {
 	var abs int64
+
 	switch whence {
 	case io.SeekStart:
 		abs = offset
@@ -82,11 +94,14 @@ func (d *Decryptor) Seek(offset int64, whence int) (int64, error) {
 	case io.SeekEnd:
 		abs = d.size + offset
 	default:
-		return 0, errors.New("invalid whence")
+		return 0, ErrInvalidWhence
 	}
+
 	if abs < 0 {
-		return 0, errors.New("negative bias")
+		return 0, ErrNegativeBias
 	}
+
	d.offset = abs
	return abs, nil
}
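NewDecryptor's size arithmetic recovers the plaintext length from the stored length: every full (chunk + tag) block contributes one chunk of plaintext, and a trailing remainder contributes its bytes minus one tag. A small sanity-check sketch, assuming AES-GCM's standard 16-byte overhead:

```go
package main

import "fmt"

const (
	gcmChunkSize = 64 * 1024
	gcmOverhead  = 16 // AES-GCM authentication tag size
)

// plainSize mirrors NewDecryptor's math: strip one 16-byte tag per full
// (chunk + tag) block, plus one more for any trailing partial chunk.
func plainSize(encryptedSize int64) int64 {
	chunkWithOverhead := int64(gcmChunkSize) + gcmOverhead
	fullBlocks := encryptedSize / chunkWithOverhead
	remainder := encryptedSize % chunkWithOverhead
	size := fullBlocks * gcmChunkSize
	if remainder > gcmOverhead {
		size += remainder - gcmOverhead
	}
	return size
}

func main() {
	fmt.Println(plainSize(65552)) // exactly one sealed chunk: 65536 plaintext bytes
	fmt.Println(plainSize(65652)) // one full chunk plus a 100-byte sealed remainder (84 plaintext bytes)
}
```

The `remainder > overhead` guard matters: a remainder of 16 bytes or fewer cannot contain plaintext, only (part of) a tag.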
+22 -6
@@ -2,27 +2,39 @@ package main

 import (
 	"context"
+	"errors"
 	"fmt"
 	"log/slog"
 	"net/http"
 	"os"
 	"os/signal"
+	"path/filepath"
 	"syscall"
 	"time"

 	"github.com/skidoodle/safebin/internal/app"
 )

+const (
+	permUserRWX     = 0o700
+	serverTimeout   = 10 * time.Minute
+	shutdownTimeout = 10 * time.Second
+)
+
 func main() {
 	cfg := app.LoadConfig()
-	logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelDebug}))
+	logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
+		Level:     slog.LevelDebug,
+		AddSource: true,
+	}))

 	logger.Info("Initializing Safebin Server",
 		"storage_dir", cfg.StorageDir,
 		"max_file_size", fmt.Sprintf("%dMB", cfg.MaxMB),
 	)

-	if err := os.MkdirAll(fmt.Sprintf("%s/tmp", cfg.StorageDir), 0700); err != nil {
+	tmpDir := filepath.Join(cfg.StorageDir, "tmp")
+	if err := os.MkdirAll(tmpDir, permUserRWX); err != nil {
 		logger.Error("Failed to initialize storage directory", "err", err)
 		os.Exit(1)
 	}
@@ -41,13 +53,15 @@ func main() {
 	srv := &http.Server{
 		Addr:         cfg.Addr,
 		Handler:      application.Routes(),
-		ReadTimeout:  10 * time.Minute,
-		WriteTimeout: 10 * time.Minute,
+		ReadTimeout:  serverTimeout,
+		WriteTimeout: serverTimeout,
+		IdleTimeout:  serverTimeout,
 	}

 	go func() {
 		application.Logger.Info("Server is ready and listening", "addr", cfg.Addr)
-		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
+		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
 			application.Logger.Error("Server failed to start", "err", err)
 			os.Exit(1)
 		}
@@ -56,10 +70,12 @@ func main() {
 	<-ctx.Done()
 	application.Logger.Info("Shutting down gracefully...")

-	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+	shutdownCtx, cancel := context.WithTimeout(context.Background(), shutdownTimeout)
 	defer cancel()

 	if err := srv.Shutdown(shutdownCtx); err != nil {
 		application.Logger.Error("Forced shutdown", "err", err)
 	}
+
	application.Logger.Info("Server stopped")
}
+142 -5
@@ -20,7 +20,7 @@ body {
 .container {
 	width: 100%;
-	max-width: 600px;
+	max-width: 800px;
 	padding: 20px;
 }
@@ -28,16 +28,33 @@ body {
 	margin-bottom: 30px;
 	border-left: 3px solid var(--accent);
 	padding-left: 16px;
+	display: flex;
+	justify-content: space-between;
+	align-items: center;
+}
+
+.header-title {
+	margin: 0;
+	color: var(--header-white);
 }

 .upload-area {
 	border: 2px dashed var(--border);
 	border-radius: 12px;
-	padding: 60px 20px;
+	padding: 20px;
 	text-align: center;
 	cursor: pointer;
 	background: #161b22;
-	transition: 0.2s;
+	transition:
+		border-color 0.2s,
+		background 0.2s;
+	height: 220px;
+	display: flex;
+	flex-direction: column;
+	justify-content: center;
+	align-items: center;
+	box-sizing: border-box;
+	overflow: hidden;
 }

 .upload-area:hover,
@@ -46,6 +63,17 @@ body {
 	background: #1c2128;
 }

+.upload-icon {
+	font-size: 32px;
+	color: var(--accent);
+	margin-bottom: 8px;
+}
+
+.upload-text {
+	font-weight: 500;
+	color: var(--header-white);
+}
+
 .progress-bar {
 	height: 6px;
 	background: var(--border);
@@ -53,6 +81,11 @@ body {
 	margin: 25px 0;
 	overflow: hidden;
 	display: none;
+	width: 95%;
+}
+
+.progress-bar.visible {
+	display: block;
 }

 .progress-fill {
@@ -62,10 +95,37 @@ body {
 	transition: width 0.3s;
 }

+#busy-state {
+	width: 100%;
+	display: flex;
+	flex-direction: column;
+	align-items: center;
+}
+
+#result-state {
+	width: 100%;
+	display: flex;
+	justify-content: center;
+}
+
+.result-container {
+	width: 100%;
+	max-width: 700px;
+	display: flex;
+	flex-direction: column;
+	padding: 0 20px;
+	box-sizing: border-box;
+}
+
+.result-label {
+	text-align: left;
+	margin-bottom: 8px;
+}
+
 .copy-box {
 	display: flex;
-	margin-top: 20px;
 	gap: 8px;
+	width: 100%;
 }

 input[type="text"] {
@@ -76,7 +136,10 @@ input[type="text"] {
 	padding: 12px;
 	border-radius: 6px;
 	font-family: monospace;
+	font-size: 14px;
 	outline: none;
+	min-width: 0;
+	width: 100%;
 }

 button {
@@ -87,16 +150,27 @@ button {
 	border-radius: 6px;
 	cursor: pointer;
 	font-weight: 600;
+	white-space: nowrap;
+}
+
+.reset-wrapper {
+	margin-top: 20px;
+	display: flex;
+	justify-content: center;
 }

 .reset-btn {
 	background: transparent;
 	color: var(--fg);
 	text-decoration: underline;
-	margin-top: 20px;
 	border: none;
 	cursor: pointer;
 	opacity: 0.7;
+	font-size: 14px;
+}
+
+.reset-btn:hover {
+	opacity: 1;
 }

 .dim {
@@ -108,3 +182,66 @@ button {
 	color: #f85149;
 	margin-bottom: 10px;
 }
+
+.github-btn {
+	display: flex;
+	align-items: center;
+	gap: 8px;
+	padding: 6px 12px;
+	background: #21262d;
+	border: 1px solid var(--border);
+	border-radius: 6px;
+	color: var(--header-white);
+	text-decoration: none;
+	font-size: 13px;
+	font-weight: 500;
+	transition: 0.2s;
+}
+
+.github-btn:hover {
+	background: #30363d;
+	border-color: #8b949e;
+}
+
+.github-btn svg {
+	opacity: 0.9;
+}
+
+.cli-section {
+	margin-top: 40px;
+	padding-top: 24px;
+	border-top: 1px solid var(--border);
+}
+
+.cli-label {
+	text-transform: uppercase;
+	font-size: 11px;
+	font-weight: 700;
+	letter-spacing: 1px;
+}
+
+.cli-pre {
+	background: #161b22;
+	padding: 16px;
+	border-radius: 8px;
+	font-size: 13px;
+	overflow-x: auto;
+	border: 1px solid var(--border);
+}
+
+.status-text {
+	font-weight: 500;
+}
+
+.hidden {
+	display: none !important;
+}
+
+@media (max-width: 400px) {
+	.github-btn span {
+		display: none;
+	}
+
+	.github-btn {
+		padding: 6px;
+	}
+}
+29 -6
@@ -4,7 +4,7 @@ const fileInput = $("file-input");

 if (dropZone) {
 	dropZone.onclick = () => {
-		if ($("idle-state").style.display !== "none") fileInput.click();
+		if (!$("idle-state").classList.contains("hidden")) fileInput.click();
 	};

 	fileInput.onchange = () => {
@@ -32,8 +32,23 @@ if (dropZone) {
 }

 async function handleUpload(file) {
-	$("idle-state").style.display = "none";
-	$("busy-state").style.display = "block";
+	const maxMB = parseInt(dropZone.dataset.maxMb);
+	if (file.size > maxMB * 1024 * 1024) {
+		$("idle-state").classList.add("hidden");
+		$("result-state").classList.remove("hidden");
+		$("result-state").innerHTML = `
+			<div class="result-container">
+				<div class="error-text">File too large (Max ${maxMB}MB)</div>
+				<div class="reset-wrapper">
+					<button class="reset-btn" onclick="resetUI()">Try again</button>
+				</div>
+			</div>`;
+		return;
+	}
+
+	$("idle-state").classList.add("hidden");
+	$("busy-state").classList.remove("hidden");
+	$("p-bar-container").classList.add("visible");

 	const uploadID = Math.random().toString(36).substring(2, 15);
 	const chunkSize = 1024 * 1024 * 8;
@@ -61,11 +76,19 @@ async function handleUpload(file) {
 			headers: { "X-Requested-With": "XMLHttpRequest" },
 		});

-		$("busy-state").style.display = "none";
+		$("busy-state").classList.add("hidden");
+		$("result-state").classList.remove("hidden");
 		$("result-state").innerHTML = await res.text();
 	} catch (e) {
-		$("busy-state").style.display = "none";
-		$("result-state").innerHTML = `<div class="error-text">Upload Failed</div><button class="reset-btn" onclick="resetUI()">Try again</button>`;
+		$("busy-state").classList.add("hidden");
+		$("result-state").classList.remove("hidden");
+		$("result-state").innerHTML = `
+			<div class="result-container">
+				<div class="error-text">Upload Failed</div>
+				<div class="reset-wrapper">
+					<button class="reset-btn" onclick="resetUI()">Try again</button>
+				</div>
+			</div>`;
	}
}
+15 -10
@@ -10,21 +10,26 @@
 <body>
 	<div class="container">
 		<header class="header">
-			<h2 style="margin: 0; color: var(--header-white)">safebin</h2>
-			<div class="dim">Encrypted Temporary File Storage</div>
+			<div>
+				<h2 class="header-title">safebin</h2>
+				<div class="dim">Encrypted Temporary File Storage</div>
+			</div>
+			<a href="https://github.com/skidoodle/safebin" class="github-btn" target="_blank" rel="noopener noreferrer">
+				<svg height="16" width="16" viewBox="0 0 16 16" fill="currentColor">
+					<path
+						d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"
+					></path>
+				</svg>
+				<span>GitHub</span>
+			</a>
 		</header>

 		{{template "content" .}}

-		<section style="margin-top: 40px; padding-top: 24px; border-top: 1px solid var(--border)">
-			<div class="dim" style="text-transform: uppercase; font-size: 11px; font-weight: 700; letter-spacing: 1px">CLI Usage</div>
-			<pre style="background: #161b22; padding: 16px; border-radius: 8px; font-size: 13px; overflow-x: auto; border: 1px solid var(--border)">
-curl -F file=@yourfile {{.Host}}</pre
-			>
-		</section>
+		<section class="cli-section">
+			<div class="dim cli-label">CLI Usage</div>
+			<pre class="cli-pre">curl -F file=@yourfile {{.Host}}</pre>
+		</section>
 	</div>

-	<input type="file" id="file-input" style="display: none" />
+	<input type="file" id="file-input" class="hidden" />
	<script src="/static/js/app.js"></script>
</body>
</html>
+7 -9
@@ -1,18 +1,16 @@
 {{define "content"}}
-<main class="upload-area" id="drop-zone">
+<main class="upload-area" id="drop-zone" data-max-mb="{{.MaxMB}}">
 	<div id="idle-state">
-		<div style="font-size: 32px; color: var(--accent)"></div>
-		<div style="font-weight: 500; color: var(--header-white)">Click or drag to upload</div>
+		<div class="upload-icon"></div>
+		<div class="upload-text">Click or drag to upload</div>
 		<div class="dim">Max size: {{.MaxMB}}MB</div>
 	</div>

-	<div id="busy-state" style="display: none">
-		<div id="status-msg" style="font-weight: 500">Uploading...</div>
-		<div class="progress-bar" id="p-bar-container" style="display: block">
+	<div id="busy-state" class="hidden">
+		<div id="status-msg" class="status-text">Uploading...</div>
+		<div class="progress-bar" id="p-bar-container">
 			<div class="progress-fill" id="p-fill"></div>
 		</div>
 	</div>

-	<div id="result-state"></div>
+	<div id="result-state" class="hidden"></div>
</main>
{{end}}