Ch…Ch…Ch…Changes; Cleaning Out The Garage
Hey Drop fam! It’s been quite the hiatus this time ’round. The prep for (and travel to) our talk at [un]prompted took quite a lot out of me, especially the part where I was seated next to a gentleman on the 6+ hour flight out who was in the throes of some plague. Despite masking the entire time and being wary of touching, well, anything on the plane, I did come back with a touch of something that made the remaining weeks at my now-former employer that much more grueling.
I send out today’s Bonus Drop also just starting to get over norovirus (this thing is super bad, folks…try not to get it…I’m pretty sure I know the one grocery store run where I failed to wash my hands and contracted it). I’ve been penning it off-and-on (between sleeps this week), and it is mostly a singular topic: a better way to do local “S3” than the Garage, with some container and WireGuard (via Tailscale) crunchy goodness baked in.
But, first, y’all are special folks, so you get to hear about the new gig before the official drop at 0900 ET on Monday.
Ch…Ch…Ch…Changes
I’ve spent the weekend unemployed! My former work-home was a solid run for four years, and I will dearly miss my team and many of the folks still cranking away at “intercoursing the noise”.
On Monday, I start at Censys as a Distinguished Engineer, working with folks to build a team and craft the most elegant and comprehensive self-sourced threat intelligence “knowledge fabric” ever created by non-gov entities. Think of it as an ever-expanding temporal knowledge graph of the internet with more facets than you can possibly imagine.
As such, Drops are back, but will be intermittent, both as I continue to recover from this plague and get settled into the new digs.
Cleaning Out The Garage

A while ago, we covered the Garage, which is/was a slightly over-complicated way to have S3 object storage at home on a plain-ol’ filesystem. I never got to use the setup much (old-job priorities, and such), and was really not thrilled with it after even light poking over a few weeks.
Then I came across the Versity S3 Gateway (VersityGW). It’s an [Apache 2 licensed] FOSS component of Versity’s commercial offering, but it’s stupid easy to get up and running (single Go binary) on a basic Linux filesystem, provided you are fine with ports and plain HTTP. If you are, stop reading, go to the GitHub, and have at thee (though I do show an example, below, of using it with DuckDB that might be useful). I was not, so here’s how to get it working with Docker (Podman works fine, too), fronted by Tailscale TLS, in about fifteen minutes.
If you’re still here, that means you have files on a Linux box. You want S3 access to them — from your laptop, from a script, from anywhere on your tailnet — with real TLS and a stable hostname. No VPN port forwarding, no self-signed cert warnings, no exposing services to the internet, no icky ports in client config files.
What you’ll end up with:
- `https://s3.your-tailnet.ts.net` – S3 API with valid TLS, accessible from any device on your tailnet (the `s3.` bit is whatev you want; mine is `versity`.)
- `https://s3.your-tailnet.ts.net/ui/` – Web management UI at a path prefix on the same port (versity by default hates 2-letter bucket names, so this is fine)
- S3 signature authentication still enforced (tailnet membership is transport auth, not application auth)
- Docker Compose stack that survives reboots
- A pattern you can repeat for any other HTTP service you want on your tailnet
What you’ll need:
- A Linux server (your “home base”) with Docker and Docker Compose installed (again, Podman folks go crazy, too)
- A Tailscale account with a tailnet configured
- A directory on the server with data you want to serve as S3 buckets
- The `aws` CLI installed somewhere for testing
There are three moving parts, two containers and some Tailscale magic:
- VersityGW – an S3 gateway that translates S3 API calls into filesystem operations. Subdirectories of a root directory become buckets; files become objects. It’s stateless, so you can run multiple instances behind a load balancer if you ever need to.
- caddy-tailscale – a Caddy web server with Tailscale’s `tsnet` library baked in. It handles TLS termination using Tailscale’s built-in certificate authority and proxies all requests to VersityGW over plain HTTP inside the Docker network.
- Tailscale OAuth – the caddy-tailscale container authenticates to your tailnet using an OAuth client credential, not a user login. This gives it its own identity (separate from your server’s host Tailscale install) and auto-provisions a TLS certificate.
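Before any containers enter the picture, the directory-to-bucket mapping is easy to see locally. A tiny sketch (the `/tmp/s3gw-demo` path is made up for illustration; it just mimics the root directory VersityGW would serve):

```shell
# Each top-level subdirectory of the gateway root becomes a bucket;
# files inside become objects. Hypothetical demo layout:
ROOT=/tmp/s3gw-demo
mkdir -p "$ROOT/backups" "$ROOT/data"
echo "hello from the tailnet" > "$ROOT/data/test.txt"

# Pointing versitygw's posix backend at $ROOT would expose buckets
# "backups" and "data", with test.txt as an object in "data".
find "$ROOT" -type f
```

Nothing magic: the gateway is just translating S3 verbs onto this layout.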
This is the general concept:
```
s3.your-tailnet.ts.net  (Tailscale TLS, automatic)
          |
[caddy-tailscale container]
  - terminates TLS
  - reverse_proxy to versitygw:7070
          |
[versitygw container]
  - versity/versitygw:latest
  - port 7070 (S3 API + WebUI)
  - filesystem backend: your data directory
```
The reason why this works is that VersityGW has a --webui-s3-prefix flag that mounts the WebUI under a path prefix on the S3 port itself. This means Caddy needs only one reverse_proxy directive. No path-based routing, no dual-upstream headaches, no S3 bucket name colliding with your WebUI path.
We first need to generate a Tailscale OAuth client key. The caddy-tailscale container needs an OAuth client credential to authenticate to your tailnet and claim a hostname. This is not a regular auth key. It’s a client credential that can provision devices with a specific ACL tag.
- Go to your Tailscale admin console
- Click Generate OAuth client
- Give it a description (e.g., “Docker container services”)
- Under Grant tag, select or create a tag like `tag:container`
- Copy the client secret – it starts with `tskey-client-`
You’ll use (or at least can use) this same key for every caddy-tailscale container you deploy. One key, many services.
ACL configuration: Make sure your Tailscale ACLs allow devices tagged tag:container to be reached. If you haven’t restricted tailnet access, all members can reach tagged devices by default. If you have custom ACLs, add an allow rule for tag:container destinations:
```json
{
  "action": "accept",
  "src": ["autogroup:member"],
  "dst": ["tag:container:*"]
}
```
Next, we’ll create the project directory. Choose a location on your server. I keep mine next to the data:
```shell
mkdir -p /data/s3gw/docker/ts-state
chown -R $(whoami) /data/s3gw/docker
```
The ts-state/ directory persists the container’s Tailscale identity across restarts. Without it, the container would re-authenticate on every start and potentially get a new IP.
We now need a Caddyfile that tells Caddy which Tailscale tag to claim and where to proxy:
```
{
  tailscale {
    tags tag:container
  }
}

s3.your-tailnet.ts.net {
  bind tailscale/s3
  reverse_proxy versitygw:7070
}
```
Replace your-tailnet with your actual tailnet name and s3 with the hostname you want. The bind tailscale/s3 line tells caddy-tailscale to register as hostname s3 on your tailnet.
The tags tag:container block must match the tag you granted to your OAuth client way back at the beginning.
Despite a “recent” Drop discussing the modern encrypted replacements for it, we’ll use a trusty ol’ .env file for seekrits:
```shell
cat > /data/s3gw/docker/.env << 'EOF'
# Tailscale OAuth client credential for caddy-tailscale
TS_AUTHKEY=tskey-client-YOUR-KEY-HERE

# VersityGW root S3 credentials
ROOT_ACCESS_KEY=your-access-key
ROOT_SECRET_KEY=your-secret-key
EOF

chmod 600 /data/s3gw/docker/.env
```
As you likely know, the chmod 600 matters. This file contains credentials that can add devices to your tailnet.
Pick strong values for ROOT_ACCESS_KEY and ROOT_SECRET_KEY. These are your S3 root credentials. Any S3 client connecting to this gateway will need them.
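One way to mint strong values is with `openssl` (my own sketch, not from the official docs; any high-entropy strings will do, and the length choices here are arbitrary):

```shell
# Generate random S3 root credentials (assumes openssl is installed).
# Access key: "AK" + 18 uppercase hex chars; secret: 40 base64 chars.
ROOT_ACCESS_KEY="AK$(openssl rand -hex 9 | tr 'a-f' 'A-F')"
ROOT_SECRET_KEY="$(openssl rand -base64 30)"

echo "ROOT_ACCESS_KEY=$ROOT_ACCESS_KEY"
echo "ROOT_SECRET_KEY=$ROOT_SECRET_KEY"
```

Paste the output into the `.env` file above.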
Here’s the `docker-compose.yml`:
```yaml
name: s3gw

services:
  versitygw:
    image: versity/versitygw:latest
    container_name: versitygw
    restart: unless-stopped
    environment:
      - ROOT_ACCESS_KEY=${ROOT_ACCESS_KEY}
      - ROOT_SECRET_KEY=${ROOT_SECRET_KEY}
      - VGW_WEBUI_S3_PREFIX=/ui
      - VGW_WEBUI_NO_TLS=true
      - VGW_WEBUI_GATEWAYS=https://s3.your-tailnet.ts.net
      - VGW_IAM_DIR=/data/gw
      - VGW_HEALTH=/health
    command:
      - --port
      - :7070
      - posix
      - --versioning-dir
      - /data/vers
      - /data/gw
    volumes:
      - /data/s3gw/gw:/data/gw      # your bucket data
      - /data/s3gw/vers:/data/vers  # versioning storage
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:7070/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - s3gw-net

  caddy:
    container_name: versitygw-caddy
    image: ghcr.io/tailscale/caddy-tailscale:main
    restart: unless-stopped
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./ts-state:/config
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
    depends_on:
      - versitygw
    networks:
      - s3gw-net

networks:
  s3gw-net:
    driver: bridge
```
Let me walk through the non-obvious bits.
VersityGW configuration via environment variables: The Docker image’s entrypoint calls the versitygw binary directly. You can pass CLI flags via command:, but many flags also have environment variable equivalents ($ROOT_ACCESS_KEY, $VGW_WEBUI_S3_PREFIX, etc.). I use env vars for credentials and feature flags, and command: only for the port and the posix subcommand with its arguments. This keeps secrets out of the compose file and makes the configuration more readable.
VGW_WEBUI_S3_PREFIX=/ui: This is the trick. Instead of running the WebUI on its own port and having Caddy route between two upstreams, this flag mounts the WebUI under /ui/ on the S3 port (7070). Caddy proxies everything to one port, and VersityGW handles the routing internally. No bucket named ui can exist, but that’s a small constraint.
VGW_WEBUI_NO_TLS=true: Caddy handles TLS. VersityGW’s WebUI doesn’t need its own.
VGW_WEBUI_GATEWAYS: The WebUI auto-detects its gateway URL from the listen address (which would be http://localhost:7070 inside the container). Override it so the WebUI tells clients to use the tailnet URL instead.
VGW_IAM_DIR=/data/gw: VersityGW stores IAM data (user accounts beyond the root key) in a JSON file in this directory. In my setup, it’s the same directory as the bucket data, so the bind mount covers both.
The posix subcommand: posix --versioning-dir /data/vers /data/gw tells VersityGW to use a POSIX filesystem backend with /data/gw as the root (subdirectories become buckets) and /data/vers for storing old object versions when versioning is enabled.
The bridge network: Both containers sit on s3gw-net, which lets Caddy resolve versitygw as a hostname. No ports are exposed to the host – access is only through the Tailscale tunnel.
Now, fire it up!
```shell
cd /data/s3gw/docker
docker compose pull
docker compose up -d
```
Wait about 30 seconds for the health check and Tailscale authentication. Then verify:
```shell
# Check containers are running
docker ps --filter name=versitygw --format '{{.Names}}\t{{.Status}}'

# Check Tailscale registration
tailscale status | grep s3

# Check health endpoint
curl -sk https://s3.your-tailnet.ts.net/health
```
You should see:

- Both containers running, versitygw marked healthy
- An `s3` entry in `tailscale status` with a Tailscale IP
- `OK` from the health endpoint
Give it a quick test with AWS CLI tooling:
```shell
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key

aws s3 ls --endpoint-url https://s3.your-tailnet.ts.net --no-verify-ssl
```
You need --no-verify-ssl if your system doesn’t trust Tailscale’s certificate authority. Tailscale-aware clients (anything running on a device with Tailscale installed) handle this automatically.
If your data directory has subdirectories, you’ll see them listed as buckets:
```
2026-04-11 10:00:00 backups
2026-04-12 08:30:00 data
```
Upload a file to verify writes work too:
```shell
echo "hello from the tailnet" | aws s3 cp - s3://data/test.txt \
  --endpoint-url https://s3.your-tailnet.ts.net --no-verify-ssl
```
Then check it’s there on the filesystem:
```shell
ls /data/s3gw/gw/data/test.txt
cat /data/s3gw/gw/data/test.txt
```
The WebUI is (again) at: https://s3.your-tailnet.ts.net/ui/.
For DuckDB folks, you can directly use this setup with one tweak (the URL_STYLE):
```sql
CREATE OR REPLACE SECRET secret (
  TYPE s3,
  PROVIDER credential_chain,
  URL_STYLE 'path',
  ENDPOINT 'versity.tail47d2.ts.net'
);

CREATE TABLE dropText AS FROM read_json('s3://obs/*.json');

FROM dropText
SELECT TO_TIMESTAMP((obs::JSON)[0][0]::DOUBLE)::TIMESTAMP ts
LIMIT 10;
```

```
┌─────────────────────┐
│         ts          │
│      timestamp      │
├─────────────────────┤
│ 2023-08-27 12:18:04 │
│ 2023-08-27 12:19:04 │
│ 2023-08-27 12:20:04 │
│ 2023-08-27 12:22:04 │
│ 2023-08-27 12:31:04 │
│ 2023-08-27 12:32:04 │
│ 2023-08-27 12:33:04 │
│ 2023-08-27 12:34:04 │
│ 2023-08-27 12:35:04 │
│ 2023-08-27 12:36:04 │
└─────────────────────┘
10 rows
```
This assumes you have your AWS config/credential files in place.
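If you’d rather not export environment variables every time, a dedicated AWS profile works too. This is my own setup sketch, not from the VersityGW docs: the `versity` profile name is arbitrary, and per-profile `endpoint_url` needs a reasonably recent AWS CLI (v2.13+); on older versions, keep passing `--endpoint-url`.

```ini
# ~/.aws/credentials
[versity]
aws_access_key_id = your-access-key
aws_secret_access_key = your-secret-key

# ~/.aws/config
[profile versity]
endpoint_url = https://s3.your-tailnet.ts.net
```

Then `aws s3 ls --profile versity` hits the gateway with no exports needed.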
That was…a lot! But it’s been a minute since the last Drop, and I wanted to leave y’all with something substantial out of the [new] gate.
FIN
Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:
- 🐘 Mastodon via `@dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev`
- 🦋 Bluesky via https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy
☮️