Migrating from Edge Cloud Computing to self-hosting with Caddy and Docker
More Privacy. More Control. Better Reliability.
In my quest to improve my privacy and control over the things I create and share, in the last few months I've achieved a couple of important milestones I'd had my sights on for a while:
- Having my own git server, for my personal and private repos/apps/configs.
- Deploying apps on my own server, simply.
The first was an important step toward the second: I didn't want to keep my personal server settings/configs and keys on GitHub or GitLab (somewhere I don't control), so I've always kept them backed up in an encrypted format on my home server and a couple of external disks (and in the cloud).
The main contenders were GitLab and Gitea, and Gitea won because it's much lighter, faster to run, and simpler. Additionally, I found its Actions easier to set up and get working (especially for some personal repos I migrated from GitHub).
After I got that going with, effectively, a couple of `docker-compose.yml` files (I already had one for my personal Nextcloud instance) and a `Caddyfile` (Caddy is so much better than nginx, and comes with automated certificate handling for SSL) to properly reverse proxy domains to the right Docker containers, I had a nice boilerplate for my apps server, too.
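To give an idea of what that boilerplate looks like, here's a sketch of a `Caddyfile` entry for the git server itself (the domain and port are placeholders, not my actual setup):

```
git.example.com {
	tls you@example.com
	# Gitea's HTTP port, as published by its docker-compose.yml (3001 is an assumption).
	reverse_proxy localhost:3001
}
```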
Since most of what I build personally nowadays is with Deno, I've had most of these apps deployed on Deno Deploy with a lot of ease. While I've found it a bit too unstable (more than a couple of times, apps stopped working because a new engine version was released that wasn't backwards-compatible) and its cold starts slower than I'd expect, my main reason for moving the apps onto my own server was to have more control over them. The fact they load much faster now is a bonus.
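For context, these apps are typically a single `main.ts` entry point serving HTTP. A minimal sketch of one (hypothetical, not one of my actual apps) could look like this:

```ts
// Minimal Deno HTTP app, of the kind main.ts refers to in the Dockerfile below.
import { serve } from "https://deno.land/std@0.190.0/http/server.ts";

// Listens on 8000, the port the Dockerfile exposes.
serve((_req) => new Response("Hello from my own server!"), { port: 8000 });
```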
So what I've had to do in these repos is basically create a `docker-compose.yml` file and a `Dockerfile`, so that `docker-compose up` brings the app up and running. Both are simple.

The `Dockerfile`:
```dockerfile
FROM denoland/deno:alpine-1.34.1

EXPOSE 8000
WORKDIR /app

# Prefer not to run as root.
USER deno

# These steps will be re-run upon each file change in your working directory:
ADD . /app

# Compile the main app so that it doesn't need to be compiled on each startup/entry.
RUN deno cache --reload main.ts

CMD ["run", "--allow-all", "main.ts"]
```
The `docker-compose.yml`:

```yaml
version: '3'
services:
  website:
    build: .
    restart: always
    ports:
      - "127.0.0.1:3000:8000"
```
Each app does require me to use a specific port, which I keep track of in a repo for my server settings/config/keys. I only bind it to localhost (`127.0.0.1`) so that the app can't be accessed without Caddy in front of it. I want that mostly for SSL, but also to potentially throttle/rate-limit more easily.
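A quick way to sanity-check that binding from the server itself (using the hypothetical port from the compose file above):

```sh
# Answers on the loopback interface...
curl -I http://127.0.0.1:3000
# ...but the port isn't published on any public interface, so from outside
# the only way in is through Caddy, which terminates TLS on 443.
```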
The server's `Caddyfile` entry for such a Deno app is simple too:

```
example.com {
	tls you@example.com
	encode gzip zstd

	reverse_proxy localhost:3000 {
		header_up Host {host}
		header_up X-Real-IP {remote}
		header_up X-Forwarded-Host {upstream_hostport}
	}
}
```
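After adding or changing an entry, Caddy picks it up with a zero-downtime reload (assuming the default config location):

```sh
# Tell the running Caddy instance to re-read its configuration.
caddy reload --config /etc/caddy/Caddyfile
```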
And that's it.
Now I have complete control over the Deno version/engine running the app, and can use dynamic imports or even `npm:` specifiers if I wished (though I don't).
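Purely as an illustration of what that unlocks (again, I don't use these myself), both of the following work when you control the runtime:

```ts
// An npm: specifier, pulling a package straight from npm.
import chalk from "npm:chalk@5";

// A dynamic import resolved at runtime (a data: URL here, to keep the sketch self-contained).
const mod = await import("data:text/typescript,export const answer = 42;");

console.log(chalk.green(`the answer is ${mod.answer}`));
```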
To have it deployed automatically on a push to the `main` branch, I have a `.gitea/workflows/deploy.yml` action, which basically SSHes into the server (I've created a key for it, shared with the apps server), pulls the `main` branch, and restarts the Docker container(s). It only looks complicated because of the SSH setup, but in reality it's simple:
```yaml
name: Deploy

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: https://gitea.com/actions/checkout@v3
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "$SSH_KEY" | tr -d '\r' > ~/.ssh/server.key
          chmod 600 ~/.ssh/server.key
          cat >>~/.ssh/config <<END
          Host server
            HostName server.example.com
            User root
            IdentityFile ~/.ssh/server.key
            StrictHostKeyChecking no
          END
          cat ~/.ssh/config
        env:
          SSH_KEY: ${{ secrets.APPS_SSH_KEY }}
      - name: Deploy via SSH
        run: ssh server 'cd apps/website && git pull origin main && git remote prune origin && docker system prune -f && docker-compose up -d --build && docker-compose ps && docker-compose logs'
```
That works really well for my use case (a few Deno apps, some with specific needs like Redis or Postgres).
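For the apps that do need Redis, for example, the pattern extends naturally: the extra dependency is just another service in the same `docker-compose.yml` (a sketch, with hypothetical names):

```yaml
version: '3'
services:
  website:
    build: .
    restart: always
    depends_on:
      - redis
    ports:
      - "127.0.0.1:3000:8000"
  redis:
    image: redis:7-alpine
    restart: always
```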
As for the apps server itself, it's on a DigitalOcean droplet, just because I trust their ability to keep things running (including backups), and it only holds "code", not private keys. Those live in my git server, which runs on my home server and is backed up, encrypted, to a couple of different physical locations.
Thank you for your attention and kindness. I really appreciate it!