# Backups & Disaster Recovery
This document outlines the backup schedule, what is being backed up, and the procedures for disaster recovery.
## Backup System

- **Script Location:** `/opt/scripts/backup.sh`
- **Schedule:** Runs as a cron job nightly at 3:00 AM server time (`0 3 * * *`).
- **Logs:** `/var/log/backup.log`
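For reference, a crontab entry matching this schedule could look like the sketch below; the redirection into the log file is an assumption based on the log path above, since the actual crontab line is not reproduced here.

```sh
# Hypothetical crontab entry (edit with `crontab -e` as root).
# Redirecting output to /var/log/backup.log is an assumption.
0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1
```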
## What It Does

- **Database Dumps**
  - Dumps the `directus` and `plausible` PostgreSQL databases.
  - Compresses (gzip) and encrypts (GPG) the dumps.
  - Uploads the resulting `.sql.gz.gpg` files to the `website2026` bucket in Backblaze B2.
  - You can list the backed-up files with `b2 ls b2://website2026`.
- **File Uploads (Assets)**
  - Uses `rclone` to sync the `directus_uploads` Docker volume (which contains all user-uploaded images and files) to the `uploads/` folder in the same Backblaze B2 bucket.
  - You can list these files with `rclone ls b2_backup:website2026/directus/uploads | grep -v "__"`.
- **Automatic Cleanup**
  - The script automatically deletes database backups older than 30 days from the B2 bucket.
  - `rclone sync` implicitly handles the cleanup of the uploads folder.
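For orientation, here is a minimal sketch of what the nightly script might look like, assuming the v4-style `b2` CLI syntax used elsewhere in this document, an `rclone` remote named `b2_backup`, a `GPG_PASSPHRASE` environment variable, and the default Docker volume path on the host; the real `/opt/scripts/backup.sh` may differ, and the 30-day retention logic is elided.

```sh
#!/usr/bin/env bash
# Hypothetical outline of /opt/scripts/backup.sh; paths, the volume
# location, and the passphrase handling are assumptions, not the
# production script.
set -euo pipefail

DATE=$(date +%F)
WORKDIR=$(mktemp -d)

for db in directus plausible; do
  # Dump, compress, and symmetrically encrypt each PostgreSQL database.
  docker exec postgres pg_dump -U postgres "$db" \
    | gzip \
    | gpg --symmetric --batch --passphrase "$GPG_PASSPHRASE" \
        -o "$WORKDIR/${db}-${DATE}.sql.gz.gpg"

  # Upload the encrypted dump to the website2026 bucket (assumed b2 CLI v4 syntax).
  b2 file upload website2026 "$WORKDIR/${db}-${DATE}.sql.gz.gpg" "${db}/${db}-${DATE}.sql.gz.gpg"
done

# Mirror the Directus uploads volume to the bucket.
# The host path of the Docker volume is an assumption.
rclone sync /var/lib/docker/volumes/directus_uploads/_data \
  b2_backup:website2026/directus/uploads

# Retention: database backups older than 30 days are removed from B2
# (cleanup logic elided in this sketch).

rm -rf "$WORKDIR"
```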
## Disaster Recovery Plan

This section details how to restore the entire system from scratch.
### Scenario 1: Restoring from a Backup on the Same Server

Use this procedure if you need to revert the system to a previous state using an existing backup.

- Stop containers:

  ```sh
  docker stop directus plausible clickhouse
  ```

- Drop and re-create the existing databases:

  ```sh
  docker exec -i postgres dropdb -U postgres --if-exists directus
  docker exec -i postgres createdb -U postgres directus
  docker exec -i postgres dropdb -U postgres --if-exists plausible
  docker exec -i postgres createdb -U postgres plausible
  ```
#### Restore Databases

- List the available backups in Backblaze B2 to find the filename you need (e.g., for the `directus` database):

  ```sh
  b2 ls b2://website2026/directus
  ```

- Download the desired backup files:

  ```sh
  # For Directus
  b2 file download b2://website2026/directus/FILENAME.sql.gz.gpg directus.gpg
  # For Plausible
  b2 file download b2://website2026/plausible/FILENAME.sql.gz.gpg plausible.gpg
  ```

- Decrypt, decompress, and import the data into the running `postgres` container:

  ```sh
  # For Directus
  gpg --decrypt --batch --passphrase "" directus.gpg | gunzip -c | docker exec -i postgres psql -U postgres -d directus
  # For Plausible
  gpg --decrypt --batch --passphrase "" plausible.gpg | gunzip -c | docker exec -i postgres psql -U postgres -d plausible
  ```
#### Restore Plausible Analytics Data

- List the available backups in Backblaze B2 to find the filename you need (for the ClickHouse data):

  ```sh
  b2 ls b2://website2026/clickhouse
  ```

- Download the desired backup file:

  ```sh
  b2 file download b2://website2026/clickhouse/FILENAME.tar.gz.gpg clickhouse.gpg
  ```

- Decrypt and extract to the volume (this wipes the current volume and unpacks the backup into it):

  ```sh
  gpg --decrypt --batch --passphrase "" clickhouse.gpg | \
    docker run --rm -i -v main_clickhouse_data:/data alpine sh -c "rm -rf /data/* && tar -xz -C /data"
  ```
#### Restore Uploaded Files (Assets)

- Use `rclone` to sync the backed-up files from Backblaze B2 back into the `directus_uploads` volume:

  ```sh
  docker run --rm --volumes-from directus -v /root/.config/rclone:/config/rclone rclone/rclone:latest sync b2_backup:website2026/directus/uploads /directus/uploads
  ```

- Clean up the local backup files:

  ```sh
  rm *.gpg
  ```

- Start containers:

  ```sh
  docker start directus plausible clickhouse
  ```
### Scenario 2: Full Rebuild on a New Server

This covers the case where the original server is completely lost.

- Spin up a new server (e.g., on DigitalOcean) with Docker installed.
- Clone the `websitebackend` Git repository.
- Re-create the `.env` files from your password manager (e.g., Bitwarden).
- Generate new Origin Certificates from Cloudflare and place them on the server.
- Install and authorize the `b2` and `rclone` CLI tools.
- Follow the restore steps from Scenario 1 to restore the databases and uploaded files.
- Use `rclone` to also sync the `clickhouse` data from B2 back into the `clickhouse_data` volume (see the sketch after this list).
- Start the full stack in Portainer.
- Update the Cloudflare DNS records to point to the new server’s IP address or Tunnel ID.
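As a rough illustration of that `rclone` step, the command below mirrors the pattern used for the uploads restore. The remote path `b2_backup:website2026/clickhouse`, the volume name (`clickhouse_data` vs. the `main_clickhouse_data` name used in Scenario 1), and the mount point are all assumptions; note also that Scenario 1 restores ClickHouse from an encrypted `.tar.gz.gpg` archive, so if the bucket only holds such archives, follow that procedure instead.

```sh
# Hypothetical restore of the ClickHouse data into its volume via rclone.
# Verify the remote path and volume name against the actual bucket layout
# and Compose file before running.
docker run --rm \
  -v clickhouse_data:/data \
  -v /root/.config/rclone:/config/rclone \
  rclone/rclone:latest sync b2_backup:website2026/clickhouse /data
```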