Backups & Disaster Recovery

This document outlines the backup schedule, what is being backed up, and the procedures for disaster recovery.

Backup System

  • Script Location: /opt/scripts/backup.sh
  • Schedule: Runs nightly as a cron job at 3:00 AM server time (0 3 * * *); see the sample crontab entry after this list.
  • Logs: /var/log/backup.log
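
A crontab entry matching the schedule above would look like the following sketch; the output redirection is an assumption, since the script may instead write to /var/log/backup.log itself.

  # Run the backup nightly at 3:00 AM; redirecting output to the log is assumed
  0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1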

The script performs three tasks; a sketch of the core commands follows this list.

  1. Database Dumps

    • Dumps the directus and plausible PostgreSQL databases.
    • Compresses (gzip) and encrypts (GPG) the dumps.
    • Uploads the resulting .sql.gz.gpg files to the website2026 bucket in Backblaze B2.
    • You can list the backed-up files with b2 ls b2://website2026.
  2. File Uploads (Assets)

    • Uses rclone to sync the directus_uploads Docker volume (which contains all user-uploaded images and files) to the uploads/ folder in the same Backblaze B2 bucket.
    • You can list these files with rclone ls b2_backup:website2026/directus/uploads | grep -v "__".
  3. Automatic Cleanup

    • The script automatically deletes database backups older than 30 days from the B2 bucket.
    • rclone sync mirrors the source, so files removed from the directus_uploads volume are likewise removed from the uploads/ folder in B2; no separate cleanup is needed.
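
For reference, the core of the backup flow likely resembles the sketch below. This is illustrative, not the actual script: the empty passphrase mirrors the restore commands later in this document, the b2_backup remote name and path layout are taken from the restore sections, and the loop structure and upload subcommand are assumptions.

  #!/bin/sh
  # Sketch of /opt/scripts/backup.sh -- illustrative, not the actual script.
  set -eu
  STAMP=$(date +%F)

  # 1. Dump, compress, encrypt, and upload each PostgreSQL database.
  for DB in directus plausible; do
    docker exec postgres pg_dump -U postgres "$DB" | gzip \
      | gpg --symmetric --batch --passphrase "" -o "${DB}-${STAMP}.sql.gz.gpg"
    b2 file upload website2026 "${DB}-${STAMP}.sql.gz.gpg" "${DB}/${DB}-${STAMP}.sql.gz.gpg"
    rm "${DB}-${STAMP}.sql.gz.gpg"
  done

  # 2. Mirror the Directus uploads volume into the bucket.
  docker run --rm --volumes-from directus \
    -v /root/.config/rclone:/config/rclone \
    rclone/rclone:latest sync /directus/uploads b2_backup:website2026/directus/uploads

  # 3. Deletion of dumps older than 30 days is omitted from this sketch.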

Disaster Recovery

This section details how to restore the entire system from scratch.

Scenario 1: Restoring from a Backup on the Same Server

Use this procedure if you need to revert the system to a previous state using an existing backup.

  • Stop the containers:
    docker stop directus plausible clickhouse
  • Drop and re-create the existing databases:
    docker exec -i postgres dropdb -U postgres --if-exists directus
    docker exec -i postgres createdb -U postgres directus
    docker exec -i postgres dropdb -U postgres --if-exists plausible
    docker exec -i postgres createdb -U postgres plausible

Restore Databases

  • List the available backups in Backblaze B2 to find the filenames you need:
    b2 ls b2://website2026/directus
    b2 ls b2://website2026/plausible
  • Download the desired backup files:
    # For Directus
    b2 file download b2://website2026/directus/FILENAME.sql.gz.gpg directus.gpg
    # For Plausible
    b2 file download b2://website2026/plausible/FILENAME.sql.gz.gpg plausible.gpg
  • Decrypt, decompress, and import the data into the running postgres container (the local filenames match the downloads above):
    # For Directus
    gpg --decrypt --batch --passphrase "" directus.gpg | gunzip -c | docker exec -i postgres psql -U postgres -d directus
    # For Plausible
    gpg --decrypt --batch --passphrase "" plausible.gpg | gunzip -c | docker exec -i postgres psql -U postgres -d plausible
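
To sanity-check an import, you can list the restored tables; this verification step is an addition, not part of the original procedure.

  # Both commands should print a non-empty table list
  docker exec -i postgres psql -U postgres -d directus -c "\dt"
  docker exec -i postgres psql -U postgres -d plausible -c "\dt"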

Restore Plausible Analytics Data

  • List the available ClickHouse backups in Backblaze B2 to find the filename you need:
    b2 ls b2://website2026/clickhouse
  • Download the desired backup file:
    b2 file download b2://website2026/clickhouse/FILENAME.tar.gz.gpg clickhouse.gpg
  • Decrypt and extract into the volume (this wipes the current contents of the volume and extracts the backup into it):
    gpg --decrypt --batch --passphrase "" clickhouse.gpg | \
    docker run --rm -i -v main_clickhouse_data:/data alpine sh -c "rm -rf /data/* && tar -xz -C /data"
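
To confirm the extraction, you can list the top level of the volume; expect ClickHouse's data directories (exact names vary by version). This check is an addition to the original steps.

  docker run --rm -v main_clickhouse_data:/data alpine ls /data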

Restore Uploaded Files (Assets)

  • Use rclone to sync the backed-up files from Backblaze B2 back into the directus_uploads volume:
    docker run --rm --volumes-from directus -v /root/.config/rclone:/config/rclone rclone/rclone:latest sync b2_backup:website2026/directus/uploads /directus/uploads
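
To spot-check the sync, list a few of the restored files inside the volume (this assumes, as the sync command above does, that the directus container mounts the volume at /directus/uploads):

  docker run --rm --volumes-from directus alpine ls /directus/uploads | head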

  • Clean up the local backup files:
    rm *.gpg
  • Start the containers:
    docker start directus plausible clickhouse
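
After restarting, it is worth confirming that the services came back up cleanly; this is a generic check rather than part of the original runbook.

  # Verify all containers are running, then scan recent logs for errors
  docker ps --format "table {{.Names}}\t{{.Status}}"
  docker logs --tail 20 directus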

Scenario 2: Restoring to a New Server

This covers the case where the original server is completely lost.

  1. Spin up a new server (e.g., on DigitalOcean) with Docker installed.
  2. Clone the websitebackend Git repository.
  3. Re-create the .env files from your password manager (e.g., Bitwarden).
  4. Generate new Origin Certificates from Cloudflare and place them on the server.
  5. Install and authorize the b2 and rclone CLI tools (see the sketch after this list).
  6. Follow the restore steps from Scenario 1 to restore the databases and uploaded files.
  7. Restore the ClickHouse data into the main_clickhouse_data volume by following the “Restore Plausible Analytics Data” steps from Scenario 1.
  8. Start the full stack in Portainer.
  9. Update the Cloudflare DNS records to point to the new server’s IP address or Tunnel ID.
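
For step 5, the authorization commands likely look like the sketch below. The b2 subcommand shown matches the b2 CLI v4 syntax used elsewhere in this document (older versions use b2 authorize-account), and the rclone remote must be named b2_backup to match the sync commands above.

  # Authorize the b2 CLI with the Backblaze application key
  b2 account authorize
  # Interactively re-create the "b2_backup" remote pointing at Backblaze B2
  rclone config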