workdock-platform/DEPLOYMENT.md
2026-03-30 13:08:20 +02:00

Deployment and CI/CD

Current deployment model

  • one private GitHub repository
  • develop deploys to the test server
  • main is reserved for production deployment
  • GitHub Actions uploads the repository contents to the server over SSH
  • the server does not need GitHub access to deploy

This is intentional. For a private repository, server-side git clone adds unnecessary credential management.

Branch strategy

  • develop: test deployment branch
  • main: production branch
  • feature branches: normal product work

Environments

Development / test

  • target server: 192.168.2.55
  • deployment path: /opt/workdock
  • stack file: docker-compose.prod.yml
  • env file on server: .env.test
  • current access URL: http://192.168.2.55:8088

Production

  • same deployment mechanism
  • usually a different server
  • env file on server: .env.prod
  • branch: main
  • should run behind real HTTPS
  • should keep DJANGO_DEBUG=0

Important design choice

The current test server is a LAN-only HTTP deployment.

Because the Django settings enforce secure-cookie checks when DEBUG=0, the test deployment uses:

  • DJANGO_DEBUG=1
  • RUN_DJANGO_CHECK=0

That is acceptable for this internal test box only.

Production must use:

  • DJANGO_DEBUG=0
  • DJANGO_SECURE_COOKIES=1
  • HTTPS
  • RUN_DJANGO_CHECK=1
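
These values can be enforced mechanically before a production deploy starts. The sketch below is illustrative: the function name `check_prod_env` and the idea of grepping the server env file are assumptions, not part of the repo's scripts.

```shell
# Hypothetical pre-deploy guard: refuse a production deploy unless the
# server env file pins the hardening values listed above.
check_prod_env() {
  local env_file="$1"
  local key
  for key in DJANGO_DEBUG=0 DJANGO_SECURE_COOKIES=1 DJANGO_SECURE_SSL_REDIRECT=1; do
    # -x requires the whole line to match, so DJANGO_DEBUG=10 would not pass.
    if ! grep -qx "$key" "$env_file"; then
      echo "refusing deploy: expected '$key' in $env_file" >&2
      return 1
    fi
  done
}
```

A production helper could call `check_prod_env .env.prod` before invoking deploy_stack.sh.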

Files used for deployment

What deploy_stack.sh does

The deployment script:

  1. validates the env file exists
  2. builds web, worker, and caddy
  3. starts db and redis
  4. initializes writable volume ownership for:
    • /app/media
    • /app/staticfiles
    • /app/backups
  5. runs:
    • migrate
    • bootstrap_initial_users
    • collectstatic
  6. optionally runs manage.py check
  7. starts:
    • web
    • worker
    • caddy
  8. waits for /healthz/
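
Step 8 amounts to a curl retry loop. This is a hedged sketch, not the actual contents of deploy_stack.sh; the function name and argument handling are illustrative.

```shell
# Hypothetical sketch of the final health-wait step, assuming curl is available.
wait_for_health() {
  local url="$1" tries="${2:-30}" i
  for ((i = 1; i <= tries; i++)); do
    # -f makes curl fail on HTTP error statuses, so only a healthy response counts.
    if curl -fsS -o /dev/null "$url"; then
      echo "healthy"
      return 0
    fi
    sleep 2
  done
  echo "health check failed after $tries attempts" >&2
  return 1
}
```

In practice the script would call something like `wait_for_health "$DEPLOY_HEALTH_URL"` after starting web, worker, and caddy, and fail the deployment if it returns non-zero.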

Proxmox / LXC requirement

This project is running in an Ubuntu CT on Proxmox, with Docker inside the CT.

For this to work, the CT needed Proxmox-side configuration in:

  • /etc/pve/lxc/<CTID>.conf

Required settings:

features: nesting=1,keyctl=1
lxc.apparmor.profile: unconfined
lxc.mount.entry: /dev/null sys/module/apparmor/parameters/enabled none bind 0 0

Then restart the CT:

pct restart <CTID>

Without this, Docker containers in the CT fail with:

open sysctl net.ipv4.ip_unprivileged_port_start ... permission denied

This is a Proxmox/LXC nested-Docker issue, not an application bug.

Server bootstrap

Run on the server once:

apt-get update
apt-get install -y ca-certificates curl gnupg git
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker

Server directory layout

Current test server path:

/opt/workdock

Important server-local files:

  • /opt/workdock/.env.test
  • later /opt/workdock/.env.prod

These env files are intentionally not uploaded from GitHub Actions.

Test env file

Create on the server:

cp .env.test.example .env.test

Current important values for the LAN test box:

DJANGO_DEBUG=1
DJANGO_ALLOWED_HOSTS=192.168.2.55,localhost,127.0.0.1
DJANGO_CSRF_TRUSTED_ORIGINS=http://192.168.2.55:8088
DJANGO_SECURE_COOKIES=0
DJANGO_SECURE_SSL_REDIRECT=0
APP_PORT=8088
SITE_ADDRESS=:80

Generate strong values for:

  • DJANGO_SECRET_KEY
  • POSTGRES_PASSWORD
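
One way to generate them; any strong random source works, `openssl` is just an example and the lengths are illustrative rather than project requirements:

```shell
# Generate candidate secret values on the server or locally.
DJANGO_SECRET_KEY=$(openssl rand -base64 48 | tr -d '\n=')
POSTGRES_PASSWORD=$(openssl rand -hex 24)
echo "DJANGO_SECRET_KEY is ${#DJANGO_SECRET_KEY} characters"
echo "POSTGRES_PASSWORD is ${#POSTGRES_PASSWORD} characters"
```

Paste the results into .env.test by hand; do not commit them anywhere.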

Production env file

Production should use:

DJANGO_DEBUG=0
DJANGO_SECURE_COOKIES=1
DJANGO_SECURE_SSL_REDIRECT=1

And a real HTTPS hostname in:

  • DJANGO_ALLOWED_HOSTS
  • DJANGO_CSRF_TRUSTED_ORIGINS
  • SITE_ADDRESS

Manual test deployment

For a LAN-only test server, this manual path is the recommended way to deploy.

One-command local deployment from your Mac

Use:

./scripts/deploy_test_from_mac.sh

What it does:

  1. requires the current branch to be develop
  2. fast-forwards from origin/develop
  3. verifies that the server env file exists before syncing
  4. syncs the repo to /opt/workdock via rsync
  5. runs the remote deployment script
  6. verifies the health endpoint
  7. prints the deployed commit and branch
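
Step 1, the branch guard, can be sketched like this; `require_branch` is an assumed name and the real helper may implement it differently:

```shell
# Hypothetical branch guard: abort unless the checkout is on the expected branch.
require_branch() {
  local want="$1" got
  got=$(git rev-parse --abbrev-ref HEAD)
  if [ "$got" != "$want" ]; then
    echo "refusing to deploy: on '$got', expected '$want'" >&2
    return 1
  fi
}
```

The test helper would call `require_branch develop`; the production helper, `require_branch main`.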

Important:

  • the helper preserves server-local env files:
    • .env.test
    • .env.prod
  • those files are not supposed to be replaced from your Mac checkout
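
The env-preserving sync boils down to rsync exclusions. The snippet below prints a representative command instead of running it; the exact flags and exclusion list in the real helper may differ:

```shell
# Representative sync command (printed only): server-local env files are
# excluded so a deploy can never clobber them.
cmd=(rsync -az --delete
  --exclude '.git/'
  --exclude '.env.test'
  --exclude '.env.prod'
  ./ root@192.168.2.55:/opt/workdock/)
printf '%s ' "${cmd[@]}"; echo
```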

Default assumptions:

  • target host: root@192.168.2.55
  • target path: /opt/workdock
  • env file: .env.test
  • health URL: http://192.168.2.55:8088/healthz/

Optional overrides:

DEPLOY_HOST=root@192.168.2.55 \
DEPLOY_PATH=/opt/workdock \
HEALTH_URL=http://192.168.2.55:8088/healthz/ \
./scripts/deploy_test_from_mac.sh

Manual production deployment

For production, use a dedicated helper instead of the test script.

One-command production deployment from your Mac

Use:

./scripts/deploy_prod_from_mac.sh

What it does:

  1. requires the current branch to be main
  2. fast-forwards from origin/main
  3. verifies that the server env file exists before syncing
  4. syncs the repo to the production path via rsync
  5. runs the remote deployment script with RUN_DJANGO_CHECK=1
  6. verifies the production health endpoint
  7. prints the deployed commit and branch

Important:

  • the production helper preserves server-local env files:
    • .env.test
    • .env.prod
  • do not use the test helper for production

Default assumptions:

  • target host: root@192.168.2.55
  • target path: /opt/workdock
  • env file: .env.prod
  • health URL: https://workdock.bostame.de/healthz/

Optional overrides:

DEPLOY_HOST=root@192.168.2.55 \
DEPLOY_PATH=/opt/workdock \
HEALTH_URL=https://workdock.bostame.de/healthz/ \
./scripts/deploy_prod_from_mac.sh

Manual server-side deploy only

If the latest code is already on the server:

cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml

Manual production deployment:

cd /opt/workdock
RUN_DJANGO_CHECK=1 DEPLOY_HEALTH_URL="https://workdock.bostame.de/healthz/" ./scripts/deploy_stack.sh .env.prod docker-compose.prod.yml

Runtime config sync

Deployment updates code. It does not automatically overwrite runtime database configuration.

Use explicit sync when you want local configuration to be compared or applied to another environment.

Supported sync scopes

  • PortalAppConfig
  • PortalBranding
  • PortalCompanyConfig

Step 1: export locally

docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
docker compose cp web:/tmp/portal-app-config.json /tmp/portal-app-config.json
docker compose cp web:/tmp/portal-deployment-config.json /tmp/portal-deployment-config.json

Step 2: copy JSON files to the server host

scp -4 /tmp/portal-app-config.json /tmp/portal-deployment-config.json root@192.168.2.55:/opt/workdock/

Step 3: copy JSON files into the running web container

The server uses baked images, not a bind-mounted app tree. Because of that, the running web container cannot automatically read arbitrary files from /opt/workdock.

Use:

ssh -4 root@192.168.2.55 '
  docker cp /opt/workdock/portal-app-config.json workdock-web-1:/tmp/portal-app-config.json &&
  docker cp /opt/workdock/portal-deployment-config.json workdock-web-1:/tmp/portal-deployment-config.json
'

Step 4: dry-run the import on the server

ssh -4 root@192.168.2.55 '
  docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run &&
  docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
'

Step 5: apply the import on the server

Only do this if the dry run looks correct.

ssh -4 root@192.168.2.55 '
  docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json &&
  docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json
'

Notes

  • PortalAppConfig covers app order, section, visibility, and overrides.
  • deployment-config sync covers branding/company text and metadata.
  • uploaded branding files are intentionally excluded:
    • logo
    • favicon
    • PDF letterhead
  • use dry-run first. Treat config sync as an explicit operator action, not something hidden inside deploy.

GitHub Actions workflows

Test deployment workflow

File:

Behavior:

  • triggers on push to develop
  • can also be run manually with workflow_dispatch
  • checks out the repo in GitHub Actions
  • uploads the working tree to the server over SSH
  • runs the server deployment script

Important:

  • this workflow only works if the GitHub runner can reach the server
  • it is not suitable for a pure LAN-only target using a private IP like 192.168.2.55
  • for the current environment, prefer the local Mac deploy script or a self-hosted runner on the LAN

Production deployment workflow

File:

Behavior:

  • manual only
  • uploads the working tree to the production server
  • runs the production deployment script

GitHub environment setup

In GitHub:

  1. open repository settings
  2. open Environments
  3. create:
    • development
    • production

Exact GitHub UI path

  1. Open the private repository:
    • https://github.com/Bostame/workdock-platform
  2. Click:
    • Settings
  3. In the left sidebar, open:
    • Environments
  4. Click:
    • New environment
  5. Create:
    • development
  6. Repeat and create:
    • production
  7. Open the development environment
  8. Under Environment secrets, click:
    • Add environment secret
  9. Add each required secret one by one
  10. Repeat the same pattern later for production

Development environment secrets

Add:

  • TEST_DEPLOY_HOST
  • TEST_DEPLOY_USER
  • TEST_DEPLOY_PORT
  • TEST_DEPLOY_PATH
  • TEST_DEPLOY_SSH_KEY

Current test values:

  • TEST_DEPLOY_HOST=192.168.2.55
  • TEST_DEPLOY_USER=root
  • TEST_DEPLOY_PORT=22
  • TEST_DEPLOY_PATH=/opt/workdock
  • TEST_DEPLOY_SSH_KEY=<private key that can ssh to root@192.168.2.55>

Development secret entry example

Use these exact values in the development environment:

TEST_DEPLOY_HOST

192.168.2.55

TEST_DEPLOY_USER

root

TEST_DEPLOY_PORT

22

TEST_DEPLOY_PATH

/opt/workdock

TEST_DEPLOY_SSH_KEY

<paste the full private SSH key that can log in to root@192.168.2.55>

The SSH key must include the full multi-line content, for example:

-----BEGIN OPENSSH PRIVATE KEY-----
...
-----END OPENSSH PRIVATE KEY-----

How to verify the SSH key before adding it

From your local machine:

ssh -4 root@192.168.2.55

If that works without asking for a password, the matching private key is the correct one to store in TEST_DEPLOY_SSH_KEY.
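
You can also confirm which public key a given private key corresponds to before pasting it into the secret. The demo below uses a throwaway key so it is safe to run anywhere; for your real key, derive the public line the same way and compare it against the server's ~/.ssh/authorized_keys.

```shell
# Demo with a throwaway key: ssh-keygen -y derives the public key from a
# private key file, which lets you verify a key pair without connecting.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -q -f "$keydir/demo_key"
ssh-keygen -y -f "$keydir/demo_key" > "$keydir/demo_key.derived"
# Compare key type and base64 material only; the .pub file also has a comment.
if [ "$(cut -d' ' -f1-2 "$keydir/demo_key.pub")" = "$(cut -d' ' -f1-2 "$keydir/demo_key.derived")" ]; then
  echo "key pair consistent"
fi
```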

Production environment secrets

Add:

  • PROD_DEPLOY_HOST
  • PROD_DEPLOY_USER
  • PROD_DEPLOY_PORT
  • PROD_DEPLOY_PATH
  • PROD_DEPLOY_SSH_KEY

How the CI/CD test deploy works

Normal flow

  1. push code to develop
  2. GitHub Actions runs Deploy Test
  3. workflow uploads repository contents to /opt/workdock
  4. server keeps its local .env.test
  5. deploy_stack.sh rebuilds and restarts the stack
  6. workflow succeeds only after /healthz/ is healthy

Manual trigger

From GitHub Actions:

  1. open Deploy Test
  2. click Run workflow

First GitHub Actions validation

After you add the development environment secrets:

  1. Open:
    • https://github.com/Bostame/workdock-platform/actions
  2. Open workflow:
    • Deploy Test
  3. Click:
    • Run workflow
  4. Select branch:
    • develop
  5. Run it
  6. Wait until both steps complete:
    • upload bundle
    • deploy over SSH
  7. Verify:
    • http://192.168.2.55:8088/healthz/
  8. Then open the app home page in the browser

What success looks like

  • workflow status is green in GitHub Actions
  • Deploy Test job finishes without SSH or health-check errors
  • /healthz/ returns 200 OK
  • the containers on the test server remain up

If the workflow fails

Check in this order:

  1. wrong or incomplete TEST_DEPLOY_SSH_KEY
  2. wrong TEST_DEPLOY_USER
  3. wrong TEST_DEPLOY_PATH
  4. changed server host key
  5. server disk-space or Docker runtime issue

How to validate a deployment

From your machine

curl -I http://192.168.2.55:8088/healthz/

On the server

cd /opt/workdock
docker compose --env-file .env.test -f docker-compose.prod.yml ps
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 web
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 worker
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 caddy

If localhost still looks wrong after the server is fixed

Use this order before assuming the local checkout is missing code:

  1. hard refresh the page with Cmd + Shift + R
  2. clear site data for 127.0.0.1:8088 in browser devtools and sign in again
  3. restart the local web container:
    docker compose restart web
    
  4. if the stale behavior persists, rebuild the local stack:
    docker compose up -d --build
    

This is the correct recovery path for stale browser state, shared-header fixes, page-local CSS fixes, and versioned static asset updates.

Rollback

This deployment path is source-upload based, not image-tag based.

Rollback options:

  1. revert the bad commit on develop and let GitHub Actions deploy again
  2. manually re-upload a previous working checkout and rerun deploy_stack.sh
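
Option 1 is plain `git revert`. The mechanics are shown below in a throwaway repo with dummy identities; in real use you would revert the bad commit on develop and push so GitHub Actions (or the Mac helper) redeploys.

```shell
# Throwaway-repo demo of reverting a bad commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ops@example.com
git config user.name ops
echo good > app.conf; git add app.conf; git commit -qm "good change"
echo bad >> app.conf; git add app.conf; git commit -qm "bad change"
# Revert the most recent commit; history is preserved, the change is undone.
git revert --no-edit HEAD >/dev/null
cat app.conf
```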

For production, you may later want image-tag based rollback. That is not necessary yet for the test box.

Operational notes

  • server-local env files must survive deployments
  • do not store .env.test or .env.prod in Git
  • test deployment is intentionally weaker than production on transport security
  • production should not reuse the test env model

Command reference

Use this as the short operational index.

Local development

docker compose up -d --build

Start or rebuild the local stack.

docker compose restart web
docker compose restart worker

Restart the app services after code/template changes.

Validation

docker compose exec -T web python manage.py check

Run Django system checks.

docker compose exec -T web python manage.py test

Run the full test suite.

Local test deployment

./scripts/deploy_test_from_mac.sh

Sync the current develop checkout to the LAN test server and deploy it.

Direct server deployment

cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml

Deploy when code is already present on the server.

Config export/import

docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json

Export runtime configuration from local.

docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run

Validate server-side config import before applying it.

Backup

make backup-create
make backup-verify BACKUP_DIR=backups/backup_YYYYmmdd_HHMMSS

Create and verify backup bundles.

Current known-good state

Validated manually:

  • repository pushed to private GitHub
  • server bootstrap completed
  • test stack deployed successfully
  • health check reachable at:
    • http://192.168.2.55:8088/healthz/