{% extends 'workflows/base_shell.html' %} {% load static %} {% block title %}Developer Handbook{% endblock %} {% block extra_css %} {% endblock %} {% block shell_body %} {% include 'workflows/includes/app_header.html' with header_show_home=1 header_inside_shell=1 %}

Developer Handbook

Project Wiki

Engineering guide for development, deployment, maintenance, and extension of the current portal installation.

Overview · Structure · Workflow · Local Dev · Docker · Database · Guidelines · Translations · PDF Pipeline · Email Pipeline · Nextcloud · Builders · Testing · Backup · Hosts & Domains · CI/CD · Deployment · TUBCO Setup · Commands · Troubleshooting · Security

1) Overview

This handbook is for developers and maintainers. It documents the engineering workflow of the standalone product repository.

2) Repository Structure

3) Working Model

Branch strategy

Normal change flow

  1. start from develop
  2. implement the change
  3. run validation
  4. update docs if architecture or workflow changed
  5. push to GitHub and let CI run
  6. deploy from the Mac if test-server verification is needed
  7. promote develop into main when stable

Dual remote rule

./scripts/git_remote_target.sh status

Use the helper above before pushing if there is any doubt about which remote should receive the change.

Plain git push should default to origin, and a repo-local pre-push hook blocks accidental pushes to tubco unless the ref is an approved TUBCO branch or baseline tag.
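
For reference, a minimal sketch of what such a hook can look like, assuming an allowlist of release branches and baseline tags (the patterns below are illustrative, not the shipped hook):

#!/bin/sh
# Illustrative pre-push hook sketch (.git/hooks/pre-push), not the shipped hook.
# Git passes the remote name and URL as arguments and one
# "local_ref local_sha remote_ref remote_sha" line per ref on stdin.
remote="$1"
if [ "$remote" = "tubco" ]; then
  while read -r local_ref local_sha remote_ref remote_sha; do
    case "$remote_ref" in
      refs/heads/release/tubco-*|refs/tags/tubco-baseline-*) ;;
      *) echo "pre-push: refusing to push $remote_ref to tubco" >&2; exit 1 ;;
    esac
  done
fi
exit 0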

Customer release branches

Use a dedicated release branch when a customer should receive the current stable product line but not future features by default.

This keeps the customer line stable while the main product keeps evolving.
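
As an illustration, cutting such a branch from the stable line looks like this; the branch name follows the release/tubco-v1 convention used later in this handbook:

git checkout main
git pull --ff-only origin main
git checkout -b release/tubco-v1
git push -u origin release/tubco-v1

Fixes the customer should receive are then brought over deliberately instead of flowing in automatically.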

4) Local Development Workflow

Start

cd /path/to/workdock-platform
docker compose up -d --build

Main URLs

Bootstrap users

5) Docker Operations

docker compose up -d --build
docker compose restart web
docker compose restart worker
docker compose logs --no-color --tail=120 web
docker compose logs --no-color --tail=120 worker
docker compose down
docker compose down -v

The source code is bind-mounted into the container. Most template/view/static changes only require a web restart, not a full rebuild. Image changes such as system packages require docker compose up -d --build.

6) Database and Migrations

docker compose exec -T web python manage.py makemigrations
docker compose exec -T web python manage.py migrate
docker compose exec -T web python manage.py check
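
To see which migrations are applied before or after a change, the standard introspection command runs the same way:

docker compose exec -T web python manage.py showmigrations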

Role and Permission Model

7) Engineering Guidelines

Core rules

  1. Preserve behavior while refactoring.
  2. Prefer shared components over page-local special cases.
  3. Do not overwrite environment-specific runtime config as a side effect of code deploys.
  4. Keep code-driven behavior and data-driven behavior mentally separate.
  5. Update documentation in the same branch when operational workflow changes.
  6. Keep branded error handling wired through the root URL handlers so production does not fall back to Django default error pages.

Code vs data

8) Translation Workflow

Standard Django i18n path

make i18n-update-en
make i18n-compile

Equivalent raw commands:

docker compose exec -T web django-admin makemessages -l en
docker compose exec -T web django-admin compilemessages
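
Compiled .mo catalogs are only picked up when the Python processes restart, so finish with:

docker compose restart web
docker compose restart worker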

9) PDF Pipeline

xhtml2pdf is sensitive to layout complexity. Keep print templates conservative and verify every structural change with a real generated PDF.
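
As a quick smoke test of the rendering path itself, something like the following can confirm xhtml2pdf produces output at all; the snippet renders a throwaway document, not a real template:

docker compose exec -T web python manage.py shell -c "
from xhtml2pdf import pisa
with open('/tmp/pdf-smoke.pdf', 'wb') as out:
    status = pisa.CreatePDF('<h1>PDF smoke test</h1>', dest=out)
print('pisa errors:', status.err)
"
docker compose cp web:/tmp/pdf-smoke.pdf /tmp/pdf-smoke.pdf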

10) Email Pipeline

11) Nextcloud Integration

12) Branding

12b) App Registry

12c) Trial Lifecycle

13) Builder Architecture

Form Builder

Intro Builder

Dynamic content should use explicit DE/EN fields with German fallback, not machine translation at runtime.

Audit Trail

14) Testing and Validation

docker compose exec -T web python manage.py check
docker compose exec -T web python manage.py test
docker compose exec -T web python manage.py run_staging_e2e_check
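
For faster iteration, the usual Django test labels also work; the app label below is an assumption about the local layout:

docker compose exec -T web python manage.py test workflows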

15) Backup and Restore

make backup-create
make backup-verify BACKUP_DIR=backups/backup_YYYYmmdd_HHMMSS

16) Host and Domain Configuration

17) CI/CD

Current operating model

What is good enough today vs what is standard later

Why deploy is manual right now

The test server is inside the local network and uses the private IP address 192.168.2.55. GitHub-hosted runners on the public internet cannot reliably reach that target. Because of that, the correct deployment path today is:

  1. push code to GitHub
  2. let GitHub run CI
  3. deploy from the Mac on the same LAN

Automatic CD from GitHub becomes appropriate only after moving to a public server or using a self-hosted runner inside the LAN.

Recommended standard CI/CD model

  1. Keep one private repository.
  2. Use short-lived feature branches from develop.
  3. Require CI to pass before merge into develop.
  4. Deploy develop to a staging environment.
  5. Promote develop into main only after staging validation.
  6. Deploy main to production behind HTTPS with DEBUG=0.
  7. Protect production with GitHub environment approvals, production secrets, and rollback steps.

If you want the most standard shape, the long-term target is:

feature branch -> CI -> develop -> staging deploy -> validate -> main -> production deploy

What to change when this becomes a standard deployment

  1. Move staging and production onto hosts that GitHub or a self-hosted runner can reach reliably.
  2. Keep separate env files and secrets for staging and production.
  3. Run production with DJANGO_DEBUG=0, secure cookies, and HTTPS only.
  4. Add GitHub environment protection rules for staging and production.
  5. Use the production deploy path only from main.
  6. Add backup verification and health verification as standard post-deploy checks.
  7. Later, consider image-based deploys if you want cleaner rollbacks than source-upload deploys.

What to do for normal work

  1. Start from develop.
  2. Do the implementation work.
  3. Push to GitHub.
  4. Let CI finish.
  5. Run the local test deployment helper from the Mac.
  6. Verify the updated version in the browser.
  7. When the integration line is stable, merge develop into main.
  8. Deploy production from the Mac only from main.

Decision guide

One-command test deployment

From the Mac on the same network:

git checkout develop
./scripts/deploy_test_from_mac.sh

This helper script does all of the following:

  1. checks that the current branch is develop
  2. fast-forwards from origin/develop
  3. checks that the server env file exists
  4. syncs the repository to /opt/workdock with rsync
  5. preserves server-local env files like .env.test and .env.prod
  6. runs the remote deployment script
  7. waits for the health endpoint
  8. prints the deployed commit and branch

One-command production deployment

From the Mac, only after the change has been promoted into main:

git checkout main
./scripts/deploy_prod_from_mac.sh

This helper script does all of the following:

  1. checks that the current branch is main
  2. fast-forwards from origin/main
  3. checks that the server env file exists
  4. syncs the repository to /opt/workdock with rsync
  5. preserves server-local env files like .env.test and .env.prod
  6. runs the remote deployment script with RUN_DJANGO_CHECK=1
  7. waits for the public health endpoint
  8. prints the deployed commit and branch

Test server values

GitHub Actions status

Minimal standard checklist for later

If the local deploy helper fails

  1. Check whether /opt/workdock/.env.test still exists on the server.
  2. Check SSH access from the Mac:
    ssh -4 root@192.168.2.55
  3. Check server health directly:
    curl -I http://192.168.2.55:8088/healthz/
  4. Check container status:
    ssh root@192.168.2.55 "cd /opt/workdock && docker compose --env-file .env.test -f docker-compose.prod.yml ps"

The LAN test deployment intentionally runs with DJANGO_DEBUG=1 in .env.test: with DEBUG=0, the security checks correctly reject insecure cookie settings, and this deployment is still plain HTTP behind a local test topology. That trade-off is acceptable for the test box only; production must run with HTTPS and DEBUG=0.

If you still want branded wrong-URL and permission pages on the LAN test server while keeping DJANGO_DEBUG=1, enable FORCE_BRANDED_ERROR_PAGES=1 in .env.test. Full branded 500 behavior still requires DEBUG=0, which remains the correct production-style setup.
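
In .env.test that combination looks like this, with the names exactly as referenced above:

DJANGO_DEBUG=1
FORCE_BRANDED_ERROR_PAGES=1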

18) Deployment

Test server stack

What the deploy script does

  1. Validate env file presence
  2. Build web, worker, and caddy
  3. Start db and redis
  4. Initialize writable volume ownership for media/static/backups
  5. Run migrations
  6. Run bootstrap_initial_users
  7. Run collectstatic
  8. Optionally run manage.py check
  9. Start web, worker, and caddy
  10. Wait until /healthz/ becomes healthy (sketched below)
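
The health wait in step 10 amounts to a retry loop like this sketch; the real script's timeout, interval, and URL handling may differ:

attempts=0
until curl -fsS "http://127.0.0.1:8088/healthz/" >/dev/null; do
  attempts=$((attempts + 1))
  [ "$attempts" -ge 30 ] && { echo "healthz never came up" >&2; exit 1; }
  sleep 2
done
echo "healthy"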

Manual deploy

The preferred test-deployment path is the local helper script from a Mac or another LAN-connected workstation:

./scripts/deploy_test_from_mac.sh

This script fast-forwards develop, checks that the remote env file exists, syncs the repo to the server with rsync, runs the remote deployment, verifies the health endpoint, and prints the deployed commit hash.

The helper scripts explicitly preserve server-local env files such as .env.test and .env.prod so deployment does not wipe machine-specific secrets.
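
Conceptually the sync step behaves like the rsync call below; the exact flags and exclude list live in the helper scripts and may differ:

rsync -az --delete \
  --exclude '.env.test' \
  --exclude '.env.prod' \
  ./ root@192.168.2.55:/opt/workdock/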

Manual production deploy from the Mac

Use the production helper only from main:

git checkout main
./scripts/deploy_prod_from_mac.sh

This script fast-forwards main, checks that .env.prod exists on the target server, syncs the repo, runs the production deployment with RUN_DJANGO_CHECK=1, verifies https://workdock.bostame.de/healthz/, and prints the deployed commit hash.

Direct server-side deploy is still available if the code is already on the server:

cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
cd /opt/workdock
RUN_DJANGO_CHECK=1 DEPLOY_HEALTH_URL="https://workdock.bostame.de/healthz/" ./scripts/deploy_stack.sh .env.prod docker-compose.prod.yml

Validation after deploy

curl -I http://192.168.2.55:8088/healthz/
ssh root@192.168.2.55 "cd /opt/workdock && docker compose --env-file .env.test -f docker-compose.prod.yml ps"

Runtime config sync

Deployment updates code. It does not automatically overwrite runtime database configuration. Use explicit sync when you want local configuration compared or applied to the server.

Supported sync scopes:

Export locally:

docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
docker compose cp web:/tmp/portal-app-config.json /tmp/portal-app-config.json
docker compose cp web:/tmp/portal-deployment-config.json /tmp/portal-deployment-config.json

Copy the JSON files to the server host:

scp -4 /tmp/portal-app-config.json /tmp/portal-deployment-config.json root@192.168.2.55:/opt/workdock/

Because the server runs baked container images instead of a bind-mounted app tree, copy the files into the running web container before importing:

ssh -4 root@192.168.2.55 '
docker cp /opt/workdock/portal-app-config.json workdock-web-1:/tmp/portal-app-config.json &&
docker cp /opt/workdock/portal-deployment-config.json workdock-web-1:/tmp/portal-deployment-config.json
'

Dry-run the import first:

ssh -4 root@192.168.2.55 '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
'

Only apply the import after the dry run looks correct:

ssh -4 root@192.168.2.55 '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json
'

Uploaded branding assets such as logo, favicon, and PDF letterhead are intentionally not included in deployment-config sync. They remain explicit media assets.

Proxmox / LXC requirement

The current server is an Ubuntu CT on Proxmox running Docker inside the container. The CT required Proxmox-side configuration before Docker containers could start correctly.

features: nesting=1,keyctl=1
lxc.apparmor.profile: unconfined
lxc.mount.entry: /dev/null sys/module/apparmor/parameters/enabled none bind 0 0

Those lines belong in /etc/pve/lxc/<CTID>.conf on the Proxmox host, followed by pct restart <CTID>.
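
A quick post-restart check from the Proxmox host confirms Docker is functional inside the CT:

pct exec <CTID> -- docker info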

Production expectations

Release checklist

  1. Run manage.py check
  2. Run tests or targeted verification
  3. Run translation compile step
  4. Rebuild containers if Python dependencies changed, then verify python -c "import requests" does not emit a compatibility warning
  5. Generate at least one onboarding/offboarding PDF if PDF templates changed
  6. Verify MailHog or SMTP path if email behavior changed
  7. Verify Nextcloud upload if integration behavior changed
  8. Update Project Wiki and Developer Handbook if architecture or operational workflow changed
  9. Take a snapshot commit before major next-phase work

18b) TUBCO Customer Setup

What this branch is for

First-time customer setup

  1. Check out release/tubco-v1.
  2. Create .env.prod on the target server.
  3. Run the destructive reset/bootstrap helper from the Mac.
  4. Import the intended TUBCO config baseline.
  5. Verify https://portal.tub.co/healthz/.

The corresponding commands:

git checkout release/tubco-v1
RESET_CONFIRM=RESET \
EXPECTED_BRANCH=release/tubco-v1 \
DEPLOY_HOST=root@<customer-host> \
DEPLOY_PATH=/opt/workdock \
REMOTE_ENV_FILE=.env.prod \
HEALTH_URL=https://portal.tub.co/healthz/ \
RUN_DJANGO_CHECK=1 \
./scripts/reset_stack_from_mac.sh

Required production env values

APP_DOMAIN=portal.tub.co
APP_BASE_URL=https://portal.tub.co
DJANGO_DEBUG=0
DJANGO_SECURE_COOKIES=1
DJANGO_SECURE_SSL_REDIRECT=1

The customer server also needs strong values for DJANGO_SECRET_KEY and POSTGRES_PASSWORD.
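
Any cryptographically strong generator is fine; for example (these exact commands are a suggestion, not a project requirement):

python3 -c "import secrets; print(secrets.token_urlsafe(64))"   # e.g. DJANGO_SECRET_KEY
openssl rand -base64 32                                         # e.g. POSTGRES_PASSWORD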

Config baseline import

Export the intended local baseline:

docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
docker compose cp web:/tmp/portal-app-config.json /tmp/portal-app-config.json
docker compose cp web:/tmp/portal-deployment-config.json /tmp/portal-deployment-config.json

Copy the payloads to the customer server and then into the running web container:

scp -4 /tmp/portal-app-config.json /tmp/portal-deployment-config.json root@<customer-host>:/opt/workdock/
ssh -4 root@<customer-host> '
docker cp /opt/workdock/portal-app-config.json workdock-web-1:/tmp/portal-app-config.json &&
docker cp /opt/workdock/portal-deployment-config.json workdock-web-1:/tmp/portal-deployment-config.json
'

Dry-run first, then apply:

ssh -4 root@<customer-host> '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
'

ssh -4 root@<customer-host> '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json
'

Uploaded assets such as logo, favicon, and PDF letterhead are still separate media and need explicit upload.

Normal TUBCO updates

When you intentionally want to update the customer branch remote:

./scripts/git_remote_target.sh status
./scripts/git_remote_target.sh push-tubco release/tubco-v1

Use a TUBCO personal access token stored in the macOS keychain, not a reusable account password.

Customer role boundary

19) Command Reference

Local development

docker compose up -d --build

Start or rebuild the local stack.

docker compose restart web
docker compose restart worker

Restart app services after code or template changes.

./scripts/git_remote_target.sh status

Show the current branch, active local identity, and both remotes before pushing.

Validation

docker compose exec -T web python manage.py check

Run Django system checks.

docker compose exec -T web python manage.py test

Run the full test suite.

Local test deployment

./scripts/deploy_test_from_mac.sh

Sync the current develop checkout to the LAN test server and deploy it.

Reset a stack from scratch

git checkout develop
RESET_CONFIRM=RESET EXPECTED_BRANCH=develop ./scripts/reset_stack_from_mac.sh

Wipe the current test stack state and rebuild it with default bootstrap data.

git checkout release/tubco-v1
RESET_CONFIRM=RESET \
EXPECTED_BRANCH=release/tubco-v1 \
DEPLOY_HOST=root@<customer-host> \
DEPLOY_PATH=/opt/workdock \
REMOTE_ENV_FILE=.env.prod \
HEALTH_URL=https://portal.tub.co/healthz/ \
RUN_DJANGO_CHECK=1 \
./scripts/reset_stack_from_mac.sh

Use the second form for a customer setup from scratch. This is destructive and removes database/media/static/backups before bootstrapping again.

TUBCO setup

Rebuild a fresh TUBCO environment from the customer branch using the same destructive reset invocation shown above under "Reset a stack from scratch" (the release/tubco-v1 form).

./scripts/git_remote_target.sh push-tubco release/tubco-v1

Push an explicitly approved customer update to the TUBCO remote.

Production deployment

./scripts/deploy_prod_from_mac.sh

Sync the current main checkout to the production target and deploy it with production checks enabled.

Remote targeting

./scripts/git_remote_target.sh push-origin
./scripts/git_remote_target.sh push-tubco release/tubco-v1

Push to the intended remote explicitly instead of relying on memory.

./scripts/git_remote_target.sh set-own-identity
./scripts/git_remote_target.sh set-tubco-identity

Switch between the normal commit identity and the TUBCO customer identity when needed.

For the TUBCO HTTPS remote, prefer a personal access token instead of a reusable account password.

This repo now uses credential.helper=osxkeychain locally, so the TUBCO PAT should be stored in the macOS keychain instead of being embedded in remote URLs.
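
For reference, that helper is enabled with plain git configuration; on the next HTTPS push to the tubco remote, git prompts once and the keychain stores the PAT:

git config credential.helper osxkeychain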

Direct server deployment

cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml

Deploy when code is already present on the server.

cd /opt/workdock
RUN_DJANGO_CHECK=1 DEPLOY_HEALTH_URL="https://workdock.bostame.de/healthz/" ./scripts/deploy_stack.sh .env.prod docker-compose.prod.yml

Production deploy when code is already present on the server.

Config sync

docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json

Export runtime configuration from local.

docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run

Validate server-side config import before applying it.

Backup

make backup-create
make backup-verify BACKUP_DIR=backups/backup_YYYYmmdd_HHMMSS

Create and verify backup bundles.

20) Troubleshooting

Localhost still looks stale after the server is already fixed

  1. Hard refresh the page with Cmd + Shift + R.
  2. If it still looks wrong, clear site data for 127.0.0.1:8088 in the browser devtools and sign in again.
  3. Restart the local web container:
    docker compose restart web
  4. If the issue still survives, rebuild the local stack:
    docker compose up -d --build

This is the right order when shared header fixes, page-local CSS fixes, or versioned static assets look correct on the server but localhost still shows the old UI.

21) Security and Maintenance Notes

For the next coder

{% endblock %}