{% extends 'workflows/base_shell.html' %} {% load static %} {% block title %}Developer Handbook{% endblock %} {% block extra_css %} {% endblock %} {% block shell_body %} {% include 'workflows/includes/app_header.html' with header_show_home=1 header_inside_shell=1 %}
Engineering guide for development, deployment, maintenance, and extension of the current portal deployment.
This handbook is for developers and maintainers. It documents the engineering workflow of the standalone product repository.
The repository is workdock-platform; contribution rules live in CONTRIBUTING.md.

Repository layout:
- /backend/config/: Django settings, WSGI, URL config
- /backend/workflows/: application logic, views, models, tasks, templates, static assets
- /backend/workflows/views.py: thin route wrapper layer; most view logic now lives in split modules by domain
- /backend/workflows/models.py: stable model import surface over split model modules
- /backend/workflows/tasks.py: async task entrypoints; PDF and notification logic were moved into dedicated modules
- /backend/workflows/templates/workflows/base_shell.html: standard page shell for new staff-facing pages. New pages extend base_shell.html; do not rebuild topbar/frame logic in page-local templates.
- /backend/workflows/roles.py: centralized role names, capability matrix, and template permission helpers
- /backend/workflows/pdf_assets/: default PDF HTML templates and default letterhead
- /backend/media/pdfs/: generated PDF outputs on host volume
- /backend/locale/: translation catalogs
- /docker-compose.yml: local runtime orchestration
- /Makefile: repeatable translation commands
- /.github/workflows/i18n.yml: translation compile validation in CI

Branching model:
- develop is the active integration branch.
- main is the stable branch intended for production promotion.
- Feature branches are cut from develop and merge back into develop.
- Cut a customer release branch from main when a customer must not receive future feature work automatically.
- Day-to-day work happens on develop; merge develop into main when stable.

Remotes:
- origin is the normal product remote on GitHub.
- tubco is the customer remote for TUBCO only.
- Normal pushes go to origin.
- Push to tubco only when you explicitly want to update the customer branch.

./scripts/git_remote_target.sh status
Use the helper above before pushing if there is any doubt about which remote should receive the change.
Use a dedicated release branch when a customer should receive the current stable product line but not future features by default.
- Customer branch: release/tubco-v1
- Baseline tag: tubco-baseline-2026-03
- The release branch is cut from main.
- It does not automatically follow develop.

This keeps the customer line stable while the main product keeps evolving.
cd /path/to/workdock-platform
docker compose up -d --build
Local endpoints and test accounts:
- Application: http://127.0.0.1:8088/
- Local mail UI: http://127.0.0.1:8025/
- Health check: http://127.0.0.1:8088/healthz/
- Test accounts: admin_test / admin12345 and user_test / user12345

Common local commands:

docker compose up -d --build
docker compose restart web
docker compose restart worker
docker compose logs --no-color --tail=120 web
docker compose logs --no-color --tail=120 worker
docker compose down
docker compose down -v

After dependency or image changes, rerun docker compose up -d --build.
docker compose exec -T web python manage.py makemigrations
docker compose exec -T web python manage.py migrate
docker compose exec -T web python manage.py check
Migrations also run automatically on web startup via entrypoint-web.sh.

Roles and capabilities:
- Role groups: Platform Owner, Super Admin, Admin, IT Staff, Staff.
- Role groups are synced by a post_migrate hook in workflows.signals.
- The capability matrix lives in workflows.roles.CAPABILITIES.
- Use _require_capability(...) in views instead of flat is_staff checks.
- Templates receive role flags via workflows.context_processors.role_context.
- Platform Owner is the product-level role. Company roles remain Super Admin, Admin, IT Staff, and Staff.
- The user-management UI lives at /admin-tools/users/ and is the preferred path for normal role assignment, account activation, invitation mail dispatch, password-reset mail dispatch, and controlled user deletion.
- Accounts with is_staff=True but no explicit role group currently fall back to the Admin capability set.
- superuser accounts resolve to Super Admin.
- To add a new capability: define it in roles.py, gate the view, and hide the UI affordance when the capability is absent. A gating sketch follows below.
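A minimal sketch of the gating idea. It assumes CAPABILITIES is a mapping from role names to capability sets, and "manage_users" is a hypothetical capability name; real views should call the project's _require_capability(...) helper instead of this inline lookup.

from django.http import HttpResponseForbidden
from django.shortcuts import render

from workflows.roles import CAPABILITIES  # real capability matrix per this handbook


def user_admin_page(request):
    # Gate on a capability instead of a flat request.user.is_staff check.
    # Illustration only: the shipped helper is _require_capability(...).
    role = request.user.groups.values_list("name", flat=True).first()
    if "manage_users" not in CAPABILITIES.get(role, set()):  # hypothetical capability
        return HttpResponseForbidden()
    return render(request, "workflows/user_admin.html")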
Translation workflow:

make i18n-update-en
make i18n-compile
Equivalent raw commands:
docker compose exec -T web django-admin makemessages -l en
docker compose exec -T web django-admin compilemessages
Internationalization notes:
- gettext is installed in the Docker image.
- Manual .mo compilation is not required anymore; CI validates the compile.

Email and language handling:
- Email content is managed via NotificationTemplate, NotificationRule, and the welcome-email settings UI with explicit DE/EN subject/body fields.
- Users carry a preferred_language so workflow emails can render in the submitter's active UI language with German fallback.
- preferred_language is normalized in model save() and also has a DB default of de, so alternate creation paths cannot insert null values (see the sketch after the commands below).

PDF generation:
- PDFs are rendered with xhtml2pdf.
- The default PDF templates and letterhead ship with the image, but can now be replaced from Admin Apps → Branding.
- PDF task entrypoints remain in backend/workflows/tasks.py; rendering logic was split into pdf_rendering.py and pdf_sections.py.
- PDFs render in the submitter's preferred_language, with German fallback.
- PDF templates: onboarding_template.html, offboarding_template.html, onboarding_intro_template.html, onboarding_intro_session_pdf.html.

Notifications, services, and runtime configuration:
- Notification dispatch is driven by NotificationTemplate and NotificationRule.
- Recipients receive content according to their preferred_language value.
- Email logic lives in email_workflows.py and notification_dispatch.py.
- Shared business logic lives in backend/workflows/services.py.
- The backup target is configured under Integrationen → Backup-Ziel.
- Branding is stored in PortalBranding and managed from the Branding admin app.
- Customer-specific behavior should come from configuration, not @tub.co code changes.

Export/import the deployment configuration:

docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
docker compose exec -T web python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
docker compose exec -T web python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json
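The save()-time normalization referenced above, as a minimal sketch. The model name and field placement are assumptions; the handbook only guarantees normalization in save() and a DB default of de.

from django.db import models

SUPPORTED_LANGUAGES = {"de", "en"}


class UserProfile(models.Model):  # hypothetical model name
    preferred_language = models.CharField(max_length=8, default="de")

    def save(self, *args, **kwargs):
        # Lower-case, strip region tags, and fall back to German so alternate
        # creation paths can never persist an empty or unsupported value.
        lang = (self.preferred_language or "de").lower().split("-")[0]
        self.preferred_language = lang if lang in SUPPORTED_LANGUAGES else "de"
        super().save(*args, **kwargs)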
Portal apps:
- The app registry lives in workflows/app_registry.py.
- Per-app enablement is stored in PortalAppConfig.
- App access maps to capabilities in roles.py.
- The management UI lives at /admin-tools/apps/ for Platform Owner.

Export/import the app configuration:

docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run
docker compose exec -T web python manage.py import_portal_app_config /tmp/portal-app-config.json
Trial mode:
- Trial state is stored in PortalTrialConfig.
- The management UI lives at /admin-tools/trial/ for Platform Owner.
- Enforcement runs in workflows.middleware.TrialModeMiddleware, so expiry is handled centrally instead of per-view (a middleware sketch follows the cleanup command).
- Clean up an expired trial workspace with:

docker compose exec -T web python manage.py cleanup_expired_trial_workspace --yes-delete
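The middleware sketch referenced above. The shipped class is workflows.middleware.TrialModeMiddleware; the config lookup, the expires_at field, and the redirect target below are assumptions for illustration.

from django.shortcuts import redirect
from django.utils import timezone


class TrialModeMiddlewareSketch:
    """Illustration only; the real class is TrialModeMiddleware."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        config = self._trial_config()
        expired = config is not None and config.expires_at <= timezone.now()
        if expired and not request.path.startswith("/admin-tools/trial/"):
            # One central decision point instead of per-view expiry checks.
            return redirect("/admin-tools/trial/")
        return self.get_response(request)

    @staticmethod
    def _trial_config():
        # The real implementation reads PortalTrialConfig; field names assumed.
        return None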
Form builder and admin tooling:
- Dynamic form definitions live in FormFieldConfig + FormOption.
- Configuration logic sits in form_builder_config.py and runtime logic in form_builder_runtime.py; form_builder.py is the stable facade import path.
- Intro checklist items are stored in IntroChecklistItem.
- Admin actions are recorded in AdminAuditLog and reviewable at /admin-tools/audit-log/.
- A backups UI exists at /admin-tools/backups/ for create, verify, and delete actions. Keep real restore CLI-only.

Validation commands:

docker compose exec -T web python manage.py check
docker compose exec -T web python manage.py test
docker compose exec -T web python manage.py run_staging_e2e_check
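A minimal smoke-test sketch against the /healthz/ endpoint used throughout this handbook; the 200 status expectation is an assumption.

from django.test import TestCase


class HealthzSmokeTest(TestCase):
    def test_healthz_responds(self):
        # /healthz/ is the health endpoint referenced throughout this handbook.
        response = self.client.get("/healthz/")
        self.assertEqual(response.status_code, 200)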
Development conventions:
- Run manage.py check after model/view/template changes.
- Extend base_shell.html and keep header/frame logic out of page-local templates.
- Long-running jobs move through submitted → processing → completed/failed.
- To retry a stuck job, reset its status to submitted and enqueue the appropriate Celery task again (see the sketch below).
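The retry sketch referenced above. WorkflowRequest and generate_pdf_task are hypothetical names; only the status values come from this handbook.

from workflows.models import WorkflowRequest   # hypothetical model name
from workflows.tasks import generate_pdf_task  # hypothetical task name


def retry_stuck_job(request_id: int) -> None:
    job = WorkflowRequest.objects.get(pk=request_id)
    if job.status == "processing":        # stuck mid-flight
        job.status = "submitted"          # back to the start of the state machine
        job.save(update_fields=["status"])
        generate_pdf_task.delay(job.pk)   # enqueue the appropriate Celery task again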
Backup commands:

make backup-create
make backup-verify BACKUP_DIR=backups/backup_YYYYmmdd_HHMMSS
Backup details:
- Bundles are written to backend/backups/ and ignored by git.
- Each bundle includes a media/ archive, metadata, and SHA256 checksums.
- The remote backup target is configured under Integrationen → Backup-Ziel: nextcloud is implemented; s3 and nfs are config-ready but not yet implemented.
- Verify (or create) the latest backup:

docker compose exec -T web python manage.py verify_latest_backup --create-if-missing
Restore from a bundle (destructive, CLI-only):

./scripts/backup_restore.sh --yes-restore backend/backups/backup_YYYYmmdd_HHMMSS
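A sketch of how the SHA256 verification can work. The checksum file name and the "hash  relative-path" line format are assumptions; the handbook only states that bundles carry SHA256 checksums.

import hashlib
from pathlib import Path


def verify_bundle(bundle_dir: str, checksum_file: str = "SHA256SUMS") -> bool:
    base = Path(bundle_dir)
    ok = True
    for line in (base / checksum_file).read_text().splitlines():
        expected, name = line.split(maxsplit=1)  # "<hash>  <relative path>"
        digest = hashlib.sha256((base / name).read_bytes()).hexdigest()
        if digest != expected:
            print(f"checksum mismatch: {name}")
            ok = False
    return ok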
Host and origin configuration:
- APP_DOMAIN: canonical hostname without scheme
- APP_BASE_URL: canonical external URL including scheme
- DJANGO_ALLOWED_HOSTS: explicit host/IP allow-list
- DJANGO_CSRF_TRUSTED_ORIGINS: explicit origin allow-list with scheme

Settings merge APP_DOMAIN and the hostname from APP_BASE_URL into the effective allowed-host configuration automatically, and APP_BASE_URL is also appended to trusted CSRF origins automatically (a settings sketch follows this subsection). Treat APP_DOMAIN and APP_BASE_URL as the primary deployment-facing values instead of repeatedly editing long host/origin strings. Additional hosts still belong in DJANGO_ALLOWED_HOSTS and, if needed, in DJANGO_CSRF_TRUSTED_ORIGINS. A host-configuration runbook lives at /admin-tools/deployment-hosts/.

An Invalid HTTP_HOST header failure happens before normal page routing, so a broken hostname cannot render a custom error page on that same broken host. Use a working host or IP to access the runbook and fix the env file.

CI runs against develop, production promotion goes through main, and GitHub deploy workflows exist for main and a separate test helper. The test server is inside the local network and uses the private IP address 192.168.2.55. GitHub-hosted runners on the public internet cannot reliably reach that target. Because of that, the correct deployment path today is push-based deployment from a LAN-connected workstation using the helper scripts described below. Automatic CD from GitHub becomes appropriate only after moving to a public server or using a self-hosted runner inside the LAN.
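The settings sketch referenced above, illustrating the merge behavior under the env names from this section; this is not the project's actual settings code.

import os
from urllib.parse import urlparse

ALLOWED_HOSTS = [h for h in os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(",") if h]
CSRF_TRUSTED_ORIGINS = [o for o in os.environ.get("DJANGO_CSRF_TRUSTED_ORIGINS", "").split(",") if o]

# Fold APP_DOMAIN and the hostname of APP_BASE_URL into the allow-list.
app_domain = os.environ.get("APP_DOMAIN", "")
base_url = os.environ.get("APP_BASE_URL", "")
base_host = urlparse(base_url).hostname if base_url else None

for host in (app_domain, base_host):
    if host and host not in ALLOWED_HOSTS:
        ALLOWED_HOSTS.append(host)

# APP_BASE_URL is also appended to the trusted CSRF origins.
if base_url and base_url not in CSRF_TRUSTED_ORIGINS:
    CSRF_TRUSTED_ORIGINS.append(base_url)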
The recommended workflow:
- Cut feature branches from develop and merge them back into develop.
- Deploy develop to a staging environment.
- Merge develop into main only after staging validation.
- Deploy main to production behind HTTPS with DEBUG=0.

If you want the most standard shape, the long-term target is:
feature branch -> CI -> develop -> staging deploy -> validate -> main -> production deploy
Production runs with DJANGO_DEBUG=0, secure cookies, and HTTPS only. Keep the development and production configurations separate. Production deploys come from main; daily work stays on develop; promote by merging develop into main, then deploy from main.

From the Mac on the same network:
git checkout develop
./scripts/deploy_test_from_mac.sh
This helper script does all of the following:
- fast-forwards develop to origin/develop
- syncs the repo to /opt/workdock with rsync
- preserves the server-local env files .env.test and .env.prod

From the Mac, only after the change has been promoted into main:
git checkout main
./scripts/deploy_prod_from_mac.sh
This helper script does all of the following:
- fast-forwards main to origin/main
- syncs the repo to /opt/workdock with rsync
- preserves the server-local env files .env.test and .env.prod
- runs the remote production deployment with RUN_DJANGO_CHECK=1

Test and production targets:
- Test server: 192.168.2.55, SSH as root, code at /opt/workdock, env file .env.test
- Test health endpoint: http://192.168.2.55:8088/healthz/
- Production URL: https://workdock.bostame.de/
- .github/workflows/deploy-test.yml exists, but GitHub-hosted deploy to the LAN server is not the recommended path right now.
- .github/workflows/deploy-prod.yml exists for later production use, from main only.
- Confirm that /opt/workdock/.env.test still exists on the server.

Connectivity checks:

ssh -4 root@192.168.2.55
curl -I http://192.168.2.55:8088/healthz/
ssh root@192.168.2.55 "cd /opt/workdock && docker compose --env-file .env.test -f docker-compose.prod.yml ps"
The test box currently runs DJANGO_DEBUG=1 in .env.test because the security checks correctly reject insecure cookie settings when DEBUG=0 and the deployment is still plain HTTP behind a local test topology. This is acceptable for the test box only. Production must run with HTTPS and DEBUG=0.
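A sketch of that interaction, assuming a common Django settings pattern rather than this project's exact code.

DEBUG = False  # production posture

# With DEBUG off these should be True, which requires HTTPS end to end.
# On the plain-HTTP test box that combination fails the security checks,
# which is why the test env currently runs with DJANGO_DEBUG=1.
SESSION_COOKIE_SECURE = not DEBUG
CSRF_COOKIE_SECURE = not DEBUG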
Production stack files:
- docker-compose.prod.yml
- backend/entrypoint-web-prod.sh
- backend/entrypoint-worker-prod.sh
- deploy/Caddyfile
- scripts/deploy_stack.sh

The deployment script builds web, worker, and caddy, starts db and redis, runs bootstrap_initial_users and collectstatic, optionally runs manage.py check, restarts web, worker, and caddy, and waits until /healthz/ becomes healthy.

The preferred test-deployment path is the local helper script from a Mac or another LAN-connected workstation:
./scripts/deploy_test_from_mac.sh
This script fast-forwards develop, checks that the remote env file exists, syncs the repo to the server with rsync, runs the remote deployment, verifies the health endpoint, and prints the deployed commit hash.
The helper scripts explicitly preserve server-local env files such as .env.test and .env.prod so deployment does not wipe machine-specific secrets.
Use the production helper only from main:
git checkout main
./scripts/deploy_prod_from_mac.sh
This script fast-forwards main, checks that .env.prod exists on the target server, syncs the repo, runs the production deployment with RUN_DJANGO_CHECK=1, verifies https://workdock.bostame.de/healthz/, and prints the deployed commit hash.
Direct server-side deploy is still available if the code is already on the server:
cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
cd /opt/workdock
RUN_DJANGO_CHECK=1 DEPLOY_HEALTH_URL="https://workdock.bostame.de/healthz/" ./scripts/deploy_stack.sh .env.prod docker-compose.prod.yml
curl -I http://192.168.2.55:8088/healthz/
ssh root@192.168.2.55 "cd /opt/workdock && docker compose --env-file .env.test -f docker-compose.prod.yml ps"
Deployment updates code. It does not automatically overwrite runtime database configuration. Use explicit sync when you want local configuration compared or applied to the server.
Supported sync scopes:
- PortalAppConfig
- PortalBranding
- PortalCompanyConfig

Export locally:
docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
docker compose cp web:/tmp/portal-app-config.json /tmp/portal-app-config.json
docker compose cp web:/tmp/portal-deployment-config.json /tmp/portal-deployment-config.json
Copy the JSON files to the server host:
scp -4 /tmp/portal-app-config.json /tmp/portal-deployment-config.json root@192.168.2.55:/opt/workdock/
Because the server runs baked container images instead of a bind-mounted app tree, copy the files into the running web container before importing:
ssh -4 root@192.168.2.55 '
docker cp /opt/workdock/portal-app-config.json workdock-web-1:/tmp/portal-app-config.json &&
docker cp /opt/workdock/portal-deployment-config.json workdock-web-1:/tmp/portal-deployment-config.json
'
Dry-run the import first:
ssh -4 root@192.168.2.55 '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
'
Only apply the import after the dry run looks correct:
ssh -4 root@192.168.2.55 '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json
'
The current server is an Ubuntu CT on Proxmox running Docker inside the container. The CT required Proxmox-side configuration before Docker containers could start correctly.
features: nesting=1,keyctl=1
lxc.apparmor.profile: unconfined
lxc.mount.entry: /dev/null sys/module/apparmor/parameters/enabled none bind 0 0
Those lines belong in /etc/pve/lxc/<CTID>.conf on the Proxmox host, followed by pct restart <CTID>.
Production readiness checklist:
- DJANGO_DEBUG=0
- RUN_DJANGO_CHECK=1
- manage.py check passes
- python -c "import requests" does not emit a compatibility warning

Quick reference:

docker compose up -d --build
Start or rebuild the local stack.
docker compose restart web
docker compose restart worker
Restart app services after code or template changes.
./scripts/git_remote_target.sh status
Show the current branch, active local identity, and both remotes before pushing.
docker compose exec -T web python manage.py check
Run Django system checks.
docker compose exec -T web python manage.py test
Run the full test suite.
./scripts/deploy_test_from_mac.sh
Sync the current develop checkout to the LAN test server and deploy it.
./scripts/deploy_prod_from_mac.sh
Sync the current main checkout to the production target and deploy it with production checks enabled.
./scripts/git_remote_target.sh push-origin
./scripts/git_remote_target.sh push-tubco release/tubco-v1
Push to the intended remote explicitly instead of relying on memory.
./scripts/git_remote_target.sh set-own-identity
./scripts/git_remote_target.sh set-tubco-identity
Switch between the normal commit identity and the TUBCO customer identity when needed.
cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
Deploy when code is already present on the server.
cd /opt/workdock
RUN_DJANGO_CHECK=1 DEPLOY_HEALTH_URL="https://workdock.bostame.de/healthz/" ./scripts/deploy_stack.sh .env.prod docker-compose.prod.yml
Production deploy when code is already present on the server.
docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
Export runtime configuration from local.
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
Validate server-side config import before applying it.
make backup-create
make backup-verify BACKUP_DIR=backups/backup_YYYYmmdd_HHMMSS
Create and verify backup bundles.
If the local UI looks stale, hard-refresh with Cmd + Shift + R, or clear site data for 127.0.0.1:8088 in the browser devtools and sign in again. If that is not enough:

docker compose restart web
docker compose up -d --build
This is the right order when shared header fixes, page-local CSS fixes, or versioned static assets look correct on the server but localhost still shows the old UI.
Common fixes:
- Stale UI: restart web and hard-refresh the browser.
- Missing local mail: check the mail UI on port 8025 and the test/production mode toggle.
- requests/chardet compatibility warning: confirm chardet==5.2.0 is installed in the rebuilt image and restart web/worker.
- Container processes run as the app user.
- Keep secrets in .env, not in tracked files.