{% extends 'workflows/base_shell.html' %} {% load static %} {% block title %}Developer Handbook{% endblock %} {% block extra_css %} {% endblock %} {% block shell_body %} {% include 'workflows/includes/app_header.html' with header_show_home=1 header_inside_shell=1 %}
Engineering runbook for development, deployment, maintenance, and extension of the current company portal deployment.
This handbook is for developers and maintainers. It documents the actual engineering workflow of the standalone product repository.
Repository layout (workdock-platform is the current local path; the legacy compose project name may still be retained for runtime continuity):

- /backend/config/: Django settings, WSGI, URL config
- /backend/workflows/: application logic, views, models, tasks, templates, static assets
- /backend/workflows/templates/workflows/base_shell.html: standard page shell for new staff-facing pages. Extend base_shell.html; do not rebuild topbar/frame logic in page-local templates.
- /backend/workflows/roles.py: centralized role names, capability matrix, and template permission helpers
- /backend/media/templates/: PDF HTML templates and letterhead source files
- /backend/media/pdfs/: generated PDF outputs on host volume
- /backend/locale/: translation catalogs
- /docker-compose.yml: local runtime orchestration
- /Makefile: repeatable translation commands
- /.github/workflows/i18n.yml: translation compile validation in CI

Quickstart:

cd /path/to/workdock-platform
docker compose up -d --build
Local endpoints and seeded test accounts:

- App: http://127.0.0.1:8088/
- Mail UI: http://127.0.0.1:8025/
- Health check: http://127.0.0.1:8088/healthz/
- admin_test / admin12345
- user_test / user12345

Lifecycle commands:

docker compose up -d --build
docker compose restart web
docker compose restart worker
docker compose logs --no-color --tail=120 web
docker compose logs --no-color --tail=120 worker
docker compose down
docker compose down -v
docker compose up -d --build
docker compose exec -T web python manage.py makemigrations
docker compose exec -T web python manage.py migrate
docker compose exec -T web python manage.py check
Web-container startup behavior is defined in entrypoint-web.sh.

Roles and capabilities:

- Roles: Platform Owner, Super Admin, Admin, IT Staff, Staff. Platform Owner is the product-level role; company roles remain Super Admin, Admin, IT Staff, and Staff.
- Role groups are created by a post_migrate hook in workflows.signals.
- The capability matrix lives in workflows.roles.CAPABILITIES.
- Use _require_capability(...) in views instead of flat is_staff checks; template role context comes from workflows.context_processors.role_context.
- User management lives at /admin-tools/users/ and is the preferred path for normal role assignment, account activation, invitation mail dispatch, password-reset mail dispatch, and controlled user deletion.
- Users with is_staff=True but no explicit role group currently fall back to the Admin capability set; superuser accounts resolve to Super Admin.
- To add a new capability: define it in roles.py, gate the view, and hide the UI affordance when the capability is absent (see the sketch after this list).
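The handbook names _require_capability(...) and workflows.roles.CAPABILITIES but does not show their exact shapes, so the following is a minimal sketch of the intended gating pattern. The capability names, the matrix contents, and the helper's signature are assumptions for illustration, not code from the repository.

from django.core.exceptions import PermissionDenied
from django.http import HttpResponse

# Hypothetical shape of the capability matrix in workflows/roles.py;
# the real capability names are not listed in this handbook.
CAPABILITIES = {
    "Super Admin": {"manage_users", "view_audit_log", "manage_backups"},
    "Admin": {"manage_users", "view_audit_log"},
    "IT Staff": {"view_audit_log"},
    "Staff": set(),
}

def _require_capability(user, capability):
    # Assumed contract: superusers pass (they resolve to Super Admin);
    # everyone else needs a role group whose capability set grants access.
    if user.is_superuser:
        return
    role_names = set(user.groups.values_list("name", flat=True))
    if not any(capability in CAPABILITIES.get(role, set()) for role in role_names):
        raise PermissionDenied(f"missing capability: {capability}")

def audit_log_view(request):
    _require_capability(request.user, "view_audit_log")  # not a flat is_staff check
    return HttpResponse("audit log page")

The point of the pattern is that views name a capability rather than a role, so the role-to-capability mapping can change in one place (roles.py) without touching view code.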
Translation workflow:

make i18n-update-en
make i18n-compile
Equivalent raw commands:
docker compose exec -T web django-admin makemessages -l en
docker compose exec -T web django-admin compilemessages
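makemessages only extracts strings that are explicitly marked for translation. The following is a generic Django example of the two standard markers (not code copied from this repository):

from django.utils.translation import gettext as _
from django.utils.translation import gettext_lazy

# Lazy variant: safe at import time (module-level constants, model fields);
# the string is resolved per-request in the active language.
WELCOME_SUBJECT = gettext_lazy("Welcome to the portal")

def greeting(name):
    # Runtime variant inside functions and views.
    return _("Hello, %(name)s") % {"name": name}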
gettext is installed in the Docker image, so catalog extraction works inside the container, and compiled catalogs are produced by the tooling above; no manual .mo compilation is required anymore.

Notifications and email:

- NotificationTemplate, NotificationRule, and the welcome-email settings UI carry explicit DE/EN subject/body fields.
- Users carry preferred_language so workflow emails can render in the submitter's active UI language, with German fallback.
- preferred_language is normalized in the model's save() and also has a DB default of de, so alternate creation paths cannot insert null values.
- Notification tasks live in backend/workflows/tasks.py and render emails in the recipient's preferred_language, with German fallback. Default template texts live in DEFAULT_NOTIFICATION_TEMPLATES in tasks.py; admin-editable overrides live in NotificationTemplate and NotificationRule.
- Workflow business logic lives in backend/workflows/services.py.

PDF generation:

- Rendering uses xhtml2pdf.
- PDF HTML templates and letterhead ship in backend/media/templates/ but can now be replaced from Admin Apps → Branding.
- Template files: onboarding_template.html, offboarding_template.html, onboarding_intro_template.html, onboarding_intro_session_pdf.html.

Platform configuration:

- Backup target: configured under Integrationen → Backup-Ziel (Integrations → Backup Target).
- Branding: backed by the PortalBranding model and edited on the Branding page, so previously hard-coded values such as the @tub.co mail domain no longer require code changes.
- App registry: workflows/app_registry.py with the PortalAppConfig model and role wiring in roles.py; managed at /admin-tools/apps/ for Platform Owner.
- Trial mode: PortalTrialConfig, managed at /admin-tools/trial/ for Platform Owner. Expiry is enforced by workflows.middleware.TrialModeMiddleware, so it is handled centrally instead of per-view (a middleware sketch follows the cleanup command below).

Clean up an expired trial workspace:

docker compose exec -T web python manage.py cleanup_expired_trial_workspace --yes-delete
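The real implementation is workflows.middleware.TrialModeMiddleware; this sketch only illustrates the documented idea of checking expiry once per request instead of in every view. The accessor, the expires_at field, and the redirect target are assumptions.

from django.shortcuts import redirect
from django.utils import timezone

def _get_trial_config():
    # Hypothetical accessor; the real model is PortalTrialConfig and
    # its field names are assumptions here.
    from workflows.models import PortalTrialConfig  # assumed import path
    return PortalTrialConfig.objects.first()

class TrialModeMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        config = _get_trial_config()
        if config and config.expires_at and config.expires_at < timezone.now():
            # Expiry is enforced here, centrally, rather than per-view.
            if not request.path.startswith("/admin-tools/trial/"):
                return redirect("/admin-tools/trial/")  # assumed landing page
        return self.get_response(request)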
Admin tools and data models:

- Form configuration: FormFieldConfig + FormOption.
- Intro checklist: IntroChecklistItem.
- Audit trail: AdminAuditLog, browsable at /admin-tools/audit-log/ (a hypothetical logging sketch follows this list).
- Backups UI: /admin-tools/backups/ for create, verify, and delete actions. Keep real restore CLI-only.
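The handbook names AdminAuditLog but not its fields, so this is a purely hypothetical sketch of how an admin action might be recorded; the import path and the actor/action/detail fields are assumptions to be checked against the real model.

from workflows.models import AdminAuditLog  # assumed import path

def log_admin_action(request, action, detail=""):
    # Field names are assumptions; verify against the real model
    # before reusing this helper.
    AdminAuditLog.objects.create(
        actor=request.user,
        action=action,    # e.g. "user.role_changed" (hypothetical taxonomy)
        detail=detail,
    )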
Checks and tests:

docker compose exec -T web python manage.py check
docker compose exec -T web python manage.py test
docker compose exec -T web python manage.py run_staging_e2e_check
Development conventions:

- Run manage.py check after model/view/template changes.
- Extend base_shell.html and keep header/frame logic out of page-local templates.
- Workflow requests move through submitted → processing → completed/failed.
- To retry a failed request, reset its status to submitted and enqueue the appropriate Celery task again (sketched below).
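A minimal sketch of that retry procedure, assuming a WorkflowRequest model and a process_workflow_request task (both names are assumptions; the real tasks live in backend/workflows/tasks.py):

from workflows.models import WorkflowRequest          # assumed model name
from workflows.tasks import process_workflow_request  # assumed task name

def retry_failed_request(request_id):
    req = WorkflowRequest.objects.get(pk=request_id)
    if req.status == "failed":
        req.status = "submitted"                      # reset per the runbook
        req.save(update_fields=["status"])
        process_workflow_request.delay(req.pk)        # re-enqueue the Celery task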
Backup commands:

make backup-create
make backup-verify BACKUP_DIR=backups/backup_YYYYmmdd_HHMMSS
Backup behavior:

- Backups are written to backend/backups/ and ignored by git.
- Each backup includes a media/ archive, metadata, and SHA256 checksums (verification is sketched after the restore command below).
- The remote backup target is configured under Integrationen → Backup-Ziel (Integrations → Backup Target): nextcloud is implemented; s3 and nfs are config-ready but not yet implemented.

Verify the latest backup:

docker compose exec -T web python manage.py verify_latest_backup --create-if-missing
./scripts/backup_restore.sh --yes-restore backend/backups/backup_YYYYmmdd_HHMMSS
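For intuition, checksum verification boils down to the following; use make backup-verify or verify_latest_backup in practice. The checksum file name and sha256sum-style line format are assumptions.

import hashlib
from pathlib import Path

def verify_backup_checksums(backup_dir: str) -> None:
    # Assumed "digest  filename" lines in an assumed SHA256SUMS file.
    checksum_file = Path(backup_dir) / "SHA256SUMS"
    for line in checksum_file.read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        name = name.strip(" *")
        actual = hashlib.sha256((Path(backup_dir) / name).read_bytes()).hexdigest()
        if actual != expected:
            raise ValueError(f"checksum mismatch: {name}")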
Deployment host configuration:

- APP_DOMAIN: canonical hostname without scheme
- APP_BASE_URL: canonical external URL including scheme
- DJANGO_ALLOWED_HOSTS: explicit host/IP allow-list
- DJANGO_CSRF_TRUSTED_ORIGINS: explicit origin allow-list with scheme

APP_DOMAIN and the hostname from APP_BASE_URL are merged into the effective allowed-host configuration automatically, and APP_BASE_URL is also appended to the trusted CSRF origins (illustrated in the settings sketch after this section). Treat APP_DOMAIN and APP_BASE_URL as the primary deployment-facing values instead of repeatedly editing long host/origin strings; additional hosts still go in DJANGO_ALLOWED_HOSTS and, if needed, in DJANGO_CSRF_TRUSTED_ORIGINS. A host overview page lives at /admin-tools/deployment-hosts/. An Invalid HTTP_HOST header failure happens before normal page routing, so a broken hostname cannot render a custom error page on that same broken host; use a working host or IP to access the runbook and fix the env file.

CI/CD:

- Branches: develop for the test deployment, main reserved for production.
- Test deployment workflow: .github/workflows/deploy-test.yml; production: .github/workflows/deploy-prod.yml.
- GitHub-hosted runners cannot reach the LAN address 192.168.2.55, so for the current local test server the correct CD path is manual deployment from a LAN machine or a self-hosted runner inside the same network.
- GitHub environments: development and production.
- Environment secrets: TEST_DEPLOY_HOST, TEST_DEPLOY_USER, TEST_DEPLOY_PORT, TEST_DEPLOY_PATH, TEST_DEPLOY_SSH_KEY for development; PROD_DEPLOY_HOST, PROD_DEPLOY_USER, PROD_DEPLOY_PORT, PROD_DEPLOY_PATH, PROD_DEPLOY_SSH_KEY for production.

To configure the environments: open Settings, then Environments in the left sidebar; create development and production; open development and, under Environment secrets, add the deployment secrets one by one; repeat for production.

Current test server values (host 192.168.2.55, user root, path /opt/workdock, base URL http://192.168.2.88:8088 is wrong; the base URL is http://192.168.2.55:8088):

TEST_DEPLOY_HOST=192.168.2.55
TEST_DEPLOY_USER=root
TEST_DEPLOY_PORT=22
TEST_DEPLOY_PATH=/opt/workdock
TEST_DEPLOY_SSH_KEY=<full private key content>

Run the Deploy Test workflow on branch develop, then confirm http://192.168.2.55:8088/healthz/ returns HTTP 200.

The test box intentionally runs DJANGO_DEBUG=1 in .env.test because the security checks correctly reject insecure cookie settings when DEBUG=0 and the deployment is still plain HTTP. This is acceptable for the internal test box only; production must run with HTTPS and DEBUG=0.
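A sketch of the documented host merging, illustrative rather than the repository's exact settings code; the environment variable names come from this runbook:

import os
from urllib.parse import urlparse

ALLOWED_HOSTS = [h for h in os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(",") if h]
CSRF_TRUSTED_ORIGINS = [o for o in os.environ.get("DJANGO_CSRF_TRUSTED_ORIGINS", "").split(",") if o]

app_domain = os.environ.get("APP_DOMAIN", "")
app_base_url = os.environ.get("APP_BASE_URL", "")

if app_domain and app_domain not in ALLOWED_HOSTS:
    ALLOWED_HOSTS.append(app_domain)                   # APP_DOMAIN merged in
if app_base_url:
    base_host = urlparse(app_base_url).hostname
    if base_host and base_host not in ALLOWED_HOSTS:
        ALLOWED_HOSTS.append(base_host)                # hostname from APP_BASE_URL
    if app_base_url not in CSRF_TRUSTED_ORIGINS:
        CSRF_TRUSTED_ORIGINS.append(app_base_url)      # appended to CSRF origins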
Production stack files:

- docker-compose.prod.yml
- backend/entrypoint-web-prod.sh
- backend/entrypoint-worker-prod.sh
- deploy/Caddyfile
- scripts/deploy_stack.sh

The production compose file runs web, worker, and caddy alongside db and redis. The web entrypoint runs bootstrap_initial_users, collectstatic, and manage.py check; the deploy script brings up web, worker, and caddy, then waits until /healthz/ becomes healthy.

The preferred current test-deployment path is the local helper script from a Mac or another LAN-connected workstation:
./scripts/deploy_test_from_mac.sh
This script fast-forwards develop, checks that the remote env file exists, syncs the repo to the server with rsync, runs the remote deployment, verifies the health endpoint, and prints the deployed commit hash.
The script explicitly preserves server-local env files such as .env.test and .env.prod so deployment does not wipe machine-specific secrets.
Direct server-side deploy is still available if the code is already on the server:
cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
curl -I http://192.168.2.55:8088/healthz/
ssh root@192.168.2.55 "cd /opt/workdock && docker compose --env-file .env.test -f docker-compose.prod.yml ps"
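The handbook does not show the /healthz/ implementation; a minimal Django health endpoint of the kind these checks hit typically looks like this sketch (view name and response bodies assumed):

from django.db import connection
from django.http import HttpResponse, HttpResponseServerError

def healthz(request):
    # Confirm the app can reach the database before reporting healthy.
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
    except Exception:
        return HttpResponseServerError("db unavailable")
    return HttpResponse("ok")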
The current server is an Ubuntu CT on Proxmox running Docker inside the container. The CT required Proxmox-side configuration before Docker containers could start correctly.
features: nesting=1,keyctl=1
lxc.apparmor.profile: unconfined
lxc.mount.entry: /dev/null sys/module/apparmor/parameters/enabled none bind 0 0
Those lines belong in /etc/pve/lxc/<CTID>.conf on the Proxmox host, followed by pct restart <CTID>.
Troubleshooting:

- With DJANGO_DEBUG=0, RUN_DJANGO_CHECK=1 makes deployment run manage.py check, which fails on the plain-HTTP test box because of the secure-cookie checks; that is why the test deploy uses RUN_DJANGO_CHECK=0, while HTTPS production should keep the check enabled.
- Confirm python -c "import requests" does not emit a compatibility warning; if it does, verify chardet==5.2.0 is installed in the rebuilt image and restart web/worker.
- After template or static changes, restart web and hard-refresh the browser.
- For mail problems, check the mail UI on port 8025 and the test/production mode toggle.
- The containers run as the app user.
- Keep secrets in .env, not in tracked files.