Deployment and CI/CD
Current deployment model
- one private GitHub repository
- `develop` deploys to the test server
- `main` is reserved for production deployment
- GitHub Actions uploads the repository contents to the server over SSH
- the server does not need GitHub access to deploy
This is intentional: for a private repository, a server-side `git clone` would add unnecessary credential management (deploy keys or tokens on the server).
Branch strategy
- `develop`: test deployment branch
- `main`: production branch
- feature branches: normal product work
Environments
Development / test
- target server: `192.168.2.55`
- deployment path: `/opt/workdock`
- stack file: `docker-compose.prod.yml`
- env file on server: `.env.test`
- current access URL: `http://192.168.2.55:8088`
Production
- same deployment mechanism
- usually a different server
- env file on server: `.env.prod`
- branch: `main`
- should run behind real HTTPS
- should keep `DEBUG=0`
Important design choice
The current test server is a LAN-only HTTP deployment.
Because the Django settings enforce secure-cookie checks when DEBUG=0, the test deployment uses:
- `DJANGO_DEBUG=1`
- `RUN_DJANGO_CHECK=0`
That is acceptable for this internal test box only.
Production must use:
- `DJANGO_DEBUG=0`
- `DJANGO_SECURE_COOKIES=1`
- HTTPS
- `RUN_DJANGO_CHECK=1`
Files used for deployment
- docker-compose.prod.yml
- scripts/deploy_stack.sh
- backend/entrypoint-web-prod.sh
- backend/entrypoint-worker-prod.sh
- deploy/Caddyfile
- .env.test.example
- .env.prod.example
- .github/workflows/deploy-test.yml
- .github/workflows/deploy-prod.yml
What deploy_stack.sh does
The deployment script:
- validates the env file exists
- builds `web`, `worker`, and `caddy`
- starts `db` and `redis`
- initializes writable volume ownership for `/app/media`, `/app/staticfiles`, and `/app/backups`
- runs `migrate`, `bootstrap_initial_users`, and `collectstatic`
- optionally runs `manage.py check`
- starts `web`, `worker`, and `caddy`
- waits for `/healthz/`
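The final health-wait step can be sketched as a small retry loop. This is a hypothetical helper, not the actual contents of `deploy_stack.sh`; the retry count and delay defaults are assumptions:

```shell
# wait_for_health URL [TRIES] [DELAY]
# Polls URL with curl until it answers, or gives up after TRIES attempts.
wait_for_health() {
  local url="$1" tries="${2:-30}" delay="${3:-2}"
  local i=1
  while [ "$i" -le "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "health check failed: $url" >&2
  return 1
}

# Example: wait_for_health "http://127.0.0.1:8088/healthz/" 30 2
```

A loop like this is what lets the deploy fail loudly instead of reporting success while the stack is still coming up.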
Proxmox / LXC requirement
This project is running in an Ubuntu CT on Proxmox, with Docker inside the CT.
For this to work, the CT needed Proxmox-side configuration in:
/etc/pve/lxc/<CTID>.conf
Required settings:
features: nesting=1,keyctl=1
lxc.apparmor.profile: unconfined
lxc.mount.entry: /dev/null sys/module/apparmor/parameters/enabled none bind 0 0
Then restart the CT:
pct restart <CTID>
Without this, Docker containers in the CT fail with:
open sysctl net.ipv4.ip_unprivileged_port_start ... permission denied
This is a Proxmox/LXC nested-Docker issue, not an application bug.
Server bootstrap
Run on the server once:
apt-get update
apt-get install -y ca-certificates curl gnupg git
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker
Server directory layout
Current test server path:
/opt/workdock
Important server-local files:
- `/opt/workdock/.env.test`
- later: `/opt/workdock/.env.prod`
These env files are intentionally not uploaded from GitHub Actions.
Test env file
Create on the server:
cp .env.test.example .env.test
Current important values for the LAN test box:
DJANGO_DEBUG=1
DJANGO_ALLOWED_HOSTS=192.168.2.55,localhost,127.0.0.1
DJANGO_CSRF_TRUSTED_ORIGINS=http://192.168.2.55:8088
DJANGO_SECURE_COOKIES=0
DJANGO_SECURE_SSL_REDIRECT=0
APP_PORT=8088
SITE_ADDRESS=:80
Generate strong values for:
- `DJANGO_SECRET_KEY`
- `POSTGRES_PASSWORD`
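One hypothetical way to generate these values (any equivalent generator works; the `token_urlsafe(50)` length and 24 random bytes are assumptions, not project requirements):

```shell
# gen_secret_key: print a long URL-safe random string for DJANGO_SECRET_KEY
gen_secret_key() {
  python3 -c "import secrets; print(secrets.token_urlsafe(50))"
}

# gen_db_password: print a random base64 password for POSTGRES_PASSWORD
gen_db_password() {
  openssl rand -base64 24
}
```

Run each once and paste the output into the corresponding line of `.env.test`.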
Production env file
Production should use:
DJANGO_DEBUG=0
DJANGO_SECURE_COOKIES=1
DJANGO_SECURE_SSL_REDIRECT=1
And a real HTTPS hostname in:
- `DJANGO_ALLOWED_HOSTS`
- `DJANGO_CSRF_TRUSTED_ORIGINS`
- `SITE_ADDRESS`
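Put together, a production env file would look roughly like this. This is a sketch: the hostname is taken from the production health URL used elsewhere in this document, and `SITE_ADDRESS` assumes Caddy terminates TLS for that hostname:

```
DJANGO_DEBUG=0
DJANGO_SECURE_COOKIES=1
DJANGO_SECURE_SSL_REDIRECT=1
DJANGO_ALLOWED_HOSTS=workdock.bostame.de
DJANGO_CSRF_TRUSTED_ORIGINS=https://workdock.bostame.de
SITE_ADDRESS=workdock.bostame.de
```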
Manual test deployment
For a LAN-only test server, this is the recommended CD path.
One-command local deployment from your Mac
Use:
./scripts/deploy_test_from_mac.sh
What it does:
- requires the current branch to be `develop`
- fast-forwards from `origin/develop`
- verifies that the server env file exists before syncing
- syncs the repo to `/opt/workdock` via `rsync`
- runs the remote deployment script
- verifies the health endpoint
- prints the deployed commit and branch
Important:
- the helper preserves server-local env files: `.env.test` and `.env.prod`
- those files are not supposed to be replaced from your Mac checkout
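The preservation behavior can be reproduced with `rsync` exclude rules: with `--delete`, excluded paths are neither copied nor deleted on the destination. A hypothetical sketch (the real helper's flags may differ):

```shell
# sync_repo SRC DST: mirror SRC into DST, leaving server-local env files untouched.
# Excluded paths are skipped for transfer AND protected from --delete.
sync_repo() {
  rsync -az --delete \
    --exclude='.env.test' \
    --exclude='.env.prod' \
    --exclude='.git/' \
    "$1"/ "$2"/
}

# Example: sync_repo . root@192.168.2.55:/opt/workdock
```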
Default assumptions:
- target host: `root@192.168.2.55`
- target path: `/opt/workdock`
- env file: `.env.test`
- health URL: `http://192.168.2.55:8088/healthz/`
Optional overrides:
DEPLOY_HOST=root@192.168.2.55 \
DEPLOY_PATH=/opt/workdock \
HEALTH_URL=http://192.168.2.55:8088/healthz/ \
./scripts/deploy_test_from_mac.sh
Manual production deployment
For production, use a dedicated helper instead of the test script.
One-command production deployment from your Mac
Use:
./scripts/deploy_prod_from_mac.sh
What it does:
- requires the current branch to be `main`
- fast-forwards from `origin/main`
- verifies that the server env file exists before syncing
- syncs the repo to the production path via `rsync`
- runs the remote deployment script with `RUN_DJANGO_CHECK=1`
- verifies the production health endpoint
- prints the deployed commit and branch
Important:
- the production helper preserves server-local env files: `.env.test` and `.env.prod`
- do not use the test helper for production
Default assumptions:
- target host: `root@192.168.2.55`
- target path: `/opt/workdock`
- env file: `.env.prod`
- health URL: `https://workdock.bostame.de/healthz/`
Optional overrides:
DEPLOY_HOST=root@192.168.2.55 \
DEPLOY_PATH=/opt/workdock \
HEALTH_URL=https://workdock.bostame.de/healthz/ \
./scripts/deploy_prod_from_mac.sh
Manual server-side deploy only
If the latest code is already on the server:
cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
Manual production deployment:
cd /opt/workdock
RUN_DJANGO_CHECK=1 DEPLOY_HEALTH_URL="https://workdock.bostame.de/healthz/" ./scripts/deploy_stack.sh .env.prod docker-compose.prod.yml
Runtime config sync
Deployment updates code. It does not automatically overwrite runtime database configuration.
Use explicit sync when you want local configuration to be compared or applied to another environment.
Supported sync scopes
- `PortalAppConfig`
- `PortalBranding`
- `PortalCompanyConfig`
Step 1: export locally
docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
docker compose cp web:/tmp/portal-app-config.json /tmp/portal-app-config.json
docker compose cp web:/tmp/portal-deployment-config.json /tmp/portal-deployment-config.json
Step 2: copy JSON files to the server host
scp -4 /tmp/portal-app-config.json /tmp/portal-deployment-config.json root@192.168.2.55:/opt/workdock/
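Before copying the exports to the server, it can be worth confirming they are valid JSON; a hypothetical helper (not part of the project's scripts):

```shell
# check_config_json FILE: fail if FILE is missing or not parseable JSON.
check_config_json() {
  if [ ! -f "$1" ]; then
    echo "missing: $1" >&2
    return 1
  fi
  python3 -m json.tool "$1" > /dev/null
}

# Example:
# check_config_json /tmp/portal-app-config.json
# check_config_json /tmp/portal-deployment-config.json
```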
Step 3: copy JSON files into the running web container
The server uses baked images, not a bind-mounted app tree. Because of that, the running web container cannot automatically read arbitrary files from /opt/workdock.
Use:
ssh -4 root@192.168.2.55 '
docker cp /opt/workdock/portal-app-config.json workdock-web-1:/tmp/portal-app-config.json &&
docker cp /opt/workdock/portal-deployment-config.json workdock-web-1:/tmp/portal-deployment-config.json
'
Step 4: dry-run the import on the server
ssh -4 root@192.168.2.55 '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
'
Step 5: apply the import on the server
Only do this if the dry run looks correct.
ssh -4 root@192.168.2.55 '
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json &&
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json
'
Notes
- `PortalAppConfig` covers app order, section, visibility, and overrides.
- deployment-config sync covers branding/company text and metadata.
- uploaded branding files are intentionally excluded:
- logo
- favicon
- PDF letterhead
- use dry-run first. Treat config sync as an explicit operator action, not something hidden inside deploy.
GitHub Actions workflows
Test deployment workflow
File: `.github/workflows/deploy-test.yml`
Behavior:
- triggers on push to `develop`
- can also be run manually with `workflow_dispatch`
- checks out the repo in GitHub Actions
- uploads the working tree to the server over SSH
- runs the server deployment script
Important:
- this workflow only works if the GitHub runner can reach the server
- it is not suitable for a pure LAN-only target using a private IP like `192.168.2.55`
- for the current environment, prefer the local Mac deploy script or a self-hosted runner on the LAN
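The overall shape of such a workflow might look like this. This is a sketch, not the actual `deploy-test.yml`: the secret names match the development environment secrets documented below, but the step layout and SSH handling are assumptions:

```yaml
name: Deploy Test
on:
  push:
    branches: [develop]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: development
    steps:
      - uses: actions/checkout@v4
      - name: Upload working tree over SSH
        run: |
          # hypothetical: install the deploy key, then rsync the checkout
          install -m 700 -d ~/.ssh
          echo "${{ secrets.TEST_DEPLOY_SSH_KEY }}" > ~/.ssh/id_deploy
          chmod 600 ~/.ssh/id_deploy
          rsync -az --delete \
            --exclude='.env.test' --exclude='.env.prod' --exclude='.git/' \
            -e "ssh -i ~/.ssh/id_deploy -p ${{ secrets.TEST_DEPLOY_PORT }} -o StrictHostKeyChecking=accept-new" \
            ./ "${{ secrets.TEST_DEPLOY_USER }}@${{ secrets.TEST_DEPLOY_HOST }}:${{ secrets.TEST_DEPLOY_PATH }}/"
      - name: Run server deployment script
        run: |
          ssh -i ~/.ssh/id_deploy -p "${{ secrets.TEST_DEPLOY_PORT }}" \
            "${{ secrets.TEST_DEPLOY_USER }}@${{ secrets.TEST_DEPLOY_HOST }}" \
            "cd ${{ secrets.TEST_DEPLOY_PATH }} && ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml"
```

Again: this only succeeds if the GitHub-hosted runner can actually reach the target host.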
Production deployment workflow
File: `.github/workflows/deploy-prod.yml`
Behavior:
- manual only
- uploads the working tree to the production server
- runs the production deployment script
GitHub environment setup
In GitHub:
- open repository settings
- open `Environments`
- create `development` and `production`
Exact GitHub UI path
- Open the private repository:
https://github.com/Bostame/workdock-platform
- Click:
Settings
- In the left sidebar, open:
Environments
- Click:
New environment
- Create:
development
- Repeat and create:
production
- Open the `development` environment
- Under `Environment secrets`, click `Add environment secret`
- Add each required secret one by one
- Repeat the same pattern later for `production`
Development environment secrets
Add:
- `TEST_DEPLOY_HOST`
- `TEST_DEPLOY_USER`
- `TEST_DEPLOY_PORT`
- `TEST_DEPLOY_PATH`
- `TEST_DEPLOY_SSH_KEY`
Current test values:
- `TEST_DEPLOY_HOST=192.168.2.55`
- `TEST_DEPLOY_USER=root`
- `TEST_DEPLOY_PORT=22`
- `TEST_DEPLOY_PATH=/opt/workdock`
- `TEST_DEPLOY_SSH_KEY=<private key that can ssh to root@192.168.2.55>`
Development secret entry example
Use these exact values in the development environment:
TEST_DEPLOY_HOST
192.168.2.55
TEST_DEPLOY_USER
root
TEST_DEPLOY_PORT
22
TEST_DEPLOY_PATH
/opt/workdock
TEST_DEPLOY_SSH_KEY
<paste the full private SSH key that can log in to root@192.168.2.55>
The SSH key must include the full multi-line content, for example:
-----BEGIN OPENSSH PRIVATE KEY-----
...
-----END OPENSSH PRIVATE KEY-----
How to verify the SSH key before adding it
From your local machine:
ssh -4 root@192.168.2.55
If that works without asking for a password, the matching private key is the correct one to store in TEST_DEPLOY_SSH_KEY.
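To confirm that a specific private key file is the matching one, its public half can be re-derived with `ssh-keygen -y` and compared against the key the server authorizes. A hypothetical helper:

```shell
# key_matches PRIVATE_KEY PUBLIC_KEY: succeed if the private key derives the public key.
# Compares only the key type and base64 material, ignoring the comment field.
key_matches() {
  [ "$(ssh-keygen -y -f "$1" | awk '{print $1, $2}')" = \
    "$(awk '{print $1, $2}' "$2")" ]
}

# Example: key_matches ~/.ssh/id_ed25519 ~/.ssh/id_ed25519.pub
```

The public key it derives should appear in `/root/.ssh/authorized_keys` on the server.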
Production environment secrets
Add:
- `PROD_DEPLOY_HOST`
- `PROD_DEPLOY_USER`
- `PROD_DEPLOY_PORT`
- `PROD_DEPLOY_PATH`
- `PROD_DEPLOY_SSH_KEY`
How the CI/CD test deploy works
Normal flow
- push code to `develop`
- GitHub Actions runs `Deploy Test`
- workflow uploads repository contents to `/opt/workdock`
- server keeps its local `.env.test`
- `deploy_stack.sh` rebuilds and restarts the stack
- workflow succeeds only after `/healthz/` is healthy
Manual trigger
From GitHub Actions:
- open `Deploy Test`
- click `Run workflow`
First GitHub Actions validation
After you add the development environment secrets:
- Open:
https://github.com/Bostame/workdock-platform/actions
- Open workflow:
Deploy Test
- Click:
Run workflow
- Select branch:
develop
- Run it
- Wait until both steps complete:
- upload bundle
- deploy over SSH
- Verify:
http://192.168.2.55:8088/healthz/
- Then open the app home page in the browser
What success looks like
- workflow status is green in GitHub Actions
- `Deploy Test` job finishes without SSH or health-check errors
- `/healthz/` returns `200 OK`
- the containers on the test server remain up
If the workflow fails
Check in this order:
- wrong or incomplete `TEST_DEPLOY_SSH_KEY`
- wrong `TEST_DEPLOY_USER`
- wrong `TEST_DEPLOY_PATH`
- changed server host key
- server disk-space or Docker runtime issue
How to validate a deployment
From your machine
curl -I http://192.168.2.55:8088/healthz/
On the server
cd /opt/workdock
docker compose --env-file .env.test -f docker-compose.prod.yml ps
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 web
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 worker
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 caddy
Rollback
This deployment path is source-upload based, not image-tag based.
Rollback options:
- revert the bad commit on `develop` and let GitHub Actions deploy again
- manually re-upload a previous working checkout and rerun `deploy_stack.sh`
For production, you may later want image-tag based rollback. That is not necessary yet for the test box.
Operational notes
- server-local env files must survive deployments
- do not store `.env.test` or `.env.prod` in Git
- test deployment is intentionally weaker than production on transport security
- production should not reuse the test env model
Command reference
Use this as the short operational index.
Local development
docker compose up -d --build
Start or rebuild the local stack.
docker compose restart web
docker compose restart worker
Restart the app services after code/template changes.
Validation
docker compose exec -T web python manage.py check
Run Django system checks.
docker compose exec -T web python manage.py test
Run the full test suite.
Local test deployment
./scripts/deploy_test_from_mac.sh
Sync the current develop checkout to the LAN test server and deploy it.
Direct server deployment
cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
Deploy when code is already present on the server.
Config export/import
docker compose exec -T web python manage.py export_portal_app_config --output /tmp/portal-app-config.json
docker compose exec -T web python manage.py export_portal_deployment_config --output /tmp/portal-deployment-config.json
Export runtime configuration from local.
docker exec workdock-web-1 python manage.py import_portal_app_config /tmp/portal-app-config.json --dry-run
docker exec workdock-web-1 python manage.py import_portal_deployment_config /tmp/portal-deployment-config.json --dry-run
Validate server-side config import before applying it.
Backup
make backup-create
make backup-verify BACKUP_DIR=backups/backup_YYYYmmdd_HHMMSS
Create and verify backup bundles.
Current known-good state
Validated manually:
- repository pushed to private GitHub
- server bootstrap completed
- test stack deployed successfully
- health check reachable at:
http://192.168.2.55:8088/healthz/