Deployment and CI/CD
Current deployment model
- one private GitHub repository
- `develop` deploys to the test server
- `main` is reserved for production deployment
- GitHub Actions uploads the repository contents to the server over SSH
- the server does not need GitHub access to deploy
This is intentional: for a private repository, a server-side `git clone` would add unnecessary credential management.
Branch strategy
- `develop`: test deployment branch
- `main`: production branch
- feature branches: normal product work
Environments
Development / test
- target server: `192.168.2.55`
- deployment path: `/opt/workdock`
- stack file: `docker-compose.prod.yml`
- env file on server: `.env.test`
- current access URL: `http://192.168.2.55:8088`
Production
- same deployment mechanism
- different server
- env file on server: `.env.prod`
- should run behind real HTTPS
- should keep `DEBUG=0`
Important design choice
The current test server is a LAN-only HTTP deployment.
Because the Django settings enforce secure-cookie checks when `DEBUG=0`, the test deployment uses:
- `DJANGO_DEBUG=1`
- `RUN_DJANGO_CHECK=0`
That is acceptable for this internal test box only.
Production must use:
- `DJANGO_DEBUG=0`
- `DJANGO_SECURE_COOKIES=1`
- `RUN_DJANGO_CHECK=1`
- HTTPS
Files used for deployment
- docker-compose.prod.yml
- scripts/deploy_stack.sh
- backend/entrypoint-web-prod.sh
- backend/entrypoint-worker-prod.sh
- deploy/Caddyfile
- .env.test.example
- .env.prod.example
- .github/workflows/deploy-test.yml
- .github/workflows/deploy-prod.yml
What deploy_stack.sh does
The deployment script:
- validates that the env file exists
- builds `web`, `worker`, and `caddy`
- starts `db` and `redis`
- initializes writable volume ownership for `/app/media`, `/app/staticfiles`, and `/app/backups`
- runs `migrate`, `bootstrap_initial_users`, and `collectstatic`
- optionally runs `manage.py check`
- starts `web`, `worker`, and `caddy`
- waits for `/healthz/`
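The sequence above can be sketched as a shell script. This is an illustrative sketch, not the real `scripts/deploy_stack.sh`: the `compose run --rm web python manage.py ...` invocation style is an assumption, and here `compose` only echoes the command it would run, so the sketch is a safe dry run.

```shell
# Illustrative sketch of scripts/deploy_stack.sh -- not the real script.
# compose() echoes instead of executing, so this dry-runs without Docker.
set -eu

ENV_FILE=".env.test"
STACK_FILE="docker-compose.prod.yml"
: > "$ENV_FILE"   # placeholder for the dry run; the real script requires it to exist

compose() { echo docker compose --env-file "$ENV_FILE" -f "$STACK_FILE" "$@"; }

PLAN="$(
  # 1. the env file must exist (server-local, never uploaded by CI)
  [ -f "$ENV_FILE" ] || { echo "missing $ENV_FILE" >&2; exit 1; }

  # 2. build app images, then start the data services
  compose build web worker caddy
  compose up -d db redis

  # (the real script also fixes ownership of /app/media,
  #  /app/staticfiles and /app/backups here)

  # 3. one-off management commands before the app starts
  compose run --rm web python manage.py migrate
  compose run --rm web python manage.py bootstrap_initial_users
  compose run --rm web python manage.py collectstatic --noinput

  # 4. optional check (disabled on the HTTP-only test box via RUN_DJANGO_CHECK=0)
  if [ "${RUN_DJANGO_CHECK:-1}" = "1" ]; then
    compose run --rm web python manage.py check
  fi

  # 5. start the app, then poll the health endpoint until it answers
  compose up -d web worker caddy
  echo curl -fsS "${DEPLOY_HEALTH_URL:-http://127.0.0.1:8088/healthz/}"
)"
printf '%s\n' "$PLAN"
```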
Proxmox / LXC requirement
This project is running in an Ubuntu CT on Proxmox, with Docker inside the CT.
For this to work, the CT needed Proxmox-side configuration in:
/etc/pve/lxc/<CTID>.conf
Required settings:
features: nesting=1,keyctl=1
lxc.apparmor.profile: unconfined
lxc.mount.entry: /dev/null sys/module/apparmor/parameters/enabled none bind 0 0
Then restart the CT:
pct restart <CTID>
Without this, Docker containers in the CT fail with:
open sysctl net.ipv4.ip_unprivileged_port_start ... permission denied
This is a Proxmox/LXC nested-Docker issue, not an application bug.
Server bootstrap
Run on the server once:
apt-get update
apt-get install -y ca-certificates curl gnupg git
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker
Server directory layout
Current test server path:
/opt/workdock
Important server-local files:
- `/opt/workdock/.env.test`
- later: `/opt/workdock/.env.prod`
These env files are intentionally not uploaded from GitHub Actions.
Test env file
Create on the server:
cp .env.test.example .env.test
Current important values for the LAN test box:
DJANGO_DEBUG=1
DJANGO_ALLOWED_HOSTS=192.168.2.55,localhost,127.0.0.1
DJANGO_CSRF_TRUSTED_ORIGINS=http://192.168.2.55:8088
DJANGO_SECURE_COOKIES=0
DJANGO_SECURE_SSL_REDIRECT=0
APP_PORT=8088
SITE_ADDRESS=:80
Generate strong values for:
- `DJANGO_SECRET_KEY`
- `POSTGRES_PASSWORD`
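One way to generate these on the server, assuming `openssl` is available (hex output avoids characters that need quoting in `.env` files):

```shell
# Generate strong random values for the server-local env file.
DJANGO_SECRET_KEY="$(openssl rand -hex 32)"    # 64 hex chars
POSTGRES_PASSWORD="$(openssl rand -hex 24)"    # 48 hex chars
echo "DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY"
echo "POSTGRES_PASSWORD=$POSTGRES_PASSWORD"
```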
Production env file
Production should use:
DJANGO_DEBUG=0
DJANGO_SECURE_COOKIES=1
DJANGO_SECURE_SSL_REDIRECT=1
And a real HTTPS hostname in:
- `DJANGO_ALLOWED_HOSTS`
- `DJANGO_CSRF_TRUSTED_ORIGINS`
- `SITE_ADDRESS`
Manual test deployment
If you need to deploy manually on the test server:
cd /opt/workdock
RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
Manual production deployment:
cd /opt/workdock
RUN_DJANGO_CHECK=1 ./scripts/deploy_stack.sh .env.prod docker-compose.prod.yml
GitHub Actions workflows
Test deployment workflow
File: `.github/workflows/deploy-test.yml`
Behavior:
- triggers on push to `develop`
- can also be run manually with `workflow_dispatch`
- checks out the repo in GitHub Actions
- uploads the working tree to the server over SSH
- runs the server deployment script
Production deployment workflow
File: `.github/workflows/deploy-prod.yml`
Behavior:
- manual only
- uploads the working tree to the production server
- runs the production deployment script
GitHub environment setup
In GitHub:
- open repository settings
- open `Environments`
- create two environments: `development` and `production`
Development environment secrets
Add:
- `TEST_DEPLOY_HOST`
- `TEST_DEPLOY_USER`
- `TEST_DEPLOY_PORT`
- `TEST_DEPLOY_PATH`
- `TEST_DEPLOY_SSH_KEY`
Current test values:
- `TEST_DEPLOY_HOST=192.168.2.55`
- `TEST_DEPLOY_USER=root`
- `TEST_DEPLOY_PORT=22`
- `TEST_DEPLOY_PATH=/opt/workdock`
- `TEST_DEPLOY_SSH_KEY=<private key that can ssh to root@192.168.2.55>`
Production environment secrets
Add:
- `PROD_DEPLOY_HOST`
- `PROD_DEPLOY_USER`
- `PROD_DEPLOY_PORT`
- `PROD_DEPLOY_PATH`
- `PROD_DEPLOY_SSH_KEY`
How the CI/CD test deploy works
Normal flow
- push code to `develop`
- GitHub Actions runs `Deploy Test`
- the workflow uploads the repository contents to `/opt/workdock`
- the server keeps its local `.env.test`
- `deploy_stack.sh` rebuilds and restarts the stack
- the workflow succeeds only after `/healthz/` is healthy
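A workflow implementing this flow would look roughly like the following. This is an illustrative sketch, not the contents of the real `.github/workflows/deploy-test.yml`; the `appleboy/scp-action` and `appleboy/ssh-action` steps are assumptions (any SSH upload mechanism works the same way), and you should pin release tags rather than `master` in practice.

```yaml
# Illustrative sketch only -- the real workflow is .github/workflows/deploy-test.yml.
name: Deploy Test
on:
  push:
    branches: [develop]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: development
    steps:
      - uses: actions/checkout@v4
      - name: Upload working tree
        uses: appleboy/scp-action@master   # pin a release tag in practice
        with:
          host: ${{ secrets.TEST_DEPLOY_HOST }}
          username: ${{ secrets.TEST_DEPLOY_USER }}
          port: ${{ secrets.TEST_DEPLOY_PORT }}
          key: ${{ secrets.TEST_DEPLOY_SSH_KEY }}
          source: "."
          target: ${{ secrets.TEST_DEPLOY_PATH }}
      - name: Run deployment script
        uses: appleboy/ssh-action@master   # pin a release tag in practice
        with:
          host: ${{ secrets.TEST_DEPLOY_HOST }}
          username: ${{ secrets.TEST_DEPLOY_USER }}
          port: ${{ secrets.TEST_DEPLOY_PORT }}
          key: ${{ secrets.TEST_DEPLOY_SSH_KEY }}
          script: |
            cd ${{ secrets.TEST_DEPLOY_PATH }}
            RUN_DJANGO_CHECK=0 DEPLOY_HEALTH_URL="http://127.0.0.1:8088/healthz/" \
              ./scripts/deploy_stack.sh .env.test docker-compose.prod.yml
```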
Manual trigger
From GitHub Actions:
- open `Deploy Test`
- click `Run workflow`
How to validate a deployment
From your machine
curl -I http://192.168.2.55:8088/healthz/
On the server
cd /opt/workdock
docker compose --env-file .env.test -f docker-compose.prod.yml ps
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 web
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 worker
docker compose --env-file .env.test -f docker-compose.prod.yml logs --tail=100 caddy
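A one-shot `curl -I` can race a stack that is still starting. A small retry helper is handy for validation; this sketch is demonstrated against a `file://` URL so it is self-contained, but on the server you would pass `http://127.0.0.1:8088/healthz/`:

```shell
# Wait up to N seconds for the health endpoint to answer with success.
wait_for_health() {
  url="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "healthy: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unhealthy after $tries tries: $url" >&2
  return 1
}

# self-contained demonstration; on the server:
#   wait_for_health http://127.0.0.1:8088/healthz/
TMP="$(mktemp)"; echo ok > "$TMP"
wait_for_health "file://$TMP"
```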
Rollback
This deployment path is source-upload based, not image-tag based.
Rollback options:
- revert the bad commit on `develop` and let GitHub Actions deploy again
- manually re-upload a previous working checkout and rerun `deploy_stack.sh`
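The revert option is plain git. A runnable demonstration in a throwaway repository (in the real repo it is just `git revert <bad-sha> && git push origin develop`, after which the `Deploy Test` workflow redeploys the reverted tree):

```shell
# Demonstration of revert-based rollback in a throwaway repo.
set -eu
REPO="$(mktemp -d)"
cd "$REPO"
git init -q -b develop          # requires git >= 2.28 for -b
git config user.email ci@example.com
git config user.name ci

echo "good" > app.conf
git add app.conf && git commit -qm "good deploy"

echo "bad" > app.conf
git add app.conf && git commit -qm "bad deploy"

# roll back: revert keeps history intact, so CI simply deploys the new tip
git revert --no-edit HEAD >/dev/null

cat app.conf   # back to "good"
```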
For production, you may later want image-tag based rollback. That is not necessary yet for the test box.
Operational notes
- server-local env files must survive deployments
- do not store `.env.test` or `.env.prod` in Git
- the test deployment is intentionally weaker than production on transport security
- production should not reuse the test env model
Current known-good state
Validated manually:
- repository pushed to private GitHub
- server bootstrap completed
- test stack deployed successfully
- health check reachable at:
http://192.168.2.55:8088/healthz/