feat: Working minimal, reproducible infrastructure.

pull/20/head
Sofus Albert Høgsbro Rose 2023-08-13 04:49:19 +02:00
commit b470f36da0
Signed by: so-rose
GPG Key ID: AD901CB0F3701434
36 changed files with 1715 additions and 0 deletions

.dockerignore 100644

@@ -0,0 +1,35 @@
# You may want to customise this file depending on your Operating System
# and the editor that you use.
#
# We recommend that you use a Global Gitignore for files that are not related
# to the project. (https://help.github.com/articles/ignoring-files/#create-a-global-gitignore)
# OS
#
# Ref: https://github.com/github/gitignore/blob/master/Global/macOS.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/Windows.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/Linux.gitignore
.DS_STORE
Thumbs.db
# Editors
#
# Ref: https://github.com/github/gitignore/blob/master/Global
# Ref: https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/VisualStudioCode.gitignore
.idea
.chrome
/*.log
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
# Python
**/__pycache__
.venv
# Local Developer Notes
dev

.editorconfig 100644

@@ -0,0 +1,22 @@
# EditorConfig is awesome: https://EditorConfig.org
root = true
# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
# Python
[*.py]
indent_style = tab
indent_size = 2
# TOML
[*.toml]
indent_style = tab
indent_size = 2
# YAML
[*.{yml,yaml}]
indent_style = space
indent_size = 2

.gitignore vendored 100644

@@ -0,0 +1,40 @@
# You may want to customise this file depending on your Operating System
# and the editor that you use.
#
# We recommend that you use a Global Gitignore for files that are not related
# to the project. (https://help.github.com/articles/ignoring-files/#create-a-global-gitignore)
# OS
#
# Ref: https://github.com/github/gitignore/blob/master/Global/macOS.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/Windows.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/Linux.gitignore
.DS_STORE
Thumbs.db
# Editors
#
# Ref: https://github.com/github/gitignore/blob/master/Global
# Ref: https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/VisualStudioCode.gitignore
.idea
.chrome
/*.log
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
**/neovide_backtraces.log
# Python
__pycache__/
*.py[cod]
*$py.class
.venv
.cache-trivy/
.hypothesis/
# Local Developer Notes
dev

@@ -0,0 +1,7 @@
repos:
- repo: https://github.com/compilerla/conventional-pre-commit
rev: v2.3.0
hooks:
- id: conventional-pre-commit
stages: [commit-msg]
args: [feat, fix, ci, chore] # list of Conventional Commits types to allow

DEPLOYING.md 100644

@@ -0,0 +1,67 @@
# Prerequisites
## Wireguard Key Generation
*TODO: Automate?*
Generate WireGuard keys for all hosts:
```bash
wg genkey
pass insert path/to/private
pass path/to/private | wg pubkey
pass insert path/to/public
```
Save each in `password-store` under `<host>_<private|public>_key`.
Then, generate a pre-shared key for each peer pair:
```bash
wg genpsk > psk_peer_peer
```
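The *TODO: Automate?* above could be addressed with a small helper. A minimal sketch, assuming the `password-store` layout used by `inventory.yml` (`work/dtu/python-support/wg/<host>_<private|public>_key`) and that piping into `pass insert -m` is acceptable for non-interactive input:

```shell
#!/bin/bash
# Sketch only: generate and store per-host WireGuard keypairs in pass(1).
# PASS_ROOT is an assumption matching the lookups in inventory.yml.
PASS_ROOT="work/dtu/python-support/wg"

key_path() {
	# usage: key_path <host> <private|public>
	echo "$PASS_ROOT/${1}_${2}_key"
}

gen_host_keys() {
	# Generate a keypair for one host and store both halves in pass.
	wg genkey | pass insert -m "$(key_path "$1" private)"
	pass "$(key_path "$1" private)" | wg pubkey | pass insert -m "$(key_path "$1" public)"
}
```

Used as e.g. `gen_host_keys raspberry; gen_host_keys blueberry`, with PSKs still generated per peer pair as above.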
# Persistence
This deployment has the following requirements in terms of persistence:
## auth
`authentik-postgres`:
1. **Low-Latency FS**: Storage for `postgres` database.
2. **FS**: Storage for `postgres` backups.
`authentik-redis`:
1. **FS** (*non-critical*): Storage for RDB + AOF Redis persistence.
## chat
`zulip-postgres`:
1. **Low-Latency FS**: Storage for `postgres` database.
2. **FS**: Storage for `postgres` backups.
`zulip-redis`:
1. **FS** (*non-critical*): Storage for RDB + AOF Redis persistence.
`zulip`:
1. **FS/S3**: Storage for file uploads.
## git
`gitea`:
1. **FS/S3**: Attachments, lfs, avatars, repo-avatars, repo-archive, packages, actions_log, actions_artifact
2. **FS**: Repository Storage.
3. **Low-Latency FS**: Postgres Storage.
4. **Low-Latency FS**: Indexer (meilisearch) storage.
5. **FS**: Storage for `SQLite` backups.
`gitea-redis`:
1. **FS** (*non-critical*): Storage for RDB + AOF Redis persistence.
## mesh
`traefik`:
1. **FS** (*sensitive*): Storage for SSL Certificates.
## updater
`diun`:
1. **Low-Latency FS** (*non-critical*): Cache for Previous Image Updates.
## uptime
`uptime-kuma`:
1. **Low-Latency FS**: Storage for SQLite DB.
- **NOTE: We might be able to remove this by configuring it on startup.**

README.md 100644

@@ -0,0 +1,60 @@
# Complete Infrastructure for DTU Python Support
**Very heavily WIP**
This project describes and implements the complete infrastructure for DTU's Python Support group.
The repository provides the following user-facing services:
- timesigned.com: Modern, multilingual guide to using Python at DTU.
  - SSG with [mdbook](https://rust-lang.github.io/mdBook/) with plugins.
- chat.timesigned.com: Modern asynchronous communication and support channel for everybody using Python at DTU.
  - Instance of [Zulip](https://zulip.com/).
- git.timesigned.com: Lightweight collaborative development for teams.
  - Instance of [Forgejo](https://forgejo.org/), itself a soft-fork of [Gitea](https://about.gitea.com/).
- auth.timesigned.com: Identity provider allowing seamless, secure access to key services with their DTU account.
  - Instance of [Authentik](https://goauthentik.io/).
- uptime.timesigned.com: Black-box monitoring with notifications.
  - Instance of [Uptime Kuma](https://github.com/louislam/uptime-kuma).
# References
## Wireguard / systemd-networkd
- `systemd-networkd` Network: <https://www.freedesktop.org/software/systemd/man/systemd.network.html>
- `systemd-networkd` NetDev: <https://man.archlinux.org/man/systemd.netdev.5>
- Setup Inspiration: <https://elou.world/en/tutorial/wireguard>
- Wireguard w/`systemd-networkd`: <https://wiki.archlinux.org/title/WireGuard#systemd-networkd>
- Network Test w/`iperf`: <https://www.redhat.com/sysadmin/network-testing-iperf3>
## Ansible
- DigitalOcean `droplet`: <https://docs.ansible.com/ansible/latest/collections/community/digitalocean/digital_ocean_droplet_module.html>
- CloudFlare `dns`: <https://docs.ansible.com/ansible/latest/collections/community/general/cloudflare_dns_module.html>
- `template`: <https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html>
- `password-store`: <https://docs.ansible.com/ansible/latest/collections/community/general/passwordstore_lookup.html>
- `set-fact`: <https://docs.ansible.com/ansible/latest/collections/ansible/builtin/set_fact_module.html>
- `file`: <https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_module.html>
### Docker Ansible
- Index: <https://docs.ansible.com/ansible/latest/collections/community/docker/index.html>
- Docker `swarm` Module: <https://docs.ansible.com/ansible/latest/collections/community/docker/docker_swarm_module.html>
- Docker `network` Module: <https://docs.ansible.com/ansible/latest/collections/community/docker/docker_network_module.html>
- Docker `prune` Module: <https://docs.ansible.com/ansible/latest/collections/community/docker/docker_prune_module.html>
- Docker `volume` Module: <https://docs.ansible.com/ansible/latest/collections/community/docker/docker_volume_module.html>
## rclone
- Docker Plugin Docs: <https://rclone.org/docker/>
- `rclone` mount: <https://rclone.org/commands/rclone_mount/>
- Docker Serve Docs: <https://rclone.org/commands/rclone_serve_docker/#options>
- S3 Backend: <https://rclone.org/s3/>
- Crypt Meta-Backend: <https://rclone.org/crypt/>
## Swarm Deployment
- The Funky Penguin: <https://geek-cookbook.funkypenguin.co.nz/docker-swarm>
- Traefik Certificate Auto-Renewal: <https://doc.traefik.io/traefik/https/acme/#automatic-renewals>
- Traefik Service: <https://doc.traefik.io/traefik/routing/services/#configuring-http-services>

TODO.md 100644

@@ -0,0 +1,149 @@
# Ansible / Dev TODO
Cluster/Ansible Setup
- [x] Setup Playbook
- [x] Root as local var: `work/dtu/python-support/*`
- [x] Get 2 DO Droplets
- [x] Provision DNS
- [ ] Key Fingerprint as local var
- [x] Setup Wireguard wg0 between DO Droplets
- [ ] Setup unattended-upgrades
Swarm
- [x] Install Docker
- [x] Check Swarm ports on wg0: https://docs.docker.com/engine/swarm/swarm-tutorial/
- [x] Init Swarm manager & worker
- [x] Install rclone volume plugin: https://rclone.org/docker/
- [ ] Label big one as 'storage'
Stack: cleanup
- [x] Security Audit
- [x] **Deploy Stack**
Stack: mesh
- [x] Install Configs
- [x] **Deploy Stack**
- [x] rclone `acme.json` to R2 w/crypt
- [ ] Security Audit
Stack: site-support
- [x] Generate Configs
- [x] Install Configs
- [x] **Deploy Stack**
- [ ] Security Audit
Stack: updater
- [ ] config: main
- [ ] config: cleanup
- [ ] config: mesh
- [ ] config: site-support
- [ ] Install Configs
- [ ] **Deploy Stack**
- [ ] Security Audit
Stack: auth
- [ ] Write Stack
- [ ] storage: authentik-postgres
- [ ] storage: authentik-redis
- [ ] *Test Deploy*
- [ ] configs: Blueprints (export from prototyping)
- [ ] Install Configs
- [ ] role: API Setup of Things
- [ ] **Deploy Stack**
- [ ] updater: Integrate update-check
- [ ] Security Audit
Stack: s3
- [ ] Write Stack
- https://geek-cookbook.funkypenguin.co.nz/recipes/minio/
- Restrict to 'storage' label.
- [ ] ...?
- [ ] Install Configs
- [ ] Install Secrets
- [ ] storage: minio
- [ ] *Test Deploy*
- [ ] role: API Setup of Things
- [ ] **Deploy Stack**
- [ ] auth: Integrate OIDC
- https://min.io/docs/minio/container/operations/external-iam.html
- https://goauthentik.io/integrations/services/minio/
- [ ] updater: integrate
- [ ] Security Audit
Stack: chat
- [ ] Write Stack
- https://geek-cookbook.funkypenguin.co.nz/recipes/minio/
- Restrict to 'storage' label.
- [ ] ...?
- [ ] Install Configs
- [ ] Install Secrets
- [ ] storage: zulip-postgres
- [ ] storage: zulip-rabbitmq
- [ ] storage: zulip-redis
- [ ] s3: zulip
- [ ] *Test Deploy*
- [ ] auth: Integrate OIDC
- https://zulip.readthedocs.io/en/latest/production/authentication-methods.html#openid-connect
- Backup SAML: https://goauthentik.io/integrations/services/zulip/
- [ ] role: API Setup of Things
- [ ] **Deploy Stack**
- [ ] updater: Integrate
- [ ] Security Audit
Stack: git
- [ ] Install Configs
- [ ] Install Secrets
- [ ] *Test Deploy*
- [ ] storage: gitea-redis
- [ ] storage: gitea-postgres
- [ ] storage: gitea-meilisearch
- https://www.meilisearch.com/docs/learn/cookbooks/docker
- [ ] s3: gitea
- [ ] s3 via rclone: gitea (repositories)
- [ ] role: API Setup of Things
- [ ] **Deploy Stack**
- [ ] Configure gitea-actions w/auto-setup
- [ ] manual: Migrate docker-mdbook, site-support.
Bonus:
- Play with `uptime`.
- Backups!
# Playbook Creation Notes
- [x] mesh should use a non-`local` driver.
- [ ] Implement rolling updates to services within stacks, whose configs have changed.
- Note `rolling_updates` in the `docker_config` ansible module.
- With a little information-gathering, I'm certain we can prevent actually stopping stacks on deploy and instead only do the secret rotation as described in the Docker documentation: https://docs.docker.com/engine/swarm/secrets/#example-rotate-a-secret
- NOTE that the rclone volume stuff is always gonna need manual stop/start. Is jank. Such is the life.
- [ ] Automatic R2 Bucket Creation
- [ ] Only do the delays when we actually need to stop stacks / unmount volumes
- [ ] Encrypted use of R2 bucket.
- https://rclone.org/crypt/
- [ ] Templated security.txt in site-support
- [ ] Templated limits to not kill the demo hosts in ex. site-support :)
- [ ] Please, please, a nice README.md in site-support?
- [ ] Move DNS stuff out to the stacks. Trust me!
- [ ] Invest in some delegation to roles. These playbooks be gettin messy.
- [ ] Figure out a way to deal with concurrent `acme.json` in Traefik. For now I've set it to one replica and `vfs_cache_mode=full` (I think `none` may be wonky with this particular need of Traefik?)
- Needs more testing!
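For reference, the secret-rotation flow from the linked Docker documentation looks roughly like this (secret and service names are hypothetical):

```shell
# Sketch: rotate a secret without stopping the stack
# (adapted from the Docker swarm secrets docs; names are placeholders).
printf '%s' "new-secret-value" | docker secret create db_password.v2 -
docker service update \
    --secret-rm db_password.v1 \
    --secret-add source=db_password.v2,target=db_password \
    my_stack_my_service
docker secret rm db_password.v1
```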

inventory.yml 100644

@@ -0,0 +1,65 @@
####################
# - Hosts - by Purpose
####################
service:
hosts:
raspberry.node.timesigned.com:
vars:
ansible_user: root
storage:
hosts:
blueberry.node.timesigned.com:
vars:
ansible_user: root
####################
# - Hosts - by Swarm Role
####################
leader:
## Only ONE host may be the leader.
hosts:
raspberry.node.timesigned.com:
vars:
ansible_user: root
manager:
hosts:
raspberry.node.timesigned.com:
vars:
ansible_user: root
worker:
hosts:
blueberry.node.timesigned.com:
vars:
ansible_user: root
swarm:
hosts:
raspberry.node.timesigned.com:
blueberry.node.timesigned.com:
vars:
ansible_user: root
####################
# - Hosts - by L3 Network
####################
wg0:
hosts:
raspberry.node.timesigned.com:
wg0_ip: "10.9.8.1"
wg_private_key: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/wg/raspberry_private_key') }}"
wg_public_key: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/wg/raspberry_public_key') }}"
wg_psk_blueberry.node.timesigned.com: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/wg/psk_raspberry-blueberry') }}"
blueberry.node.timesigned.com:
wg0_ip: "10.9.8.2"
wg_private_key: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/wg/blueberry_private_key') }}"
wg_public_key: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/wg/blueberry_public_key') }}"
wg_psk_raspberry.node.timesigned.com: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/wg/psk_raspberry-blueberry') }}"
vars:
ansible_user: root

@@ -0,0 +1,136 @@
- hosts: localhost
vars:
dns_root: "timesigned.com"
node_primary: "raspberry.node.timesigned.com"
digitalocean_droplet_token: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/digitalocean-droplet-token') }}"
cloudflare_email: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/cloudflare-email') }}"
cloudflare_dns_token: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/cloudflare-dns-token') }}"
droplet_service_image: "debian-12-x64"
## curl -X GET --silent "https://api.digitalocean.com/v2/images?per_page=999" -H "Authorization: Bearer $(pass work/dtu/python-support/digitalocean-droplet-token)" | jq | less
droplet_service_size: "s-1vcpu-1gb"
droplet_service_region: "fra1"
## curl -X GET --silent "https://api.digitalocean.com/v2/sizes?per_page=999" -H "Authorization: Bearer $(pass work/dtu/python-support/digitalocean-droplet-token)" | jq | less
droplet_storage_image: "debian-12-x64"
droplet_storage_size: "s-1vcpu-1gb"
droplet_storage_region: "fra1"
tasks:
####################
# - Prepare SSH Information
####################
- name: "Get SSH Public Key"
shell: "ssh-add -L"
register: "ssh_key_pub_cmdout"
- name: "Add SSH Public Key to DigitalOcean account"
digital_ocean_sshkey:
name: "key"
oauth_token: "{{ digitalocean_droplet_token }}"
ssh_pub_key: "{{ ssh_key_pub_cmdout.stdout }}"
state: "present"
register: "sshkey_result"
####################
# - Create Digitalocean Nodes
####################
- name: "Create Storage Droplet"
digital_ocean_droplet:
name: "{{ item }}"
oauth_token: "{{ digitalocean_droplet_token }}"
ssh_keys: ["{{ sshkey_result.data.ssh_key.id }}"]
image: "{{ droplet_storage_image }}"
size: "{{ droplet_storage_size }}"
region: "{{ droplet_storage_region }}"
wait_timeout: 600
unique_name: "yes"
state: present
with_inventory_hostnames:
- storage
register: droplet_storage_result
- name: "Create Service Droplet"
digital_ocean_droplet:
name: "{{ item }}"
oauth_token: "{{ digitalocean_droplet_token }}"
ssh_keys: ["{{ sshkey_result.data.ssh_key.id }}"]
image: "{{ droplet_service_image }}"
size: "{{ droplet_service_size }}"
region: "{{ droplet_service_region }}"
wait_timeout: 600
unique_name: "yes"
state: present
with_inventory_hostnames:
- service
register: droplet_service_result
####################
# - Set DNS A Records => Hosts
####################
- name: "Set Storage DNS A => *.node.{{ dns_root }}"
cloudflare_dns:
api_token: "{{ cloudflare_dns_token }}"
zone: "{{ dns_root }}"
type: "A"
record: "{{ item.data.droplet.name }}"
value: "{{ item.data.ip_address }}"
with_items: "{{ droplet_storage_result.results }}"
- name: "Set Service DNS A => *.node.{{ dns_root }}"
cloudflare_dns:
api_token: "{{ cloudflare_dns_token }}"
zone: "{{ dns_root }}"
type: "A"
record: "{{ item.data.droplet.name }}"
value: "{{ item.data.ip_address }}"
with_items: "{{ droplet_service_result.results }}"
####################
# - Set DNS CNAME Record => @
####################
- name: "Set DNS CNAME => Primary Node"
cloudflare_dns:
api_token: "{{ cloudflare_dns_token }}"
zone: "{{ dns_root }}"
type: "CNAME"
record: "@"
value: "{{ node_primary }}"
## Cloudflare allows CNAME on @ via CNAME-flattening
####################
# - Set DNS CNAME Records => Stacks
####################
- name: "Set DNS CNAME => Stack: auth"
cloudflare_dns:
api_token: "{{ cloudflare_dns_token }}"
zone: "{{ dns_root }}"
type: "CNAME"
record: "auth"
value: "@"
- name: "Set DNS CNAME => Stack: site-support"
cloudflare_dns:
api_token: "{{ cloudflare_dns_token }}"
zone: "{{ dns_root }}"
type: "CNAME"
record: "pysupport"
value: "@"

@@ -0,0 +1,132 @@
- hosts: swarm
become: "true"
tasks:
####################
# - Tuning - Traefik
# -- Traefik serving QUIC can be bottlenecked by a too-low UDP buffer.
# -- This increases both send & receive from ~200KB to 2.5MB.
####################
- name: "Set net.core.rmem_max = 2500000"
sysctl:
state: "present"
name: "net.core.rmem_max"
value: "2500000"
reload: "yes"
- name: "Set net.core.wmem_max = 2500000"
sysctl:
state: "present"
name: "net.core.wmem_max"
value: "2500000"
reload: "yes"
####################
# - Docker - Install
####################
- name: "Download Docker Apt Key"
ansible.builtin.get_url:
url: "https://download.docker.com/linux/debian/gpg"
dest: "/etc/apt/trusted.gpg.d/docker.asc"
checksum: "sha256:1500c1f56fa9e26b9b8f42452a553675796ade0807cdce11975eb98170b3a570"
owner: "root"
group: "root"
mode: "644"
- name: "Add Docker Apt Repository"
apt_repository:
state: "present"
repo: "deb https://download.docker.com/linux/debian bookworm stable"
filename: "docker"
- name: "Install Docker CE"
apt:
state: "present"
name: "docker-ce"
- name: "Install python3-docker"
apt:
state: "present"
name: "python3-docker"
####################
# - Docker Plugin - rclone
####################
- name: "Install fuse"
apt:
state: "present"
name: "fuse"
- name: "Create rclone Config Path"
ansible.builtin.file:
path: "/var/lib/docker-plugins/rclone/config"
state: directory
mode: "0750"
- name: "Create rclone Cache Path"
ansible.builtin.file:
path: "/var/lib/docker-plugins/rclone/cache"
state: directory
mode: "0750"
- name: "Disable the rclone Docker Plugin"
community.docker.docker_plugin:
state: "disable"
alias: "rclone"
plugin_name: "rclone/docker-volume-rclone:amd64"
- name: "Install rclone Docker Plugin"
community.docker.docker_plugin:
state: "present"
alias: "rclone"
plugin_name: "rclone/docker-volume-rclone:amd64"
plugin_options:
args: "-v --allow-other"
- name: "Enable the rclone Docker Plugin"
community.docker.docker_plugin:
state: "enable"
alias: "rclone"
plugin_name: "rclone/docker-volume-rclone:amd64"
plugin_options:
args: "-v --allow-other"
####################
# - Docker - Swarm Init
####################
- hosts: leader
become: "true"
tasks:
- name: "Initialize Docker Swarm Leader"
community.docker.docker_swarm:
state: "present"
advertise_addr: "{{ wg0_ip }}"
listen_addr: "{{ wg0_ip }}:2377"
- name: "Collect Swarm Info"
community.docker.docker_swarm_info:
register: swarm_info
- name: "Retrieve Join Tokens"
set_fact:
swarm_manager_token: "{{ swarm_info.swarm_facts['JoinTokens']['Manager'] }}"
swarm_worker_token: "{{ swarm_info.swarm_facts['JoinTokens']['Worker'] }}"
- name: "Install jsondiff & pyyaml (stack-deploy deps)"
apt:
state: "present"
name:
- "python3-jsondiff"
- "python3-yaml"
# SKIP Manager
# - Currently, there is only one manager == leader. So there's no point.
- hosts: worker
become: "true"
tasks:
- name: "Join Workers to Docker Swarm"
community.docker.docker_swarm:
state: "join"
advertise_addr: "{{ wg0_ip }}"
join_token: "{{ hostvars[groups['leader'][0]]['swarm_worker_token'] }}"
remote_addrs: [ "{{ hostvars[groups['leader'][0]]['wg0_ip'] }}:2377" ]

@@ -0,0 +1,10 @@
- hosts: swarm
become: "true"
tasks:
####################
# - Tuning - Dev
####################
- name: "Install Terminfo for Kitty"
ansible.builtin.apt:
state: "present"
name: "kitty-terminfo"

@@ -0,0 +1,43 @@
- hosts: wg0
become: "true"
tasks:
####################
# - Wireguard
####################
- name: "Install Wireguard Tools"
ansible.builtin.apt:
state: "present"
name: "wireguard"
- name: "systemd-networkd: Install wg0 Device"
template:
src: "./templates/99-wg0.netdev"
dest: "/etc/systemd/network/99-wg0.netdev"
owner: "root"
group: "systemd-network"
mode: "0640"
- name: "systemd-networkd: Install wg0 Network"
template:
src: "./templates/99-wg0.network"
dest: "/etc/systemd/network/99-wg0.network"
owner: "root"
group: "systemd-network"
mode: "0640"
- name: "Restart systemd-networkd"
systemd:
name: "systemd-networkd.service"
state: "restarted"
####################
# - Wireguard - Enable Packet Forwarding
####################
- name: "Set net.ipv4.ip_forward = 1"
sysctl:
state: "present"
name: "net.ipv4.ip_forward"
value: "1"
reload: "yes"

@@ -0,0 +1,19 @@
[NetDev]
Name=wg0
Kind=wireguard
Description=WireGuard tunnel wg0
[WireGuard]
ListenPort=51871
PrivateKey={{ wg_private_key }}
{% for item in groups['wg0'] %}
{% if item != inventory_hostname %}
[WireGuardPeer]
PublicKey={{ hostvars[item]['wg_public_key'] }}
PresharedKey={{ hostvars[item]['wg_psk_' ~ inventory_hostname] }}
AllowedIPs={{ hostvars[item]['wg0_ip'] }}/32
Endpoint={{ item }}:51871
{% endif %}
{% endfor %}

@@ -0,0 +1,5 @@
[Match]
Name=wg0
[Network]
Address={{ wg0_ip }}/24

run.sh 100755

@@ -0,0 +1,155 @@
#!/bin/bash
set -e ## Exit if Problems
set -u ## Fail on Undefined Variable
SCRIPT_PATH="$(dirname "$(readlink -f "$0")")"
####################
# - Constants
####################
PLAYBOOKS_PATH="$SCRIPT_PATH/playbooks"
INVENTORY="$SCRIPT_PATH/inventory.yml"
PLAYBOOK_HOSTS="$PLAYBOOKS_PATH/playbook.hosts.yml"
PLAYBOOK_WG0="$PLAYBOOKS_PATH/playbook.wg0.yml"
PLAYBOOK_SWARM="$PLAYBOOKS_PATH/playbook.swarm.yml"
PLAYBOOK_STACK_CLEANUP="$SCRIPT_PATH/stacks/cleanup/playbook.yml"
PLAYBOOK_STACK_MESH="$SCRIPT_PATH/stacks/mesh/playbook.yml"
PLAYBOOK_STACK_SITE_SUPPORT="$SCRIPT_PATH/stacks/site-support/playbook.yml"
help() {
less -R << EOF
This script manages the deployment using ansible.
Usage:
./run.sh [COMMAND]
EOF
}
####################
# - Utilities
####################
cmd_exists() {
if type -P "$1" &> /dev/null || [ -x "$1" ]; then
echo true
else
echo false
fi
}
pkg_installed() {
if [ $(dpkg-query -W -f='${Status}' "$1" 2>/dev/null | grep -c "ok installed") -eq 0 ]; then
echo false
else
echo true
fi
}
####################
# - Check Preconditions
####################
if [[ $(whoami) == root ]]; then
echo "Please don't run as root."
exit 1
fi
case $(cat /etc/debian_version | cut -d . -f 1) in
"11")
echo "Detected Debian 11 (Supported)..."
;;
"12")
echo "Detected Debian 12 (Supported)..."
;;
*)
echo "Could not detect a supported OS. Refer to manual for more."
exit 1
;;
esac
if [[ $(cmd_exists ansible) != true ]]; then
echo "This script requires ansible. Press ENTER to install and continue..."
read -r
sudo apt install ansible
echo "This script requires the latest community.docker collection. Press ENTER to install and continue..."
read -r
ansible-galaxy collection install community.docker
fi
####################
# - Actions
####################
action_hosts() {
ansible-playbook \
--inventory "$INVENTORY" \
"$PLAYBOOK_HOSTS"
}
action_wg0() {
ansible-playbook \
--inventory "$INVENTORY" \
"$PLAYBOOK_WG0"
}
action_swarm() {
ansible-playbook \
--inventory "$INVENTORY" \
"$PLAYBOOK_SWARM"
}
action_stack_cleanup() {
ansible-playbook \
--inventory "$INVENTORY" \
"$PLAYBOOK_STACK_CLEANUP"
}
action_stack_mesh() {
ansible-playbook \
--inventory "$INVENTORY" \
"$PLAYBOOK_STACK_MESH"
}
action_stack_site_support() {
ansible-playbook \
--inventory "$INVENTORY" \
"$PLAYBOOK_STACK_SITE_SUPPORT"
}
####################
# - Command Dispatch
####################
case "${1:-}" in
sync)
action_hosts
action_wg0
action_swarm
action_stack_cleanup
action_stack_mesh
action_stack_site_support
;;
sync-hosts)
action_hosts
;;
sync-wg0)
action_wg0
;;
sync-swarm)
action_swarm
;;
sync-stack-cleanup)
action_stack_cleanup
;;
sync-stack-mesh)
action_stack_mesh
;;
sync-stack-site-support)
action_stack_site_support
;;
# sync-role)
# ansible-playbook \
# --inventory "$INVENTORY" \
# --tags "$2" \
# "$PLAYBOOK"
# ;;
*)
help
;;
esac

stacks/.gitignore vendored 100644

@@ -0,0 +1,6 @@
auth
chat
git
s3
updater
uptime

@@ -0,0 +1,2 @@
# TODO
- [ ] Security

@@ -0,0 +1,31 @@
# Security
Here follows an explanation of security practices taken into account.
Refer to https://docs.docker.com/compose/compose-file/compose-file-v3/ for explanations of individual points.
## Rootness
**The container process runs as `root`**.
Due to the deterministic, static nature of the container process, this is not an issue.
## Port Exposure
The container exposes no ports.
## Volume Access
**The container process has `docker.sock` access**.
Due to the deterministic, static nature of the container process, this is not an issue.
## Resource Limits
The service employs CPU/Memory usage limits in the `deploy` section.
This helps prevent a misbehaving container process from crashing the entire host.
## Capabilities
All capabilities are dropped via the `cap_drop: [ALL]` compose option.
No capabilities need to be added back, so none are.
## Special Note: latest
Hosts are presumed to be kept up-to-date via the official `docker-ce` package.
Thus, uniquely, using the `latest` tag in this container is warranted.

@@ -0,0 +1,37 @@
version: "3.8"
services:
docker-cleanup:
image: docker.io/docker:latest
cap_drop:
- ALL
volumes:
- /var/run/docker.sock:/var/run/docker.sock
entrypoint: []
command:
- "sh"
- "-euc"
- |
while true; do
docker image prune --all --force
docker system prune --all --force --volumes
sleep 86400
## 1 day, in seconds
done
deploy:
mode: global
resources:
limits:
cpus: "1.0"
memory: "1G"
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s

@@ -0,0 +1,28 @@
- hosts: leader
become: "true"
vars:
stack_name: "cleanup"
tasks:
####################
# - Stack Deployment
####################
- name: "Upload Stack to /tmp"
template:
src: "./docker-compose.yml"
dest: "/tmp/{{ stack_name }}.yml"
owner: "root"
group: "root"
mode: "0640"
- name: "Deploy Stack: {{ stack_name }}"
community.docker.docker_stack:
state: "present"
prune: "true"
name: "{{ stack_name }}"
compose:
- "/tmp/{{ stack_name }}.yml"
- name: "Delete /tmp Stack"
ansible.builtin.file:
path: "/tmp/{{ stack_name }}.yml"
state: "absent"

@@ -0,0 +1,2 @@
# TODO
- [ ] Configure Services per-Stack

@@ -0,0 +1,10 @@
[http.routers.site-support__site-support]
rule = "Host(`pysupport.timesigned.com`)"
entryPoints = ["websecure", "web"]
service = "site-support__site-support"
[[http.services.site-support__site-support.loadBalancer.servers]]
url = "http://site-support:8787"
[http.routers.site-support__site-support.tls]
certResolver = "letsencrypt"

@@ -0,0 +1,21 @@
####################
# - Default Middlewares
####################
[http.middlewares.default.chain]
middlewares = [
"default-security-headers",
]
####################
# - Middleware: Default Security Headers
####################
[http.middlewares.default-security-headers.headers]
browserXssFilter = true # X-XSS-Protection=1; mode=block
contentTypeNosniff = true # X-Content-Type-Options=nosniff
forceSTSHeader = true # Add STS even when using HTTP.
frameDeny = true # X-Frame-Options=deny
referrerPolicy = "strict-origin-when-cross-origin"
sslRedirect = true # Allow only https requests
stsIncludeSubdomains = true # Add includeSubdomains to STS header
stsPreload = true # Add preload flag appended to STS header
stsSeconds = 63072000 # Set max-age of STS header (2 years)

@@ -0,0 +1,63 @@
####################
# - Global Config
####################
[global]
checkNewVersion = false
sendAnonymousUsage = false
[experimental]
http3 = true
[api]
dashboard = false
insecure = false
debug = false
disabledashboardad = true
[log]
level = "DEBUG"
####################
# - Certificate Resolvers
# * https://doc.traefik.io/traefik/https/acme/#certificate-resolvers
####################
[certificatesResolvers.letsencrypt.acme]
email = "{{ email_letsencrypt }}"
storage = "/data-certs/acme.json"
#caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
[certificatesResolvers.letsencrypt.acme.tlsChallenge]
####################
# - Entry Points
####################
#[entryPoints.ssh]
#address = ":22"
[entryPoints.websecure]
address = ":443"
http3.advertisedPort = 443
[entryPoints.web]
address = ":80"
[entryPoints.web.http.redirections.entryPoint]
to = "websecure"
scheme = "https"
permanent = true
####################
# - Providers
####################
[providers.file]
directory = "/data-providers"
watch = false
debugLogGeneratedTemplate = true

@@ -0,0 +1,16 @@
####################
# - TLS Defaults
# * See https://doc.traefik.io/traefik/https/tls/
# * Adapted from https://ssl-config.mozilla.org
####################
[tls.options.default]
minVersion = "VersionTLS12"
sniStrict = true
cipherSuites = [
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
]

@@ -0,0 +1,89 @@
version: "3.8"
services:
traefik:
image: traefik:v2.10
user: "5000:5000"
configs:
- source: mesh__traefik_static.toml
target: /etc/traefik/traefik.toml
uid: "5000"
gid: "5000"
- source: mesh__traefik_tls.toml
target: /etc/traefik/dynamic/tls.toml
uid: "5000"
gid: "5000"
- source: mesh__traefik_default_middlewares.toml
target: /etc/traefik/dynamic/default_middlewares.toml
uid: "5000"
gid: "5000"
- source: mesh__stack_site-support.toml
target: /data-providers/site-support.toml
uid: "5000"
gid: "5000"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- mesh__traefik_certs:/data-certs
ports:
## HTTP
- target: 80
published: 80
protocol: tcp
mode: host
## HTTPS
- target: 443
published: 443
protocol: tcp
mode: host
deploy:
mode: replicated
replicas: 1
update_config:
parallelism: 1
delay: 10s
order: stop-first
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 120s
placement:
constraints:
- node.role == manager
networks:
- mesh_public
####################
# - Resources
####################
configs:
mesh__traefik_static.toml:
external: true
mesh__traefik_tls.toml:
external: true
mesh__traefik_default_middlewares.toml:
external: true
mesh__stack_site-support.toml:
external: true
volumes:
mesh__traefik_certs:
external: true
networks:
mesh_public:
external: true

@@ -0,0 +1,120 @@
####################
# - Stop the Stack
####################
- hosts: leader
  become: "true"
  vars:
    stack_name: "mesh"
  tasks:
    - name: "Stop Stack: {{ stack_name }}"
      community.docker.docker_stack:
        state: "absent"
        absent_retries: 15
        name: "{{ stack_name }}"

    - name: "Pause to Let Stack Stop"
      pause:
        seconds: 5

####################
# - Volume Creation
####################
- hosts: swarm
  become: "true"
  vars:
    cloudflare_b0__access_key_id: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/r2/mesh__traefik_certs/access_key_id') }}"
    cloudflare_b0__secret_access_key: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/r2/mesh__traefik_certs/secret_access_key') }}"
    cloudflare_b0__endpoint: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/r2/mesh__traefik_certs/endpoint') }}"
  tasks:
    - name: "Unmount Volume: mesh__traefik_certs"
      community.docker.docker_volume:
        state: "absent"
        name: "mesh__traefik_certs"
        driver: "rclone"

    - name: "Pause to Let Volume Unmount"
      pause:
        seconds: 5

    - name: "Mount Volume: mesh__traefik_certs"
      community.docker.docker_volume:
        state: "present"
        name: "mesh__traefik_certs"
        driver: "rclone"
        driver_options:
          remote: ":s3:mesh--traefik-certs"
          uid: "5000"
          gid: "5000"
          s3_provider: "Cloudflare"
          s3_access_key_id: "{{ cloudflare_b0__access_key_id }}"
          s3_secret_access_key: "{{ cloudflare_b0__secret_access_key }}"
          s3_region: "auto"
          s3_endpoint: "{{ cloudflare_b0__endpoint }}"
          s3_acl: "private"
          vfs_cache_mode: "full"

####################
# - Deployment
####################
- hosts: leader
  become: "true"
  vars:
    email_letsencrypt: "s174509@dtu.dk"
    stack_name: "mesh"
    stack_configs:
      - "mesh__traefik_static.toml"
      - "mesh__traefik_tls.toml"
      - "mesh__traefik_default_middlewares.toml"
      - "mesh__stack_site-support.toml"
  tasks:
    ####################
    # - Network Creation
    ####################
    - name: "Create Network: mesh_public"
      community.docker.docker_network:
        state: "present"
        name: "mesh_public"
        driver: "overlay"
        scope: "swarm"
        attachable: true
        appends: true

    ####################
    # - Configs Creation
    ####################
    - name: "Create Docker Configs"
      community.docker.docker_config:
        state: "present"
        name: "{{ item }}"
        data: "{{ lookup('template', './configs/' ~ item) | b64encode }}"
        data_is_b64: "true"
      with_items: "{{ stack_configs }}"

    ####################
    # - Stack Deployment
    ####################
    - name: "Upload Stack to /tmp"
      template:
        src: "./docker-compose.yml"
        dest: "/tmp/{{ stack_name }}.yml"
        owner: "root"
        group: "root"
        mode: "0640"

    - name: "Deploy Stack: {{ stack_name }}"
      community.docker.docker_stack:
        state: "present"
        prune: "true"
        name: "{{ stack_name }}"
        compose:
          - "/tmp/{{ stack_name }}.yml"

    - name: "Delete /tmp Stack"
      ansible.builtin.file:
        path: "/tmp/{{ stack_name }}.yml"
        state: "absent"

# TODO
- [ ] Test

# Introduction
This stack deploys the `python-support` website.

# Monitoring
Check for:
- Expired security.txt

Consider checking for:
- Revoked security.txt key
- Tag update (with alert to the server webhook)
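The expiry check above can be automated by parsing the `Expires:` field of the served file. A minimal sketch (the ISO date format matches what this stack's generator emits; the helper name is ours, not part of the stack):

```python
from datetime import datetime

def security_txt_expired(text: str, now: datetime) -> bool:
	"""Return True if the security.txt body's Expires field is in the past (or missing)."""
	for line in text.splitlines():
		if line.startswith("Expires:"):
			expires = datetime.fromisoformat(line.removeprefix("Expires:").strip())
			return now > expires
	return True  # No Expires field: treat as expired.

example = "Contact: mailto:s174509@dtu.dk\nExpires: 2024-08-01T00:00:00"
print(security_txt_expired(example, datetime(2024, 9, 1)))  # → True (past the expiry above)
```

In a monitoring job, `text` would come from fetching `/.well-known/security.txt` over HTTPS.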

# Security
Here follows an explanation of the security practices taken into account.
Refer to https://docs.docker.com/compose/compose-file/compose-file-v3/ for explanations of individual points.

## Rootness
The container process runs as `5020:5020`.
No processes run as root within the container.
## Port Exposure
The container participates in the private `mesh_public` overlay network.
This allows the reverse proxy, Traefik, to route traffic via internal DNS.

This traffic is unencrypted HTTP.
Thus, **the overlay network must run on a trusted (L3) network**.
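Concretely, a service opts into this shared network by referencing it as external in its compose file. A minimal sketch using this stack's network name (the service name is hypothetical):

```yaml
services:
  some-service:  # hypothetical service name
    networks:
      - mesh_public

networks:
  mesh_public:
    external: true  # created out-of-band, shared across stacks
```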
## Volume Access
Only `localtime` and `timezone` are mounted (read-only).
All files to be served are either baked into the container image or mounted with `docker config`.
## Resource Limits
The service employs CPU/memory usage limits in the `deploy` section.
This helps prevent a DDoS attack from crashing the entire host.
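For reference, such limits live under `deploy.resources.limits` in a compose v3 file; the figures below are illustrative, not necessarily the deployed values:

```yaml
deploy:
  resources:
    limits:
      cpus: "4.0"   # illustrative value
      memory: "4G"  # illustrative value
```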
## Capabilities
All capabilities are dropped with `cap_drop: [ALL]`.
No capabilities need to be added back, so none are.
## security.txt
*See https://securitytxt.org/ for the RFC and a generator.*

This stack comes with a `security.txt` generator in `scripts__security_txt`, which:
- Templates mail contact, expiry, GPG public key link, and canonical path.
- Signs the file with the GPG private key referenced in the link.

To use it, first adjust the following block in `gen.py`:
```python
MAILTO =
EXPIRY =
MAILTO_PGP_FINGERPRINT =
DEPLOY_DOMAIN =
```

Then, run `./gen.py` from any working directory. Remember to review the generated file and update the `docker config`.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Contact: mailto:s174509@dtu.dk
Expires: 2024-08-01T00:00:00
Encryption: https://keys.openpgp.org/vks/v1/by-fingerprint/E3B345EFFF5B3994BC1D12603D01BE95F3EFFEB9
Preferred-Languages: en, dk
Canonical: https://timesigned.com/.well-known/security.txt
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEG10i+uTnDBwXTs3FrZAcsPNwFDQFAmTUpRcACgkQrZAcsPNw
FDRt9g/9GnqvAVUCBEZYtv+WwizxRe1iZF5ABIHytnymqsgjNjoF0uBxCZzR7MFZ
z7yP/ChmaS9g14DOSAUs5I3si3mF1pcHgS0/auGMB84xg2p3Jn1ZmUIU2mPppEqw
PvIju6hM5dSEgZap8iwxUis7bIqdtV+PeYfZdzRkXyVnBSCNpbK9VHX5enyMX7MD
Is7PzQorn3MwytmhxOkYZ4XRxFd2OUtMm8QDQuSZjPSCEtXykH5Y6ITn1nCuJYQw
Nz9wyE4bNnzdZMVFWzDdwICDHoWzQO3SCvyDbxKlDnY+AN2/6pzKvPo+C3iMpNdo
MG+BuXVKc2ZwOj4+g6Srk9sM0flMy83HHOTYFXLx2M7guaa/+WaJK7GiKjaQUQJk
fV/toxLEpmZONbGFQQR9wXvwA6iIee08A2Le9gmGdD2T/OUrTOVXemqd9tvhfDPn
RserBgHnnFO7+ucIFjtqwhMmh3iXLg+x/cZyvt25Gke9WhwPu9oEEMLmP/M2N7XC
TGopbg7GbDoZNY/BEz0Fh49DNYef8kemFc/qEFBV0XbVZRqIH0+zBrrs6z9LdSy+
soB4yooK7dBa3Sxx01jYwv6o5yaKcBbxeNIx3Xf8awLONspr5RMELOSPSECAERs+
GHYcpHSvBMzrdaz+uW9tHgKUAK9URDO8DOQphltZpg1ldTFIrZA=
=XXVW
-----END PGP SIGNATURE-----

version: "3.8"

services:
  site-support:
    image: git.sofus.io/so-rose/site-support:0
    user: "5020:5020"
    cap_drop:
      - ALL
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    configs:
      - source: site-support__security.txt
        target: /public/.well-known/security.txt
        uid: "5020"
        gid: "5020"
    environment:
      SERVER_PORT: "8787"
      SERVER_REDIRECT_TRAILING_SLASH: "true"
      SERVER_LOG_LEVEL: "info"
      SERVER_LOG_REMOTE_ADDRESS: "false"
      SERVER_THREADS_MULTIPLIER: "0" ## Use # CPUs
      SERVER_SECURITY_HEADERS: "true"
      SERVER_DIRECTORY_LISTING: "false"
      SERVER_CACHE_CONTROL_HEADERS: "false" ## change when stable?
      SERVER_COMPRESSION: "true" ## reconsider for small ssg payload
      SERVER_COMPRESSION_STATIC: "false" ## pre-compress? :)
    deploy:
      mode: replicated
      replicas: 1
      # resources:
      #   limits:
      #     cpus: "4.0"
      #     memory: "4G"
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    networks:
      - mesh_public

configs:
  site-support__security.txt:
    external: true

networks:
  mesh_public:
    external: true

####################
# - Deployment
####################
- hosts: leader
  become: "true"
  vars:
    stack_name: "site-support"
    stack_configs:
      - "site-support__security.txt"
  tasks:
    ####################
    # - Stop the Stack
    ####################
    - name: "Stop Stack: {{ stack_name }}"
      community.docker.docker_stack:
        state: "absent"
        absent_retries: 15
        name: "{{ stack_name }}"

    ####################
    # - Network Creation
    ####################
    - name: "Create Network: mesh_public"
      community.docker.docker_network:
        state: "present"
        name: "mesh_public"
        driver: "overlay"
        scope: "swarm"
        attachable: true
        appends: true

    ####################
    # - Config Creation
    ####################
    - name: "Create Docker Configs"
      community.docker.docker_config:
        state: "present"
        name: "{{ item }}"
        data: "{{ lookup('template', './configs/' ~ item) | b64encode }}"
        data_is_b64: "true"
      with_items: "{{ stack_configs }}"

    ####################
    # - Stack Deployment
    ####################
    - name: "Upload Stack to /tmp"
      template:
        src: "./docker-compose.yml"
        dest: "/tmp/{{ stack_name }}.yml"
        owner: "root"
        group: "root"
        mode: "0640"

    - name: "Deploy Stack: {{ stack_name }}"
      community.docker.docker_stack:
        state: "present"
        prune: "true"
        name: "{{ stack_name }}"
        compose:
          - "/tmp/{{ stack_name }}.yml"

    - name: "Delete /tmp Stack"
      ansible.builtin.file:
        path: "/tmp/{{ stack_name }}.yml"
        state: "absent"

#!/usr/bin/python3
# Copyright (C) 2023 Sofus Albert Høgsbro Rose
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.

"""This script templates and signs a `security.txt` file.

Note that:
- This script presumes that `gpg` is installed.
- This script presumes that the private key of the configured fingerprint is available to use with `gpg --clearsign`.
- The keyserver is hardcoded to `keys.openpgp.org`.

To use, first adjust the following configuration block:
```python
MAILTO =
EXPIRY =
MAILTO_PGP_FINGERPRINT =
DEPLOY_DOMAIN =
```

Then, just run `./gen.py`.

**REMEMBER TO REVIEW THE GENERATED FILE BEFORE DEPLOYMENT**.
"""
import os
import sys

if not all([
	sys.version_info.major == 3,
	sys.version_info.minor in [9, 10, 11, 12, 13],
]):
	sys.exit("This script requires Python 3.9-3.13.")

from pathlib import Path
import shutil
import subprocess
import contextlib
from datetime import datetime
from string import Template

####################
# - Configuration
####################
MAILTO = "s174509@dtu.dk"
EXPIRY = datetime(year = 2024, month = 8, day = 1).isoformat()
MAILTO_PGP_FINGERPRINT = "E3B345EFFF5B3994BC1D12603D01BE95F3EFFEB9"
DEPLOY_DOMAIN = "https://timesigned.com"

####################
# - Constants
####################
SCRIPT_PATH = Path(__file__).resolve().parent
PATH_SECURITY_TXT = (
	SCRIPT_PATH.parent / "configs" / "site-support__security.txt"
)

####################
# - Utilities
####################
@contextlib.contextmanager
def cd_script_dir():
	"""Temporarily make the script's directory the working directory."""
	cwd_orig = Path.cwd()
	os.chdir(SCRIPT_PATH)
	try:
		yield
	finally:
		os.chdir(cwd_orig)

####################
# - Actions
####################
def sign_security_txt() -> None:
	if PATH_SECURITY_TXT.is_file():
		PATH_SECURITY_TXT.unlink()
		## Avoid platform-defined (os.rename()) shutil.move() to existing file.

	with cd_script_dir():
		# Template
		with open("security.txt.unsigned.tmpl", "r") as f0:
			with open("security.txt.unsigned", "w") as f1:
				f1.write(
					Template(
						f0.read()
					).substitute(
						MAILTO = MAILTO,
						EXPIRY = EXPIRY,
						MAILTO_PGP_FINGERPRINT = MAILTO_PGP_FINGERPRINT,
						DEPLOY_DOMAIN = DEPLOY_DOMAIN,
					)
				)

		# Sign + Delete Templated
		subprocess.run([
			"gpg",
			"--local-user", MAILTO_PGP_FINGERPRINT,
			"--clearsign", "security.txt.unsigned",
		], check = True)
		Path("security.txt.unsigned").unlink()

		# Move
		shutil.move(
			"security.txt.unsigned.asc",
			PATH_SECURITY_TXT,
		)

####################
# - Main
####################
if __name__ == "__main__":
	sign_security_txt()

	# `cat` the Installed File
	with open(PATH_SECURITY_TXT, "r") as f:
		print(f.read(), end = "")

Contact: mailto:$MAILTO
Expires: $EXPIRY
Encryption: https://keys.openpgp.org/vks/v1/by-fingerprint/$MAILTO_PGP_FINGERPRINT
Preferred-Languages: en, dk
Canonical: $DEPLOY_DOMAIN/.well-known/security.txt
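The `$`-placeholders above are filled by `gen.py` via `string.Template`; a minimal sketch with hypothetical values (the real ones live in `gen.py`'s configuration block):

```python
from string import Template

# Hypothetical values, for illustration only.
fields = {
	"MAILTO": "user@example.com",
	"EXPIRY": "2024-08-01T00:00:00",
	"DEPLOY_DOMAIN": "https://example.com",
}

# A trimmed version of the template above.
template = Template(
	"Contact: mailto:$MAILTO\n"
	"Expires: $EXPIRY\n"
	"Canonical: $DEPLOY_DOMAIN/.well-known/security.txt\n"
)
rendered = template.substitute(fields)
print(rendered)
```

`substitute` raises `KeyError` if a placeholder is missing from `fields`, which makes incomplete configuration fail loudly rather than emit a half-filled file.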