refactor!: Playbooks into reusable roles.

Also solving several issues along the way. Progress on #18. Closes #15. Closes #13. Closes #10. Closes #16.

Branch: main
Parent: 2c6e94b013
Commit: f6e459e0ea
@ -1,35 +0,0 @@
# You may want to customise this file depending on your Operating System
# and the editor that you use.
#
# We recommend that you use a Global Gitignore for files that are not related
# to the project. (https://help.github.com/articles/ignoring-files/#create-a-global-gitignore)

# OS
#
# Ref: https://github.com/github/gitignore/blob/master/Global/macOS.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/Windows.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/Linux.gitignore
.DS_STORE
Thumbs.db

# Editors
#
# Ref: https://github.com/github/gitignore/blob/master/Global
# Ref: https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Ref: https://github.com/github/gitignore/blob/master/Global/VisualStudioCode.gitignore
.idea
.chrome
/*.log
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json

# Python
**/__pycache__
.venv

# Local Developer Notes
dev
README.md
@ -1,27 +1,63 @@
# Complete Infrastructure for DTU Python Support

The goal of this project is to describe and implement the complete infrastructure for DTU's Python Support group.

**Very heavily WIP**

## Project Goals

The ordered list of priorities is:

1. **Security/privacy**: It should address all major security concerns and take general good-practice steps to mitigate general issues.
2. **Reliability**: It should "just work", and keep "just work"ing until someone tells it otherwise.
3. **Developer usability**: It should be understandable and deployable with minimal human-to-human explanation.
4. **Resource/cost efficiency**: It should be surrounded by minimal effective infrastructure, and run on the cheapest hardware that supports the application use case.

That is to say:

- In a tradeoff between security and reliability, we will generally prefer security. This has a hard limit; note that **convenience is security**, and reliability is one of the finest conveniences that exist.
- In a tradeoff between reliability and dev usability, we will generally prefer reliability. This is a more subjective choice; deployment problems are categorically "hard", and "reliable" can very quickly come to mean "unusable to most".
- And so on...

## Deployed Services

The following user-facing services are provided:

- pysupport.timesigned.com: Modern, multilingual guide to using Python at DTU.
  - SSG with [mdbook](https://rust-lang.github.io/mdBook/) w/plugins.
- chat.timesigned.com: Modern asynchronous communication and support channel for everybody using Python at DTU.
  - Instance of [Zulip](https://zulip.com/).
- git.timesigned.com: Lightweight collaborative development and project management infrastructure for development teams.
  - Instance of [Forgejo](https://forgejo.org/), itself a soft-fork of [Gitea](https://about.gitea.com/).
- auth.timesigned.com: Identity Provider allowing seamless, secure access to key services with their DTU Account.
  - Instance of [Authentik](https://goauthentik.io/).
- uptime.timesigned.com: Black-box monitoring with operational notifications.
  - Instance of [Authentik](https://goauthentik.io/).

## Architecture

To achieve our goals, we choose the following basic bricks to play with:

- `docker swarm`: A (flawed, but principled) orchestrator with batteries included.
- `wireguard`: Encrypted L3 overlay network with no overhead. The perfect companion to any orchestrator.
- `ansible`: Expresses desired infrastructure state as YML. Better treated as pseudo-scripts that are guaranteed (\*) safe to re-run.

In practice, here are some of the key considerations in the architecture:

- **Prefer configs/secrets**: We always prefer mounted secrets/configs, which are not subject to persistence headaches, are protected by Raft consensus, and are immune to runtime modifications.
  - **Our Approach**: We vehemently disallow secrets in the stack environment; when this is incompatible with the application, we use an entrypoint script to inject the environment variable from the docker secret file when calling the app.
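The entrypoint trick above can be sketched as follows. This is a minimal illustration, not the repository's actual script; the secret name `db_password` and variable `DB_PASSWORD` are hypothetical, and `SECRETS_DIR` is a test seam standing in for Docker's fixed `/run/secrets` mount point:

```shell
# Build a demo secret layout, then a tiny entrypoint that injects the
# secret file's contents as an env var only for the exec'd app process.
tmp=$(mktemp -d)
mkdir -p "$tmp/secrets"
printf 'hunter2' > "$tmp/secrets/db_password"

cat > "$tmp/entrypoint.sh" <<'EOF'
#!/bin/sh
set -eu
# Docker mounts secrets as files; read one into the environment, then
# hand off to the real app. The secret never appears in the stack file
# or in `docker inspect` output.
DB_PASSWORD="$(cat "${SECRETS_DIR:-/run/secrets}/db_password")"
export DB_PASSWORD
exec "$@"
EOF
chmod +x "$tmp/entrypoint.sh"

# The wrapped process sees the variable:
result=$(SECRETS_DIR="$tmp/secrets" "$tmp/entrypoint.sh" sh -c 'echo "$DB_PASSWORD"')
echo "$result"
```

In a stack file this would be wired up as the service's `entrypoint`, with the real command as its arguments.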
- **No `docker.sock`**: Access (even read-only) to `docker.sock` implicitly grants the container in question root access to the host.
  - **Our Approach**: Use of `docker.sock` is reserved for pseudo-`cronjob` replacements; that is to say, deterministic, simple, easily vettable processes that are critical for host security.

- **Rootless Container Internals**: The docker socket itself must be rootful in Swarm. This is a calculated risk, for which immense ease of use (**convenience is security!!**) and container-level security (specifically, managing when a container actually does get access to something sensitive) can be bought via managed `iptables` (especially effective over `wg0`), simple `CAP_DROP`, `cgroup` definitions, etc. With a certain discipline, one gets a lot in return.
  - **Our Approach**: We build infrastructure around containerized deployments (to manage ex. ownership and permissions) to ensure that unique UID:GIDs can run processes within containers without overlap. We actively prefer services that allow doing this, and are willing to resort to ex. entrypoint hacking to make rootless operation possible. We also take care to go beyond default Docker security CAP policies, aspiring to always run `CAP_DROP: ALL` by default, and then either manually `CAP_ADD` back or configure the container process to not need the capability.

- **Encrypted `overlay`**: Docker `overlay` networks are principally no more secure than the network they're built on: prone to active/passive MITM, MAC/IP spoofs, ARP cache poisoning, and so on.
  - **Our Approach**: We build an encrypted L3 network with minimal overhead, using the `wireguard` kernel module via `systemd-networkd`. This enforces that Swarm communications happen over the `wg0` interface, without having to maintain a pile of scripts outside the main system. This eliminates MITM risk, and ensures that `overlay` networks defining peers by their IP can trust that IP address.
  - **NOTE on Key Generation**: We pre-generate all keys into our secret store (`password-store`), *including pre-shared keys*. This is extremely secure, but it's also a... heavy way to do it (a PK problem). 100 nodes would require generating and distributing 10100 keys. We will never have more than 5 nodes, though.
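The 10100 figure can be reproduced by a quick count. This sketch reflects one reading of the note above (each node holds its own keypair plus one pre-shared key per peer); the function names are illustrative, not part of the repo:

```python
from math import comb

def keys_generated(n: int) -> int:
    """Distinct keys created: one private/public pair (2 keys) per node,
    plus one shared pre-shared key per unordered peer pair."""
    return 2 * n + comb(n, 2)

def keys_distributed(n: int) -> int:
    """Key copies placed on nodes: each node holds its own keypair (2)
    plus one PSK per peer (n - 1), i.e. n * (n + 1) in total."""
    return n * (2 + (n - 1))

print(keys_generated(100), keys_distributed(100))  # 5150 10100
```

For the 5-node ceiling mentioned above this stays tiny (30 key copies), which is why the pre-generation approach is acceptable here.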
- **Reproducible Deployment**: Swarm deployments rely on a lot of external state: availability of hosts, correct DNS records, shared attachable `overlay` networks with static IPs and hostnames for connected containers, volumes backed in various ways, configs/secrets with possible rotation, and so on.
  - **Our Approach**: We aspire to encode the requisitioning of all required resources into the **single-source-of-truth deployment path**. In practice, this takes the form of an Ansible project; one tied especially closely to the contents of `docker-compose.yml` stack files.

### Why not `x`?

- `k8s`/`k3s`/...: Unfortunately, the heaviness and complexity on a small team break all four of our concerns. One can use cloud-provider infrastructure, but then privacy (and cost!) becomes a risk.
- HashiCorp `x`: Terraform, Nomad, Vault, etc. are no longer free (as in freedom) software, and even if they still were, they generally imply buy-in to the whole ecosystem.

# References

To dig deeper and/or develop this infrastructure.

## Wireguard / systemd-networkd

- `systemd-networkd` Network: <https://www.freedesktop.org/software/systemd/man/systemd.network.html>

@ -53,8 +89,12 @@ The repository provides the following user-facing services:

- S3 Backend: <https://rclone.org/s3/>
- Crypt Meta-Backend: <https://rclone.org/crypt/>

## Swarm Deployment

- The Funky Penguin: <https://geek-cookbook.funkypenguin.co.nz/docker-swarm>
- Traefik Certificate Auto-Renewal: <https://doc.traefik.io/traefik/https/acme/#automatic-renewals>
- Traefik Service: <https://doc.traefik.io/traefik/routing/services/#configuring-http-services>

## Docker Networking

- Friends, Scopes Matter: <https://stackoverflow.com/questions/50282792/how-does-docker-network-work>
- `overlay` networks **require** `scope=global` when used the way we use it.
- Note: don't run other containers on hosts that you don't want able to connect to these overlay networks.
@ -0,0 +1,3 @@
[defaults]
roles_path = ./roles
host_key_checking = False
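As an aside, `host_key_checking` can equivalently be relaxed per-run via Ansible's environment variable instead of being committed in the config file (a sketch; whether that tradeoff is preferable depends on the workflow):

```shell
# Disable host key checking for this shell session only.
ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_HOST_KEY_CHECKING
echo "$ANSIBLE_HOST_KEY_CHECKING"
```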
@ -1,65 +1,76 @@
####################
# - Global Variables
####################
all:
  vars:
    passwordstore: "./dev/.password-store"
    stacks_dir: "./stacks"

    project_name: "python-support"
    dns_root: "timesigned.com"
    email_abuse: "s174509@dtu.dk"

####################
# - Hosts - by Purpose
####################
purpose_service:
  hosts:
    raspberry.node:

purpose_storage:
  hosts:
    blueberry.node:

####################
# - Hosts - by Swarm Role
####################
swarm_leader:
  ## ONLY ==1 Host should be Leader
  hosts:
    raspberry.node:

swarm_managers:
  hosts:
    raspberry.node:

swarm_workers:
  hosts:
    blueberry.node:

swarm_nodes:
  vars:
    ansible_user: "root"
  hosts:
    raspberry.node:
      ansible_host: "raspberry.node.{{ dns_root }}"

      wg0_ip: "10.9.8.1"
      wg0_private_key: "{{ lookup(
          'community.general.passwordstore',
          'networks/wg0/raspberry.node/private_key'
        ) }}"
      wg0_public_key: "{{ lookup(
          'community.general.passwordstore',
          'networks/wg0/raspberry.node/public_key'
        ) }}"
      wg0_psk_blueberry.node: "{{ lookup(
          'community.general.passwordstore',
          'networks/wg0/raspberry.node/psk_blueberry.node'
        ) }}"

    blueberry.node:
      ansible_host: "blueberry.node.{{ dns_root }}"

      wg0_ip: "10.9.8.2"
      wg0_private_key: "{{ lookup(
          'community.general.passwordstore',
          'networks/wg0/blueberry.node/private_key'
        ) }}"
      wg0_public_key: "{{ lookup(
          'community.general.passwordstore',
          'networks/wg0/blueberry.node/public_key'
        ) }}"
      wg0_psk_raspberry.node: "{{ lookup(
          'community.general.passwordstore',
          'networks/wg0/raspberry.node/psk_blueberry.node'
        ) }}"
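The `passwordstore` lookups above expect the key material to already exist under `networks/wg0/<host>/...`. A sketch of pre-generating it with the `wg` and `pass` CLIs (assumed installed, with an initialised store; the script degrades to a no-op message otherwise):

```shell
# Pre-generate WireGuard key material into password-store, mirroring
# the inventory's lookup paths.
provision_host_keys() {
  host="$1"
  wg genkey | pass insert -m "networks/wg0/$host/private_key"
  pass show "networks/wg0/$host/private_key" | wg pubkey \
    | pass insert -m "networks/wg0/$host/public_key"
}

if command -v wg >/dev/null 2>&1 && command -v pass >/dev/null 2>&1; then
  provision_host_keys "raspberry.node"
  provision_host_keys "blueberry.node"
  # One pre-shared key per peer pair, stored once under a canonical path
  # (both hosts' inventory entries look it up from this same path):
  wg genpsk | pass insert -m "networks/wg0/raspberry.node/psk_blueberry.node"
  status="provisioned"
else
  status="skipped (wg/pass not installed)"
fi
echo "$status"
```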
@ -0,0 +1,70 @@
####################
# - Setup
####################
- name: "Allocate Hosts w/DNS"
  tags:
    - "stage_setup"
  hosts: "localhost"
  vars:
    do_project: "{{ project_name }}"
    do_project_purpose: "Infrastructure for the Python Support Team."

  roles:
    - role: "setup/hosts_digitalocean"
      vars:
        hosts_do: "{{ groups['purpose_storage'] }}"
      ## SET: nodes_to_ipv4s_public@localhost
      ## SET: nodes_to_ipv4s_private@localhost

    - role: "setup/hosts_digitalocean"
      vars:
        hosts_do: "{{ groups['purpose_service'] }}"
      ## SET: nodes_to_ipv4s_public@localhost
      ## SET: nodes_to_ipv4s_private@localhost

    - role: "setup/dns_foundation"
      vars:
        ipv4_root: "{{ nodes_to_ipv4s_public['raspberry.node'] }}"

####################
# - Setup Hosts
####################
- name: "Configure Hosts"
  hosts: "swarm_nodes"
  tags:
    - "stage_host"
  roles:
    - role: "host/system_tools"

    - role: "host/network_wg0"
      vars:
        hosts_wg0: "{{ groups['swarm_nodes'] }}"

    - role: "host/docker"

- name: "Configure Docker Swarm Leader"
  hosts: "swarm_leader"
  tags:
    - "stage_host"
  roles:
    - role: "host/docker_swarm_leader"
      ## SET: swarm_manager_token@swarm_leader
      ## SET: swarm_worker_token@swarm_leader

- name: "Configure Docker Swarm Workers"
  hosts: "swarm_workers"
  tags:
    - "stage_host"
  roles:
    - role: "host/docker_swarm_worker"
      vars:
        host_swarm_leader: "{{ groups['swarm_leader'][0] }}"

####################
# - Deploy Stacks
####################
- import_playbook: "./stacks/mesh/playbook.yml"
- import_playbook: "./stacks/site-support/playbook.yml"
- import_playbook: "./stacks/cleanup/playbook.yml"
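The `stage_*` tags let partial runs target one phase of the playbook. Typical invocations look like this (illustrative; the inventory filename `hosts.yml` is an assumption about the repo layout):

```shell
# How the staged tags are typically driven:
#
#   ansible-playbook -i hosts.yml playbook.yml                    # full run
#   ansible-playbook -i hosts.yml playbook.yml --tags stage_setup # DNS + droplets only
#   ansible-playbook -i hosts.yml playbook.yml --tags stage_host  # host configuration only
stages="stage_setup stage_host"
echo "$stages"
```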
@ -1,136 +0,0 @@
- hosts: localhost
  vars:
    dns_root: "timesigned.com"
    node_primary: "raspberry.node.timesigned.com"

    digitalocean_droplet_token: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/digitalocean-droplet-token') }}"

    cloudflare_email: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/cloudflare-email') }}"
    cloudflare_dns_token: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/cloudflare-dns-token') }}"

    droplet_service_image: "debian-12-x64"
    ## curl -X GET --silent "https://api.digitalocean.com/v2/images?per_page=999" -H "Authorization: Bearer $(pass work/dtu/python-support/digitalocean-droplet-token)" | jq | less
    droplet_service_size: "s-1vcpu-1gb"
    droplet_service_region: "fra1"
    ## curl -X GET --silent "https://api.digitalocean.com/v2/sizes?per_page=999" -H "Authorization: Bearer $(pass work/dtu/python-support/digitalocean-droplet-token)" | jq | less

    droplet_storage_image: "debian-12-x64"
    droplet_storage_size: "s-1vcpu-1gb"
    droplet_storage_region: "fra1"

  tasks:
    ####################
    # - Prepare SSH Information
    ####################
    - name: "Get SSH Public Key"
      shell: "ssh-add -L"
      register: "ssh_key_pub_cmdout"

    - name: "Add SSH Public Key to DigitalOcean account"
      digital_ocean_sshkey:
        name: "key"
        oauth_token: "{{ digitalocean_droplet_token }}"
        ssh_pub_key: "{{ ssh_key_pub_cmdout.stdout }}"
        state: "present"
      register: "sshkey_result"

    ####################
    # - Create Digitalocean Nodes
    ####################
    - name: "Create Storage Droplet"
      digital_ocean_droplet:
        name: "{{ item }}"
        oauth_token: "{{ digitalocean_droplet_token }}"
        ssh_keys: ["{{ sshkey_result.data.ssh_key.id }}"]

        image: "{{ droplet_storage_image }}"
        size: "{{ droplet_storage_size }}"
        region: "{{ droplet_storage_region }}"

        wait_timeout: 600
        unique_name: "yes"

        state: present
      with_inventory_hostnames:
        - storage
      register: droplet_storage_result

    - name: "Create Service Droplet"
      digital_ocean_droplet:
        name: "{{ item }}"
        oauth_token: "{{ digitalocean_droplet_token }}"
        ssh_keys: ["{{ sshkey_result.data.ssh_key.id }}"]

        image: "{{ droplet_service_image }}"
        size: "{{ droplet_service_size }}"
        region: "{{ droplet_service_region }}"

        wait_timeout: 600
        unique_name: "yes"

        state: present
      with_inventory_hostnames:
        - service
      register: droplet_service_result

    ####################
    # - Set DNS A Records => Hosts
    ####################
    - name: "Set Storage DNS A => *.node.{{ dns_root }}"
      cloudflare_dns:
        api_token: "{{ cloudflare_dns_token }}"

        zone: "{{ dns_root }}"
        type: "A"

        record: "{{ item.data.droplet.name }}"
        value: "{{ item.data.ip_address }}"
      with_items: "{{ droplet_storage_result.results }}"

    - name: "Set Service DNS A => *.node.{{ dns_root }}"
      cloudflare_dns:
        api_token: "{{ cloudflare_dns_token }}"

        zone: "{{ dns_root }}"
        type: "A"

        record: "{{ item.data.droplet.name }}"
        value: "{{ item.data.ip_address }}"
      with_items: "{{ droplet_service_result.results }}"

    ####################
    # - Set DNS CNAME Record => @
    ####################
    - name: "Set DNS CNAME => Primary Node"
      cloudflare_dns:
        api_token: "{{ cloudflare_dns_token }}"

        zone: "{{ dns_root }}"
        type: "CNAME"

        record: "@"
        value: "{{ node_primary }}"
      ## Cloudflare allows CNAME on @ via CNAME-flattening

    ####################
    # - Set DNS CNAME Records => Stacks
    ####################
    - name: "Set DNS CNAME => Stack: auth"
      cloudflare_dns:
        api_token: "{{ cloudflare_dns_token }}"

        zone: "{{ dns_root }}"
        type: "CNAME"

        record: "auth"
        value: "@"

    - name: "Set DNS CNAME => Stack: site-support"
      cloudflare_dns:
        api_token: "{{ cloudflare_dns_token }}"

        zone: "{{ dns_root }}"
        type: "CNAME"

        record: "pysupport"
        value: "@"
@ -1,132 +0,0 @@
- hosts: swarm
  become: "true"
  tasks:
    ####################
    # - Tuning - Traefik
    # -- Traefik serving QUIC can be bottlenecked by a too-low UDP buffer.
    # -- This increases both send & receive from ~200KB to 2.5MB.
    ####################
    - name: "Set net.core.rmem_max = 2500000"
      sysctl:
        state: "present"
        name: "net.core.rmem_max"
        value: "2500000"
        reload: "yes"

    - name: "Set net.core.wmem_max = 2500000"
      sysctl:
        state: "present"
        name: "net.core.wmem_max"
        value: "2500000"
        reload: "yes"

    ####################
    # - Docker - Install
    ####################
    - name: "Download Docker Apt Key"
      ansible.builtin.get_url:
        url: "https://download.docker.com/linux/debian/gpg"
        dest: "/etc/apt/trusted.gpg.d/docker.asc"
        checksum: "sha256:1500c1f56fa9e26b9b8f42452a553675796ade0807cdce11975eb98170b3a570"
        owner: "root"
        group: "root"
        mode: "644"

    - name: "Add Docker Apt Repository"
      apt_repository:
        state: "present"
        repo: "deb https://download.docker.com/linux/debian bullseye stable"
        filename: "docker"

    - name: "Install Docker CE"
      apt:
        state: "present"
        name: "docker-ce"

    - name: "Install python3-docker"
      apt:
        state: "present"
        name: "python3-docker"

    ####################
    # - Docker Plugin - rclone
    ####################
    - name: "Install fuse"
      apt:
        state: "present"
        name: "fuse"

    - name: "Create rclone Config Path"
      ansible.builtin.file:
        path: "/var/lib/docker-plugins/rclone/config"
        state: directory
        mode: "0750"

    - name: "Create rclone Cache Path"
      ansible.builtin.file:
        path: "/var/lib/docker-plugins/rclone/cache"
        state: directory
        mode: "0750"

    # - name: "Disable the rclone Docker Plugin"
    #   community.docker.docker_plugin:
    #     state: "disable"
    #     alias: "rclone"
    #     plugin_name: "rclone/docker-volume-rclone:amd64"

    - name: "Install rclone Docker Plugin"
      community.docker.docker_plugin:
        state: "present"
        alias: "rclone"
        plugin_name: "rclone/docker-volume-rclone:amd64"
        plugin_options:
          args: "-v --allow-other"

    - name: "Enable the rclone Docker Plugin"
      community.docker.docker_plugin:
        state: "enable"
        alias: "rclone"
        plugin_name: "rclone/docker-volume-rclone:amd64"
        plugin_options:
          args: "-v --allow-other"

####################
# - Docker - Swarm Init
####################
- hosts: leader
  become: "true"
  tasks:
    - name: "Initialize Docker Swarm Leader"
      community.docker.docker_swarm:
        state: "present"
        advertise_addr: "{{ wg0_ip }}"
        listen_addr: "{{ wg0_ip }}:2377"

    - name: "Collect Swarm Info"
      community.docker.docker_swarm_info:
      register: swarm_info

    - name: "Retrieve Join Tokens"
      set_fact:
        swarm_manager_token: "{{ swarm_info.swarm_facts['JoinTokens']['Manager'] }}"
        swarm_worker_token: "{{ swarm_info.swarm_facts['JoinTokens']['Worker'] }}"

    - name: "Install jsondiff & pyyaml (stack-deploy deps)"
      apt:
        state: "present"
        name:
          - "python3-jsondiff"
          - "python3-yaml"

# SKIP Manager
# - Currently, there is only one manager == leader. So there's no point.

- hosts: worker
  become: "true"
  tasks:
    - name: "Initialize Docker Swarm Workers"
      community.docker.docker_swarm:
        state: "join"
        advertise_addr: "{{ wg0_ip }}"
        join_token: "{{ hostvars[groups['leader'][0]]['swarm_worker_token'] }}"
        remote_addrs: [ "{{ hostvars[groups['leader'][0]]['wg0_ip'] }}:2377" ]
@ -1,10 +0,0 @@
- hosts: swarm
  become: "true"
  tasks:
    ####################
    # - Tuning - Dev
    ####################
    - name: "Install Terminfo for Kitty"
      ansible.builtin.apt:
        state: "present"
        name: "kitty-terminfo"
@ -1,43 +0,0 @@
- hosts: wg0
  become: "true"
  tasks:
    ####################
    # - Wireguard
    ####################
    - name: "Install Wireguard Tools"
      ansible.builtin.apt:
        state: "present"
        name: "wireguard"

    - name: "systemd-networkd: Install wg0 Device"
      template:
        src: "./templates/99-wg0.netdev"
        dest: "/etc/systemd/network/99-wg0.netdev"
        owner: "root"
        group: "systemd-network"
        mode: "0640"

    - name: "systemd-networkd: Install wg0 Network"
      template:
        src: "./templates/99-wg0.network"
        dest: "/etc/systemd/network/99-wg0.network"
        owner: "root"
        group: "systemd-network"
        mode: "0640"

    - name: "Restart systemd-networkd"
      systemd:
        name: "systemd-networkd.service"
        state: "restarted"

    ####################
    # - Wireguard - Enable Packet Forwarding
    ####################
    - name: "Set net.ipv4.ip_forward = 1"
      sysctl:
        state: "present"
        name: "net.ipv4.ip_forward"
        value: "1"
        reload: "yes"
@ -1,19 +0,0 @@
[NetDev]
Name=wg0
Kind=wireguard
Description=WireGuard tunnel wg0

[WireGuard]
ListenPort=51871
PrivateKey={{ wg_private_key }}

{% for item in groups['wg0'] %}
{% if item != inventory_hostname %}
[WireGuardPeer]
PublicKey={{ hostvars[item]['wg_public_key'] }}
PresharedKey={{ hostvars[item]['wg_psk_' ~ inventory_hostname] }}
AllowedIPs={{ hostvars[item]['wg0_ip'] }}/32
Endpoint={{ item }}:51871

{% endif %}
{% endfor %}
@ -0,0 +1,19 @@
ansible==8.3.0
ansible-core==2.15.3
boto3==1.28.30
botocore==1.31.30
cffi==1.15.1
cryptography==41.0.3
dnspython==2.4.2
importlib-resources==5.0.7
Jinja2==3.1.2
jmespath==1.0.1
MarkupSafe==2.1.3
packaging==23.1
pycparser==2.21
python-dateutil==2.8.2
PyYAML==6.0.1
resolvelib==1.0.1
s3transfer==0.6.2
six==1.16.0
urllib3==1.26.16
@ -0,0 +1,27 @@
####################
# - Docker - Install
####################
- name: "Download Docker Apt Key"
  ansible.builtin.get_url:
    url: "https://download.docker.com/linux/debian/gpg"
    dest: "/etc/apt/trusted.gpg.d/docker.asc"
    checksum: "sha256:1500c1f56fa9e26b9b8f42452a553675796ade0807cdce11975eb98170b3a570"
    owner: "root"
    group: "root"
    mode: "644"

- name: "Add Docker Apt Repository"
  apt_repository:
    state: "present"
    repo: "deb https://download.docker.com/linux/debian bookworm stable"
    filename: "docker"

- name: "Install Docker CE"
  apt:
    state: "present"
    name: "docker-ce"

- name: "Install python3-docker"
  apt:
    state: "present"
    name: "python3-docker"
@ -0,0 +1,32 @@
####################
# - Check Variables
####################
- name: "[Host] Check that mandatory variables are defined"
  assert:
    that:
      - "wg0_ip is defined"

####################
# - Initialize Leader
####################
- name: "Initialize Docker Swarm Leader"
  community.docker.docker_swarm:
    state: "present"
    advertise_addr: "{{ wg0_ip }}"
    listen_addr: "{{ wg0_ip }}:2377"

- name: "Collect Swarm Info"
  community.docker.docker_swarm_info:
  register: swarm_info

- name: "Retrieve Join Tokens"
  set_fact:
    swarm_manager_token: "{{ swarm_info.swarm_facts['JoinTokens']['Manager'] }}"
    swarm_worker_token: "{{ swarm_info.swarm_facts['JoinTokens']['Worker'] }}"

- name: "Install jsondiff & pyyaml (stack-deploy deps)"
  apt:
    state: "present"
    name:
      - "python3-jsondiff"
      - "python3-yaml"
@ -0,0 +1,23 @@
####################
# - Check Variables
####################
- name: "[Play] Check that mandatory variables are defined"
  assert:
    that:
      - "host_swarm_leader is defined"

- name: "[Host][host_swarm_leader] Check that mandatory variables are defined"
  assert:
    that:
      - "'swarm_worker_token' in hostvars[host_swarm_leader]"
      - "'wg0_ip' in hostvars[host_swarm_leader]"

####################
# - Initialize Workers
####################
- name: "Initialize Docker Swarm Workers"
  community.docker.docker_swarm:
    state: "join"
    advertise_addr: "{{ wg0_ip }}"
    join_token: "{{ hostvars[host_swarm_leader]['swarm_worker_token'] }}"
    remote_addrs: ["{{ hostvars[host_swarm_leader]['wg0_ip'] }}:2377"]
@ -0,0 +1,5 @@
- name: "Restart systemd-networkd"
  systemd:
    name: "systemd-networkd.service"
    state: "restarted"
  listen: "restart systemd-networkd"
@ -0,0 +1,67 @@
####################
# - Check Variables
####################
- name: "[Play] Check Variables"
  assert:
    that:
      - "hosts_wg0 is defined"

- name: "[Host][localhost] Check Variables"
  assert:
    that:
      - "hostvars['localhost'].nodes_to_ipv4s_private is defined"

- name: "[Host] Check Variables"
  assert:
    that:
      - "wg0_private_key is defined"
      - "wg0_public_key is defined"
      - "wg0_ip is defined"
  with_items: "{{ hosts_wg0 }}"

- name: "[Special][Inter-Host PSKs] Check Variables"
  assert:
    that:
      - "('wg0_psk_' ~ item) in vars"
  with_items: "{{ hosts_wg0 }}"
  when: "item != inventory_hostname"

####################
# - Wireguard
####################
- name: "Install Wireguard Tools"
  ansible.builtin.apt:
    state: "present"
    name: "wireguard"

- name: "systemd-networkd: Install wg0 Device"
  template:
    src: "{{ role_path }}/templates/99-wg0.netdev"
    dest: "/etc/systemd/network/99-wg0.netdev"
    owner: "root"
    group: "systemd-network"
    mode: "0640"
  notify: "restart systemd-networkd"

- name: "systemd-networkd: Install wg0 Network"
  template:
    src: "{{ role_path }}/templates/99-wg0.network"
    dest: "/etc/systemd/network/99-wg0.network"
    owner: "root"
    group: "systemd-network"
    mode: "0640"
  notify: "restart systemd-networkd"

####################
# - Wireguard - Enable Packet Forwarding
####################
- name: "Set net.ipv4.ip_forward = 1"
  sysctl:
    state: "present"
    name: "net.ipv4.ip_forward"
    value: "1"
    reload: "yes"
  notify: "restart systemd-networkd"

- name: "Run Notified Handlers"
  meta: "flush_handlers"
@ -0,0 +1,19 @@
[NetDev]
Name=wg0
Kind=wireguard
Description=WireGuard tunnel wg0

[WireGuard]
ListenPort=51871
PrivateKey={{ wg0_private_key }}

{% for item in hosts_wg0 %}
{% if item != inventory_hostname %}
[WireGuardPeer]
PublicKey={{ hostvars[item].wg0_public_key }}
PresharedKey={{ hostvars[item]['wg0_psk_' ~ inventory_hostname] }}
AllowedIPs={{ hostvars[item].wg0_ip }}/32
Endpoint={{ hostvars['localhost'].nodes_to_ipv4s_private[item] }}:51871

{% endif %}
{% endfor %}
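Rendered for one remote host, the loop above emits a `[WireGuardPeer]` block per other member of `hosts_wg0`; a sketch of the output for a hypothetical peer `node1` whose DigitalOcean-private IP is 10.114.0.2 (all values placeholders):

```ini
[WireGuardPeer]
PublicKey=<wg0_public_key of node1>
PresharedKey=<node1's wg0_psk_<this host>>
AllowedIPs=<wg0_ip of node1>/32
Endpoint=10.114.0.2:51871
```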
@ -0,0 +1,4 @@
- name: "Install Terminfo for Kitty"
  ansible.builtin.apt:
    state: "present"
    name: "kitty-terminfo"
@ -0,0 +1 @@
dns_root: "timesigned.com"
@ -0,0 +1,55 @@
####################
# - Check Variables
####################
- name: "[Play] Check Variables"
  assert:
    that:
      - "nodes_to_ipv4s_public is defined"
      - "ipv4_root is defined"

####################
# - Set DNS A Records => Hosts
####################
- name: "Set Node DNS A => *.node.{{ dns_root }}"
  cloudflare_dns:
    api_token: "{{ cloudflare_dns_token }}"

    zone: "{{ dns_root }}"
    type: "A"
    solo: true

    record: "{{ item.key }}"
    value: "{{ item.value }}"
  with_dict: "{{ nodes_to_ipv4s_public }}"

####################
# - Set DNS A Record => @
####################
- name: "Set DNS A => Primary Node"
  cloudflare_dns:
    api_token: "{{ cloudflare_dns_token }}"

    zone: "{{ dns_root }}"
    type: "A"
    solo: true

    record: "@"
    value: "{{ ipv4_root }}"

- name: "Wait for Node DNS Propagation"
  debug:
    msg: "Waiting..."
  until: "lookup(
      'community.general.dig',
      item.key ~ '.' ~ dns_root
    ) == item.value"
  retries: 30
  delay: 10
  with_dict: "{{ nodes_to_ipv4s_public }}"

- name: "Wait for Primary DNS Propagation"
  debug:
    msg: "Waiting..."
  until: "lookup('community.general.dig', dns_root) == ipv4_root"
  retries: 30
  delay: 10
@ -0,0 +1,4 @@
cloudflare_dns_token: "{{ lookup(
    'community.general.passwordstore',
    'cloudflare/dns-token'
  ) }}"
@ -0,0 +1,2 @@
## Sets
- `nodes_to_ipv4s_public |=` and `nodes_to_ipv4s_private |=` on `localhost`: DigitalOcean Droplet IPs, indexed by inventory hostname.
@ -0,0 +1,11 @@
droplet_image: "debian-12-x64"
droplet_size: "s-1vcpu-1gb"
droplet_region: "fra1"

ssh_key_pub: "{{ lookup('ansible.builtin.pipe', 'ssh-add -L') }}"

## Get Image
## curl -X GET --silent "https://api.digitalocean.com/v2/images?per_page=999" -H "Authorization: Bearer <token>" | jq | less

## Get Sizes
## curl -X GET --silent "https://api.digitalocean.com/v2/sizes?per_page=999" -H "Authorization: Bearer <token>" | jq | less
@ -0,0 +1,81 @@
####################
# - Check Variables
####################
- name: "[Play] Check Variables"
  assert:
    that:
      - "hosts_do is defined"
      - "do_project is defined"
      - "do_project_purpose is defined"

####################
# - Prepare SSH Information
####################
- name: "Add SSH Public Key to DO Account"
  digital_ocean_sshkey:
    state: "present"

    name: "{{ ssh_key_pub.split(' ')[-1] }}"
    oauth_token: "{{ digitalocean_droplet_token }}"
    ssh_pub_key: "{{ ssh_key_pub }}"
  register: "do_sshkey_result"

####################
# - Create DigitalOcean Project
####################
- name: "Create DO Project: {{ do_project }}"
  run_once: true
  community.digitalocean.digital_ocean_project:
    state: "present"

    name: "{{ do_project }}"
    oauth_token: "{{ digitalocean_droplet_token }}"
    purpose: "{{ do_project_purpose }}"

####################
# - Create DigitalOcean Nodes
####################
- name: "Create Droplets"
  digital_ocean_droplet:
    state: "present"

    name: "{{ item }}"
    oauth_token: "{{ digitalocean_droplet_token }}"
    ssh_keys: ["{{ do_sshkey_result.data.ssh_key.id }}"]

    image: "{{ droplet_image }}"
    size: "{{ droplet_size }}"
    region: "{{ droplet_region }}"

    project: "{{ do_project }}"
    wait_timeout: 600
    unique_name: "yes"
  with_items: "{{ hosts_do }}"
  register: "droplet_result"

- name: "Register Droplet IPs"
  set_fact:
    nodes_to_ipv4s_public: "{{
        nodes_to_ipv4s_public
        | default({})
        | combine({
            item.data.droplet.name: (
              item.data.droplet.networks.v4
              | selectattr('type', 'eq', 'public')
              | first
            ).ip_address,
          })
      }}"
    nodes_to_ipv4s_private: "{{
        nodes_to_ipv4s_private
        | default({})
        | combine({
            item.data.droplet.name: (
              item.data.droplet.networks.v4
              | selectattr('type', 'eq', 'private')
              | first
            ).ip_address,
          })
      }}"
  with_items: "{{ droplet_result.results }}"
@ -0,0 +1,4 @@
digitalocean_droplet_token: "{{ lookup(
    'community.general.passwordstore',
    'digitalocean/droplet-token'
  ) }}"
@ -0,0 +1,3 @@
stack_dir: "{{ playbook_dir }}"
stack_config_paths: "{{ lookup('fileglob', stack_dir ~ '/configs/*').split(',') }}"
stack_configs: "{{ stack_config_paths | map('basename') | list }}"
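A minimal Python sketch (paths hypothetical) of what the last two defaults compute: Ansible's `fileglob` lookup returns its matches as one comma-separated string, which is split back into paths and reduced to base file names:

```python
import os

def stack_configs(fileglob_result: str) -> list:
    # `fileglob` joins every match with commas; split it apart and
    # keep only the file names, mirroring `map('basename') | list`.
    return [os.path.basename(p) for p in fileglob_result.split(",")]

# Hypothetical lookup result for a stack with two config templates:
print(stack_configs("/srv/stacks/mesh/configs/a.toml,/srv/stacks/mesh/configs/b.toml"))
# → ['a.toml', 'b.toml']
```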
@ -0,0 +1,34 @@
####################
# - Check Variables
####################
- name: "[Play] Check that mandatory variables are defined"
  assert:
    that:
      - "stack_name is defined"

####################
# - Deploy Configs
####################
- name: "Stop Stack: {{ stack_name }}"
  run_once: true
  community.docker.docker_stack:
    state: "absent"

    name: "{{ stack_name }}"
    absent_retries: 15

- name: "Wait for Stack to Stop"
  run_once: true
  shell: "until [ -z $(docker stack ps {{ stack_name }} -q) ]; do sleep 1; done"

- name: "Wait for Stack Networks to Stop"
  run_once: true
  shell: "until [ -z $(docker network ls --filter 'label=com.docker.stack.namespace={{ stack_name }}' -q) ]; do sleep 1; done"

- name: "Create Docker Configs"
  community.docker.docker_config:
    state: "present"
    name: "{{ item }}"
    data: "{{ lookup('template', stack_dir ~ '/configs/' ~ item) | b64encode }}"
    data_is_b64: "true"
  with_items: "{{ stack_configs }}"
@ -0,0 +1,28 @@
####################
# - Check Variables
####################
- name: "[Play] Check that mandatory variables are defined"
  assert:
    that:
      - "domain is defined"
      - "domain_to is defined"

####################
# - Set DNS CNAME Record
####################
- name: "Set DNS CNAME {{ domain }} => {{ domain_to }}"
  cloudflare_dns:
    api_token: "{{ cloudflare_dns_token }}"

    zone: "{{ dns_root }}"
    type: "CNAME"

    record: "{{ domain }}"
    value: "{{ domain_to }}"

- name: "Wait for DNS Propagation"
  debug:
    msg: "Waiting..."
  until: "lookup('community.general.dig', domain) == lookup('community.general.dig', domain_to)"
  retries: 30
  delay: 10
@ -0,0 +1,4 @@
cloudflare_dns_token: "{{ lookup(
    'community.general.passwordstore',
    'cloudflare/dns-token'
  ) }}"
@ -0,0 +1,29 @@
####################
# - Check Variables
####################
- name: "[Play] Check that mandatory variables are defined"
  assert:
    that:
      - "network_name is defined"

####################
# - Network Creation
####################
- name: "Retrieve {{ network_name }} Info"
  community.docker.docker_network_info:
    name: "{{ network_name }}"
  register: result

- name: "Create {{ network_name }}"
  run_once: true
  community.docker.docker_network:
    state: "present"

    name: "{{ network_name }}"
    driver: "overlay"
    scope: "global"

    attachable: true
    appends: true
  when: "not result.exists"
@ -0,0 +1 @@
stack_dir: "{{ playbook_dir }}"
@ -0,0 +1,5 @@
- name: "Redeploy Stack"
  systemd:
    name: "systemd-networkd.service"
    state: "restarted"
  listen: "restart systemd-networkd"
@ -0,0 +1,18 @@
####################
# - Check Variables
####################
- name: "[Play] Check that mandatory variables are defined"
  assert:
    that:
      - "stack_name is defined"

####################
# - Stack Deployment
####################
- name: "Deploy Stack: {{ stack_name }}"
  community.docker.docker_stack:
    state: "present"
    prune: "true"
    name: "{{ stack_name }}"
    compose:
      - "{{ lookup('file', stack_dir ~ '/docker-compose.yml') | from_yaml }}"
@ -0,0 +1,65 @@
# S3 Master Credentials
cloudflare_account_id: "{{ lookup(
    'community.general.passwordstore',
    'cloudflare/account-id'
  ) }}"
s3_master_access_key_id: "{{ lookup(
    'community.general.passwordstore',
    'cloudflare/r2/s3_access_key_id'
  ) }}"
s3_master_secret_access_key: "{{ lookup(
    'community.general.passwordstore',
    'cloudflare/r2/s3_secret_access_key'
  ) }}"
s3_master_endpoint: "https://{{ cloudflare_account_id }}.r2.cloudflarestorage.com"

# S3 Bucket Info
s3_bucket_name: "{{ volume_name | replace('_', '-') }}"
s3_access_key_id: "{{ lookup(
    'community.general.passwordstore',
    'volumes/' ~ volume_name ~ '/s3_access_key_id'
  ) }}"
s3_secret_access_key: "{{ lookup(
    'community.general.passwordstore',
    'volumes/' ~ volume_name ~ '/s3_secret_access_key'
  ) }}"
s3_endpoint: "{{ s3_master_endpoint }}/{{ s3_bucket_name }}"
s3_acl: "private"

# Volume Dirs / Files
dir_volume_base: "/data/volumes/{{ volume_name }}"
dir_volume_cache: "{{ dir_volume_base }}/cache"
dir_volume_mount: "{{ dir_volume_base }}/data"
file_rclone_config: "{{ dir_volume_base }}/rclone.conf"

# rclone Encryption Options
rclone_enckey_1: "{{ lookup(
    'community.general.passwordstore',
    'volumes/' ~ volume_name ~ '/rclone_enckey_1'
  ) }}"
rclone_enckey_2: "{{ lookup(
    'community.general.passwordstore',
    'volumes/' ~ volume_name ~ '/rclone_enckey_2'
  ) }}"

# rclone Config/Permissions
perms_uid: "0"
perms_gid: "0"
perms_dir: "0777"
perms_files: "0666"
perms_umask: "2"
vfs_cache_mode: "full"

rclone_mount_opts: "{{
    '--config ' ~ file_rclone_config
    ~ ' --cache-dir ' ~ dir_volume_cache
    ~ ' --default-permissions'
    ~ ' --allow-other'
    ~ ' --uid ' ~ perms_uid
    ~ ' --gid ' ~ perms_gid
    ~ ' --dir-perms ' ~ perms_dir
    ~ ' --file-perms ' ~ perms_files
    ~ ' --umask ' ~ perms_umask
    ~ ' --vfs-cache-mode ' ~ vfs_cache_mode
  }}"
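The `s3_bucket_name` default above only swaps underscores for hyphens, since R2/S3 bucket names may not contain `_`; the Jinja filter is equivalent to this one-liner:

```python
def s3_bucket_name(volume_name: str) -> str:
    # Equivalent of the Jinja expression: volume_name | replace('_', '-')
    return volume_name.replace("_", "-")

print(s3_bucket_name("mesh__traefik_certs"))
# → mesh--traefik-certs
```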
@ -0,0 +1,64 @@
# Install rclone
- name: "Install rclone & fuse"
  run_once: true
  apt:
    state: "present"
    name:
      - "rclone"
      - "fuse"

# Create S3 Bucket
- name: "Create S3 Bucket"
  local_action:
    module: "amazon.aws.s3_bucket"
    state: "present"

    name: "{{ s3_bucket_name }}"

    access_key: "{{ s3_master_access_key_id }}"
    secret_key: "{{ s3_master_secret_access_key }}"
    endpoint_url: "{{ s3_master_endpoint }}"
    #s3_url: "{{ s3_master_endpoint }}"

# Create Volume Directories
- name: "Create S3-Backed Volume Base Directory"
  file:
    state: "directory"
    path: "{{ dir_volume_base }}"
    mode: "0700"

- name: "Create S3-Backed Volume Cache Directory"
  file:
    state: "directory"
    path: "{{ dir_volume_cache }}"
    mode: "0700"

- name: "Create S3-Backed Volume Data Directory"
  file:
    state: "directory"
    path: "{{ dir_volume_mount }}"
    mode: "0700"

# Install Volume-Mount Service
- name: "Install rclone.conf"
  template:
    src: "{{ role_path }}/templates/rclone.conf"
    dest: "{{ file_rclone_config }}"
    owner: "root"
    group: "root"
    mode: "0600"

- name: "Install rclone-{{ volume_name }}.service"
  template:
    src: "{{ role_path }}/templates/rclone.service"
    dest: "/etc/systemd/system/rclone-{{ volume_name }}.service"
    owner: "root"
    group: "root"
    mode: "0600"

- name: "Start rclone-{{ volume_name }}.service"
  systemd:
    state: "started"
    enabled: true
    name: "rclone-{{ volume_name }}"
    daemon_reload: "yes"
@ -0,0 +1,15 @@
[{{ volume_name }}-insecure]
type = s3
provider = Other
env_auth = false
access_key_id = {{ s3_access_key_id }}
secret_access_key = {{ s3_secret_access_key }}
region = auto
endpoint = {{ s3_endpoint }}
acl = {{ s3_acl }}

[{{ volume_name }}]
type = crypt
remote = {{ volume_name }}-insecure:{{ s3_bucket_name }}
password = {{ rclone_enckey_1 }}
password2 = {{ rclone_enckey_2 }}
@ -0,0 +1,16 @@
[Unit]
Description=rclone_s3 - {{ volume_name }}
AssertPathIsDirectory={{ dir_volume_mount }}
After=network.target

[Service]
Type=simple

ExecStart=/usr/bin/rclone mount {{ rclone_mount_opts }} {{ volume_name }}: {{ dir_volume_mount }}
ExecStop=/usr/bin/fusermount -zu {{ dir_volume_mount }}

Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target
run.sh
@ -7,17 +7,8 @@ SCRIPT_PATH="$(dirname "$(readlink -f "$0")")"
 ####################
 # - Constants
 ####################
-PLAYBOOKS_PATH="$SCRIPT_PATH/playbooks"
-
 INVENTORY="$SCRIPT_PATH/inventory.yml"
-PLAYBOOK_HOSTS="$PLAYBOOKS_PATH/playbook.hosts.yml"
-PLAYBOOK_WG0="$PLAYBOOKS_PATH/playbook.wg0.yml"
-PLAYBOOK_SWARM="$PLAYBOOKS_PATH/playbook.swarm.yml"
-
-PLAYBOOK_STACK_CLEANUP="$SCRIPT_PATH/stacks/cleanup/playbook.yml"
-PLAYBOOK_STACK_MESH="$SCRIPT_PATH/stacks/mesh/playbook.yml"
-PLAYBOOK_STACK_SITE_SUPPORT="$SCRIPT_PATH/stacks/site-support/playbook.yml"
+PLAYBOOK="$SCRIPT_PATH/playbook.yml"
 
 help() {
 	less -R << EOF
@ -25,6 +16,10 @@ This script manages the deployment using ansible.
 
 Usage:
 	./run.sh [COMMAND]
+
+Commands:
+	sync [TAGS]
+		- Specify comma-separated TAGS to restrict execution to particular stages/stacks.
 EOF
 }
@ -68,47 +63,32 @@ case $(cat /etc/debian_version | cut -d . -f 1) in
 		;;
 esac
 
-if [[ $(cmd_exists ansible) != true ]]; then
-	echo "This script requires ansible. Press ENTER to install and continue..."
-	sudo apt install ansible
+if [ ! -d "$SCRIPT_PATH/.venv" ]; then
+	python3 -m venv .venv
+fi
+
+. .venv/bin/activate
+
+if [[ $(cmd_exists ansible) != true ]]; then
+	pip install -r "$SCRIPT_PATH/requirements.txt"
 
-	echo "This script requires latest community.docker module. Press ENTER to install and continue..."
 	ansible-galaxy collection install community.docker
+	ansible-galaxy collection install community.digitalocean
 fi
 
 ####################
 # - Actions
 ####################
-action_hosts() {
+action_sync() {
 	ansible-playbook \
 		--inventory "$INVENTORY" \
-		"$PLAYBOOK_HOSTS"
+		"$PLAYBOOK"
 }
-action_wg0() {
+action_sync_tags() {
 	ansible-playbook \
 		--inventory "$INVENTORY" \
-		"$PLAYBOOK_WG0"
-}
-action_swarm() {
-	ansible-playbook \
-		--inventory "$INVENTORY" \
-		"$PLAYBOOK_SWARM"
-}
-
-action_stack_cleanup() {
-	ansible-playbook \
-		--inventory "$INVENTORY" \
-		"$PLAYBOOK_STACK_CLEANUP"
-}
-action_stack_mesh() {
-	ansible-playbook \
-		--inventory "$INVENTORY" \
-		"$PLAYBOOK_STACK_MESH"
-}
-action_stack_site_support() {
-	ansible-playbook \
-		--inventory "$INVENTORY" \
-		"$PLAYBOOK_STACK_SITE_SUPPORT"
-}
+		"$PLAYBOOK" \
+		--tags "$1"
+}
@ -116,45 +96,10 @@ action_stack_site_support() {
 ####################
 case $1 in
 	sync)
-		action_hosts
-		action_wg0
-		action_swarm
-
-		action_stack_cleanup
-		action_stack_mesh
-		action_stack_site_support
+		if [ -z "${2-}" ]; then
+			action_sync
+		else
+			action_sync_tags "$2"
+		fi
 		;;
-
-	sync-hosts)
-		action_hosts
-		;;
-	sync-wg0)
-		action_wg0
-		;;
-	sync-swarm)
-		action_swarm
-		;;
-
-	sync-stacks)
-		action_stack_cleanup
-		action_stack_mesh
-		action_stack_site_support
-		;;
-	sync-stack-cleanup)
-		action_stack_cleanup
-		;;
-	sync-stack-mesh)
-		action_stack_mesh
-		;;
-	sync-stack-site-support)
-		action_stack_site_support
-		;;
-
-	# sync-role)
-	# 	ansible-playbook \
-	# 		--inventory "$INVENTORY" \
-	# 		--tags "$2" \
-	# 		"$PLAYBOOK"
-	# 	;;
-
 esac
@ -1,6 +0,0 @@
-auth
-chat
-git
-s3
-updater
-uptime
@ -1,28 +1,13 @@
-- hosts: leader
-  become: "true"
-  vars:
-    stack_name: "cleanup"
-  tasks:
-    ####################
-    # - Stack Deployment
-    ####################
-    - name: "Upload Stack to /tmp"
-      template:
-        src: "./docker-compose.yml"
-        dest: "/tmp/{{ stack_name }}.yml"
-        owner: "root"
-        group: "root"
-        mode: "0640"
-
-    - name: "Deploy Stack: {{ stack_name }}"
-      community.docker.docker_stack:
-        state: "present"
-        prune: "true"
-        name: "{{ stack_name }}"
-        compose:
-          - "/tmp/{{ stack_name }}.yml"
-
-    - name: "Delete /tmp Stack"
-      ansible.builtin.file:
-        path: "/tmp/{{ stack_name }}.yml"
-        state: "absent"
+####################
+# - Deploy Stack: cleanup
+####################
+- name: "Deploy Stack: cleanup"
+  hosts: "swarm_leader"
+  tags:
+    - "stage_stack"
+    - "stage_stack_cleanup"
+  vars:
+    stack_name: "cleanup"
+  roles:
+    - role: "stack/deploy_stack"
@ -0,0 +1,10 @@
[http.routers.site-support__site-support]
rule = "Host(`pysupport.timesigned.com`)"
entryPoints = ["websecure", "web"]
service = "site-support__site-support"

[[http.services.site-support__site-support.loadBalancer.servers]]
url = "http://site-support.site-support:8787"

[http.routers.site-support__site-support.tls]
certResolver = "letsencrypt"
@ -5,17 +5,15 @@
 checkNewVersion = false
 sendAnonymousUsage = false
 
-[experimental]
-http3 = true
-
 [api]
 dashboard = false
 insecure = false
 debug = false
-disabledashboardad = true
 
 [log]
-level = "DEBUG"
+level = "INFO"
+
+[accessLog]
 
@ -26,7 +24,7 @@ level = "DEBUG"
 [certificatesResolvers.letsencrypt.acme]
 email = "{{ email_letsencrypt }}"
 storage = "/data-certs/acme.json"
-#caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
+caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
 
 [certificatesResolvers.letsencrypt.acme.tlsChallenge]
@ -41,7 +39,6 @@ storage = "/data-certs/acme.json"
 
 [entryPoints.websecure]
 address = ":443"
-http3.advertisedPort = 443
 
 
 [entryPoints.web]
@ -60,4 +57,3 @@ permanent = true
 [providers.file]
 directory = "/data-providers"
 watch = false
-debugLogGeneratedTemplate = true
@ -21,7 +21,7 @@ services:
       uid: "5000"
       gid: "5000"
 
-      - source: mesh__stack_site-support.toml
+      - source: mesh__site-support__service.toml
         target: /data-providers/site-support.toml
         uid: "5000"
         gid: "5000"
@ -30,7 +30,7 @@ services:
       - /etc/localtime:/etc/localtime:ro
       - /etc/timezone:/etc/timezone:ro
 
-      - mesh__traefik_certs:/data-certs
+      - /data/volumes/mesh__traefik_certs/data:/data-certs:shared
 
     ports:
       ## HTTP
@ -65,7 +65,9 @@ services:
           - node.role == manager
 
     networks:
-      - mesh_public
+      mesh_public:
+        aliases:
+          - "traefik.mesh"
 
 ####################
 # - Resources
@ -77,11 +79,7 @@ configs:
     external: true
   mesh__traefik_default_middlewares.toml:
     external: true
-  mesh__stack_site-support.toml:
+  mesh__site-support__service.toml:
     external: true
 
-volumes:
-  mesh__traefik_certs:
-    external: true
-
 networks:
@@ -1,120 +1,25 @@
-####################
-# - Stop the Stack
-####################
-- hosts: leader
-  become: "true"
-  vars:
-    stack_name: "mesh"
-  tasks:
-    - name: "Stop Stack: {{ stack_name }}"
-      community.docker.docker_stack:
-        state: "absent"
-        absent_retries: 15
-        name: "{{ stack_name }}"
-
-    - name: "Pause to Let Stack Stop"
-      pause:
-        seconds: 5
-
-
-####################
-# - Volume Creation
-####################
-- hosts: swarm
-  become: "true"
-  vars:
-    cloudflare_b0__access_key_id: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/r2/mesh__traefik_certs/access_key_id') }}"
-    cloudflare_b0__secret_access_key: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/r2/mesh__traefik_certs/secret_access_key') }}"
-    cloudflare_b0__endpoint: "{{ lookup('community.general.passwordstore', 'work/dtu/python-support/r2/mesh__traefik_certs/endpoint') }}"
-
-  tasks:
-    - name: "Unmount Volume: mesh__traefik_certs"
-      community.docker.docker_volume:
-        state: "absent"
-        name: "mesh__traefik_certs"
-        driver: "rclone"
-
-    - name: "Pause to Let Volume Unmount"
-      pause:
-        seconds: 5
-
-    - name: "Mount Volume: mesh__traefik_certs"
-      community.docker.docker_volume:
-        state: "present"
-        name: "mesh__traefik_certs"
-        driver: "rclone"
-        driver_options:
-          remote: ":s3:mesh--traefik-certs"
-          uid: "5000"
-          gid: "5000"
-          s3_provider: "Cloudflare"
-          s3_access_key_id: "{{ cloudflare_b0__access_key_id }}"
-          s3_secret_access_key: "{{ cloudflare_b0__secret_access_key }}"
-          s3_region: "auto"
-          s3_endpoint: "{{ cloudflare_b0__endpoint }}"
-          s3_acl: "private"
-          vfs_cache_mode: "full"
-
-####################
-# - Deployment
-####################
-- hosts: leader
-  become: "true"
-  vars:
-    email_letsencrypt: "s174509@dtu.dk"
-    stack_name: "mesh"
-    stack_configs:
-      - "mesh__traefik_static.toml"
-      - "mesh__traefik_tls.toml"
-      - "mesh__traefik_default_middlewares.toml"
-      - "mesh__stack_site-support.toml"
-
-  tasks:
-    ####################
-    # - Network Creation
-    ####################
-    - name: "Create Network: mesh_public"
-      community.docker.docker_network:
-        state: "present"
-        name: "mesh_public"
-        driver: "overlay"
-        scope: "swarm"
-        attachable: true
-        appends: true
-
-
-    ####################
-    # - Configs Creation
-    ####################
-    - name: "Create Docker Configs"
-      community.docker.docker_config:
-        state: "present"
-        name: "{{ item }}"
-        data: "{{ lookup('template', './configs/' ~ item) | b64encode }}"
-        data_is_b64: "true"
-      with_items: "{{ stack_configs }}"
-
-    ####################
-    # - Stack Deployment
-    ####################
-    - name: "Upload Stack to /tmp"
-      template:
-        src: "./docker-compose.yml"
-        dest: "/tmp/{{ stack_name }}.yml"
-        owner: "root"
-        group: "root"
-        mode: "0640"
-
-    - name: "Deploy Stack: {{ stack_name }}"
-      community.docker.docker_stack:
-        state: "present"
-        prune: "true"
-        name: "{{ stack_name }}"
-        compose:
-          - "/tmp/{{ stack_name }}.yml"
-
-    - name: "Delete /tmp Stack"
-      ansible.builtin.file:
-        path: "/tmp/{{ stack_name }}.yml"
-        state: "absent"
+####################
+# - Deploy Stack: mesh
+####################
+- name: "Deploy Stack: mesh"
+  hosts: "swarm_leader"
+  tags:
+    - "stage_stack"
+    - "stage_stack_mesh"
+  vars:
+    stack_name: "mesh"
+    email_letsencrypt: "{{ email_abuse }}"
+
+  roles:
+    - role: "stack/deploy_network_overlay"
+      vars:
+        network_name: "mesh_public"
+
+    - role: "stack/deploy_volume_s3"
+      vars:
+        volume_name: "mesh__traefik_certs"
+        perms_uid: "5000"
+        perms_gid: "5000"
+
+    - role: "stack/deploy_configs"
+    - role: "stack/deploy_stack"
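The inline `docker_network` task deleted above is presumably what the new `stack/deploy_network_overlay` role wraps, parameterised on `network_name`. A minimal sketch of the role's tasks file — the path and file layout are assumptions; the module options are taken verbatim from the removed task:

```yaml
# roles/stack/deploy_network_overlay/tasks/main.yml (hypothetical path)
# Creates (or extends) a swarm-scoped, attachable overlay network.
- name: "Create Network: {{ network_name }}"
  community.docker.docker_network:
    state: "present"
    name: "{{ network_name }}"
    driver: "overlay"
    scope: "swarm"
    attachable: true
    appends: true
```

A playbook then only needs `- role: "stack/deploy_network_overlay"` with `network_name` set, as both stacks below do.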
@@ -25,9 +25,7 @@ The service employs CPU/Memory usage limits in the `deploy` section.
 This helps prevent a DDoS attack from crashing the entire host.
 
 ## Capabilities
-All capabilities are dropped with `--cap_drop ALL`.
-
-No capabilities need to be added back, so none are.
+The container runs with default capabilities.
 
 ## security.txt
 *See https://securitytxt.org/ for RFC + generator.*
@@ -4,7 +4,7 @@ entryPoints = ["websecure", "web"]
 service = "site-support__site-support"
 
 [[http.services.site-support__site-support.loadBalancer.servers]]
-url = "http://site-support:8787"
+url = "http://10.99.88.3:8787"
 
 [http.routers.site-support__site-support.tls]
 certResolver = "letsencrypt"
@@ -4,8 +4,6 @@ services:
   site-support:
     image: git.sofus.io/python-support/site-support:0
     user: "5020:5020"
-    cap_drop:
-      - ALL
 
     volumes:
       - /etc/localtime:/etc/localtime:ro
|
@ -13,34 +11,31 @@ services:
|
||||||
|
|
||||||
configs:
|
configs:
|
||||||
- source: site-support__security.txt
|
- source: site-support__security.txt
|
||||||
target: /public/.well-known/security.txt
|
target: /app/.well-known/security.txt
|
||||||
uid: "5020"
|
|
||||||
gid: "5020"
|
|
||||||
|
|
||||||
environment:
|
environment:
|
||||||
SERVER_PORT: "8787"
|
SERVER_PORT: "8787"
|
||||||
|
SERVER_ROOT: "/app"
|
||||||
SERVER_REDIRECT_TRAILING_SLASH: "true"
|
SERVER_REDIRECT_TRAILING_SLASH: "true"
|
||||||
|
|
||||||
SERVER_LOG_LEVEL: "info"
|
SERVER_LOG_LEVEL: "trace"
|
||||||
SERVER_LOG_REMOTE_ADDRESS: "false"
|
SERVER_LOG_REMOTE_ADDRESS: "false"
|
||||||
|
|
||||||
SERVER_THREADS_MULTIPLIER: "0" ## Use # CPUs
|
SERVER_SECURITY_HEADERS: "false"
|
||||||
|
|
||||||
SERVER_SECURITY_HEADERS: "true"
|
|
||||||
SERVER_DIRECTORY_LISTING: "false"
|
SERVER_DIRECTORY_LISTING: "false"
|
||||||
|
|
||||||
SERVER_CACHE_CONTROL_HEADERS: "false" ## change when stable?
|
SERVER_CACHE_CONTROL_HEADERS: "false" ## change when stable?
|
||||||
SERVER_COMPRESSION: "true" ## reconsider for small ssg payload
|
SERVER_COMPRESSION: "false" ## reconsider for small ssg payload
|
||||||
SERVER_COMPRESSION_STATIC: "false" ## pre-compress? :)
|
SERVER_COMPRESSION_STATIC: "false" ## pre-compress? :)
|
||||||
|
|
||||||
deploy:
|
deploy:
|
||||||
mode: replicated
|
mode: replicated
|
||||||
replicas: 1
|
replicas: 1
|
||||||
|
|
||||||
# resources:
|
resources:
|
||||||
# limits:
|
limits:
|
||||||
# cpus: "4.0"
|
cpus: "1.0"
|
||||||
# memory: "4G"
|
memory: "750M"
|
||||||
|
|
||||||
restart_policy:
|
restart_policy:
|
||||||
condition: on-failure
|
condition: on-failure
|
||||||
|
@@ -49,12 +44,13 @@ services:
         window: 120s
 
     networks:
-      - mesh_public
+      public:
+        ipv4_address: "10.99.88.3"
 
 configs:
   site-support__security.txt:
     external: true
 
 networks:
-  mesh_public:
+  public:
     external: true
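Pinning `ipv4_address: "10.99.88.3"` only works if the external `public` network was created with a subnet that covers the address. A sketch of the network creation this implies — the subnet value is an assumption inferred from the pinned address, not stated in the diff:

```yaml
# Hypothetical creation of the external "public" overlay network with an
# explicit subnet, so services can pin addresses such as 10.99.88.3.
- name: "Create Network: public"
  community.docker.docker_network:
    state: "present"
    name: "public"
    driver: "overlay"
    scope: "swarm"
    attachable: true
    ipam_config:
      - subnet: "10.99.88.0/24"
```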
@@ -1,66 +1,33 @@
-####################
-# - Deployment
-####################
-- hosts: leader
-  become: "true"
-  vars:
-    stack_name: "site-support"
-    stack_configs:
-      - "site-support__security.txt"
-
-  tasks:
-    ####################
-    # - Stop the Stack
-    ####################
-    - name: "Stop Stack: {{ stack_name }}"
-      community.docker.docker_stack:
-        state: "absent"
-        absent_retries: 15
-        name: "{{ stack_name }}"
-
-    ####################
-    # - Network Creation
-    ####################
-    - name: "Create Network: mesh_public"
-      community.docker.docker_network:
-        state: "present"
-        name: "mesh_public"
-        driver: "overlay"
-        scope: "swarm"
-        attachable: true
-        appends: true
-
-    ####################
-    # - Config Creation
-    ####################
-    - name: "Create Docker Configs"
-      community.docker.docker_config:
-        state: "present"
-        name: "{{ item }}"
-        data: "{{ lookup('template', './configs/' ~ item) | b64encode }}"
-        data_is_b64: "true"
-      with_items: "{{ stack_configs }}"
-
-    ####################
-    # - Stack Deployment
-    ####################
-    - name: "Upload Stack to /tmp"
-      template:
-        src: "./docker-compose.yml"
-        dest: "/tmp/{{ stack_name }}.yml"
-        owner: "root"
-        group: "root"
-        mode: "0640"
-
-    - name: "Deploy Stack: {{ stack_name }}"
-      community.docker.docker_stack:
-        state: "present"
-        prune: "true"
-        name: "{{ stack_name }}"
-        compose:
-          - "/tmp/{{ stack_name }}.yml"
-
-    - name: "Delete /tmp Stack"
-      ansible.builtin.file:
-        path: "/tmp/{{ stack_name }}.yml"
-        state: "absent"
+####################
+# - Deploy Stack: site-support
+####################
+- name: "Deploy Stack: site-support"
+  hosts: "swarm_leader"
+  tags:
+    - "stage_stack"
+    - "stage_stack_site-support"
+  vars:
+    stack_name: "site-support"
+
+  roles:
+    - role: "stack/deploy_dns"
+      vars:
+        domain: "pysupport.{{ dns_root }}"
+        domain_to: "{{ dns_root }}"
+
+    - role: "stack/deploy_network_overlay"
+      vars:
+        network_name: "public"
+
+    - role: "stack/deploy_configs"
+      vars:
+        stack_configs_gen:
+          site-support__security.txt: "securitytxt"
+
+        # `securitytxt` Generation Variables
+        securitytxt__mailto: "s174509@dtu.dk"
+        securitytxt__expiry: ""
+        securitytxt__gpg_id: "E3B345EFFF5B3994BC1D12603D01BE95F3EFFEB9"
+        securitytxt__domain: "https://timesigned.com"
+
+    - role: "stack/deploy_stack"
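Both playbooks end with `- role: "stack/deploy_stack"`, which presumably absorbs the upload/deploy/cleanup task sequence deleted above. A sketch of that role's tasks file, keyed on `stack_name` — the path is an assumption; the module calls are taken from the removed tasks:

```yaml
# roles/stack/deploy_stack/tasks/main.yml (hypothetical path)
# Render the stack's compose template to /tmp, deploy it, then clean up.
- name: "Upload Stack to /tmp"
  template:
    src: "./docker-compose.yml"
    dest: "/tmp/{{ stack_name }}.yml"
    owner: "root"
    group: "root"
    mode: "0640"

- name: "Deploy Stack: {{ stack_name }}"
  community.docker.docker_stack:
    state: "present"
    prune: "true"
    name: "{{ stack_name }}"
    compose:
      - "/tmp/{{ stack_name }}.yml"

- name: "Delete /tmp Stack"
  ansible.builtin.file:
    path: "/tmp/{{ stack_name }}.yml"
    state: "absent"
```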