Compare commits

..

230 Commits

Author SHA1 Message Date
0b838511a3 Reload nginx instead of restarting it all the goddamn time 2025-03-12 18:31:01 -05:00
38edcf6906 Bypass proxy if you're logged in 2025-03-12 18:29:37 -05:00
2107f823fe Bypass cache for certain URLs in GCI 2025-03-12 17:44:04 -05:00
103236b331 Test configs before we bounce the container 2025-03-12 17:43:52 -05:00
517f04de68 Add link to Matrix space in footer 2025-03-10 17:45:30 -05:00
05a5551650 Another line break, sir 2025-03-10 16:57:07 -05:00
7c4ba6b23e Downclock gunicorn worker count
Dont need em
2025-03-09 20:48:34 -05:00
d052683651 Add volume mount for lbry blockchain 2025-03-08 23:59:13 -06:00
a054fb29d8 Tiny formatting thing 2025-03-07 18:41:36 -06:00
5fe3396446 Disable server tokens in nginx 2025-03-07 18:25:28 -06:00
01dbb36c37 Mount proxy cache elsewhere for easier inspection 2025-03-07 18:03:42 -06:00
99f8132f5e Bump worker count even HIGHER 2025-03-07 16:36:40 -06:00
7f56771749 Cache the FUCK out of the site 2025-03-07 15:14:53 -06:00
71ceec2ecd Rebase GCI prod to :latest, not :bleeding 2025-03-07 15:13:03 -06:00
82a43f8610 Cache results from the /search endpoint 2025-03-07 04:32:56 -06:00
cfc7b94b7f Don't rate limit my home IP 2025-03-07 02:11:59 -06:00
2acc2bd1b8 Tweak open file cache settings for static assets 2025-03-07 01:54:03 -06:00
5509bf56a6 Bump ratelimiting, expose admin endpoint 2025-03-06 23:07:45 -06:00
dc1ea05fe1 Add nagios user to vmg2 db 2025-03-06 15:57:03 -06:00
2080cd6b5c Implement some caching to make the site even quicker 2025-03-05 23:56:11 -06:00
15667e26d3 We are now officially a dono slut 2025-03-05 22:28:48 -06:00
c953fadd88 Bump Gunicorn workers 2025-03-05 21:19:38 -06:00
ecf00b5f74 Let linting fail, but with a warning 2025-03-05 20:14:22 -06:00
52b32bf2cf Add maintenance page for GCI 2025-03-05 17:42:37 -06:00
f54e099c45 Open About link in new tab 2025-03-05 16:38:28 -06:00
88862fa509 Fix up Admin contact info 2025-03-04 12:42:12 -06:00
668a8441b7 Implement rate limiting 2025-03-03 14:48:19 -06:00
d31cb4e1dd Whoops 2025-03-03 12:36:29 -06:00
223140fd3e Fix vm-g-2 not having tags on ingress role 2025-03-03 12:35:57 -06:00
d4a6f23cac Add XFF to Ingress headers 2025-03-03 12:28:37 -06:00
745adfafae Rotate secret key for GCI 2025-03-03 00:22:37 -06:00
1638436439 Minor tweaks to GCI configs 2025-03-03 00:18:55 -06:00
d76250e92e Add www redirect for GCI 2025-03-02 23:48:25 -06:00
d78d321247 Lower Gunicorn worker count 2025-03-02 23:09:18 -06:00
855e26f4d0 Add GunCAD Index 2025-03-02 22:15:25 -06:00
53294574b4 Add support for arbitrary volumes in the ingress container 2025-03-02 22:15:18 -06:00
9b6a917320 Bump requirements so we can do PG on Ub24 2025-03-02 21:21:13 -06:00
a5891093c9 Revert "Change IP for lidarr"
This reverts commit 69b5c5816a87d609bfee09b08de6a8108502fa41.
2025-02-27 15:36:42 -06:00
69b5c5816a Change IP for lidarr 2025-02-27 14:28:45 -06:00
1f05df9e09 Turn on Authentic Z 2025-02-25 13:37:16 -06:00
1ab5b3fda0 How about we just use bleeding edge element instead 2025-02-22 17:54:01 -06:00
11fa90fdde Oops wrong FQCI 2025-02-22 17:47:56 -06:00
a381abb88b Make that script not shit 2025-02-22 17:45:29 -06:00
799b5bac29 Add wrapper script to cache prod inventory 2025-02-22 17:40:55 -06:00
f762a1fdfc Qualify ALL the containers 2025-02-22 17:36:00 -06:00
be0078cbc6 Shit wait this sucks
This reverts commit 8972cf2cf226555ca792652ae4c70d0328326a3d.
2025-02-21 02:31:45 -06:00
b90a272b6c Add bazarr 2025-02-21 02:31:34 -06:00
8972cf2cf2 Let Grafana's Matrix Bridge react to messages when issues resolve 2025-02-20 21:42:49 -06:00
f42623d1e3 Reroute music and jellyfin to the hetzner VPS, proxying traffic back home 2025-02-20 18:02:27 -06:00
e43820d75f Why the fuck is zoid still up 2025-02-20 01:20:52 -06:00
bfc432d2e5 Remove deprecated secret 2025-02-19 13:17:32 -06:00
be6d51c035 Add SA 2025-02-18 12:08:26 -06:00
b066e2a7fd Have restic retry locks
I dunno why I didn't do this before
2025-02-18 01:47:14 -06:00
9c15b15507 Back off when doing restarts for backup.service 2025-02-14 09:57:38 -06:00
2affc1a8fe Use ZT for Jellyfin access 2025-02-13 20:48:05 -06:00
e07232c448 Filter out SystemInfo for Netgear router 2025-02-13 17:19:02 -06:00
faea62fedb Add Netgear Prometheus exporter to the homeauto serber 2025-02-13 15:21:42 -06:00
823f3297fc Update timeout interval for Prometheus 2025-02-13 15:18:39 -06:00
83c1aa9cc2 Disable Kodi 2025-02-12 21:45:24 -06:00
beadee9668 Fix HW accel for Jellyfin 2025-02-12 21:24:16 -06:00
4bbb4ba16b Add Jellyfin 2025-02-12 21:15:18 -06:00
db3ddabfe2 Add Jellyfin DNS 2025-02-12 20:57:51 -06:00
0637bc434f Workin more toward 5dd 2025-02-12 01:52:57 -06:00
2b686da51a Add a DNS record for a funni 2025-02-12 00:42:48 -06:00
d84da547cb Update Kodi packageset a bit 2025-02-10 23:56:46 -06:00
c963a5649f Remove python2 references 2025-02-10 23:56:35 -06:00
1cb8da6515 Remove awscli from package list 2025-02-10 23:55:37 -06:00
6ef5ff5cd2 Add alsa-utils for audio test shenanigans 2025-02-10 20:18:50 -06:00
9ada152e04 Set shell for kodi user to bash 2025-02-10 14:59:57 -06:00
20be80b2ce First working implementation! 2025-02-10 14:59:15 -06:00
c628a280ac Fix typo 2025-02-10 14:07:19 -06:00
2a154699d7 First try 2025-02-10 14:06:56 -06:00
bb4d5548ee Add Flatpak role, work on Kodi role 2025-02-10 13:33:50 -06:00
9123c62cff Fix incorrect transmission proxy port
Oops that's been broken this whole time
2025-02-10 13:00:44 -06:00
4ba22dcef7 Add nagios-checkhttp support for Prometheus Blackbox 2025-02-10 12:55:26 -06:00
2ccdcca4f1 Fix setup blurb 2025-02-09 00:11:37 -06:00
8885daa1b2 Polish README for the first time in forever 2025-02-09 00:10:42 -06:00
d2bc8915ca Remove some dangling host vars 2025-02-08 23:57:07 -06:00
920d972346 Dynamically determine repo name for zerotier instead of hardcoding it like a dork 2025-02-08 23:55:44 -06:00
b501cf1cdf Remove some deprecated secrets 2025-02-08 23:50:33 -06:00
13785d3f43 Enable lastfm integration for Navidrome 2025-02-07 19:17:25 -06:00
e38aa3edf9 Clean up week-old files from soulseek if they're still around 2025-02-07 16:27:45 -06:00
77722be801 Fix another erroneous DNS 2025-02-07 16:01:23 -06:00
6de9d965ce Remove commented-out ingress definition for srv-fw-13-1 2025-02-07 15:01:58 -06:00
7d8d7a781b Move srv-fw-13-1 over to Traefik 2025-02-07 15:00:51 -06:00
2260176040 Fix erroneous DNS 2025-02-07 14:56:11 -06:00
be7c9313c7 Add dynamic TLS support to Traefik role 2025-02-07 14:44:00 -06:00
7f906a5983 Testing a traefik role 2025-02-07 14:26:31 -06:00
128d1092dc Add DNS for HA for when I'm on the VPN 2025-02-07 14:26:22 -06:00
5853fd21c3 Allow home automation to back up every day 2025-02-06 10:12:32 -06:00
2981e0bc03 Fix seed 2025-02-05 15:42:53 -06:00
7926287536 Add bin dir for Lidarr
We're gonna use it to drop ffmpeg in
2025-02-05 01:28:42 -06:00
e0eb632d63 Reconfigure user for slskd 2025-02-05 00:39:08 -06:00
7c7bada344 Fix path binding in slskd 2025-02-05 00:36:34 -06:00
58f4464001 Workin on slskd 2025-02-05 00:01:09 -06:00
7d637ad2d5 Fix websockets on literally everything 2025-02-04 23:45:21 -06:00
7537a2bca9 Move media acquisition back home where it belongs 2025-02-04 22:34:58 -06:00
335660d518 Remove extraneous satisfactory container 2025-02-04 22:19:51 -06:00
9065584cee Fully replace that pi with the fw box 2025-02-04 20:50:47 -06:00
61bf29481d Switch over to x64 HA image 2025-02-04 20:00:39 -06:00
6edb911936 Prepare to move homeassistant tasks to a FW laptop thing 2025-02-04 19:55:22 -06:00
a78ee05bfd Remove sftp subsystem from SSH declarations
It was causing misconfigs on several systems
2025-02-04 19:55:03 -06:00
f38df0c407 Disable touhou for wea's server 2025-02-01 22:08:44 -06:00
67e136494f Really decom that old one 2025-02-01 21:59:17 -06:00
c8c5460979 Add tml server for wea 2025-02-01 21:57:58 -06:00
30d2ecef07 Recategorize autotrash 2025-02-01 21:57:52 -06:00
be2bf484b4 Implement decomming tml servers and decom the one we have 2025-02-01 21:53:40 -06:00
18048a085b Move tml server provisioning to its own role 2025-02-01 21:49:36 -06:00
18d9bec579 Change time that prunes run 2025-01-31 17:39:09 -06:00
9d9096a998 Add restic-prune provisioning based on Netbox tag 2025-01-31 17:36:55 -06:00
ed62d9f722 Encapsulate backup timers into their own block so we can template out the scripts alone 2025-01-31 17:24:02 -06:00
389380dd0c Work on implementing auto prune 2025-01-31 17:20:59 -06:00
f7fbf43569 Remove unused vars from backup 2025-01-31 17:15:28 -06:00
07e96002ac Simplify backup template task 2025-01-31 17:12:12 -06:00
4fa09d1ed1 Change header on backup script 2025-01-31 17:07:08 -06:00
c9d779b871 Remove duplicate AWS key secret definitions 2025-01-31 17:06:15 -06:00
57e0d5b369 Remove ability to configure which backup script to use 2025-01-31 17:04:42 -06:00
418b570ea5 Clean up backup role 2025-01-31 17:03:46 -06:00
63cb53fd16 Add restic aliases to machines 2025-01-31 14:22:44 -06:00
88214fff2c Exclude vaults when doing backups
This is because of a weird perm denial thing I'm getting on my desktop during backups. It absolutely should NOT cause issues like this.
2025-01-28 18:18:30 -06:00
2edbd1c9e8 Fix erroneous backslash in backup script 2025-01-25 22:27:31 -06:00
b793ebf587 Make backups a ton, ton more betterer 2025-01-25 03:20:07 -06:00
601d9543ec Make restic cache dir and ensure it's there whenever we invoke it 2025-01-25 01:36:13 -06:00
c181965242 Add tag to package thingamabob so we can skip it if we have to 2025-01-25 01:30:54 -06:00
e6e8427227 Only use restic for backups from now on 2025-01-25 01:28:34 -06:00
7f75bdb5cd Oops 2025-01-23 12:31:07 -06:00
98a77d5f28 Add a bunch of excludes for desktops 2025-01-23 12:30:39 -06:00
834f40d3ad Implement ignore rules in restic 2025-01-23 12:21:42 -06:00
bfce95d50d Instead of processing ostree config-diffs, just tar up /etc. More consistent that way. 2025-01-23 03:21:01 -06:00
d3ee28fe56 Remove stale locks before backing up 2025-01-23 03:15:03 -06:00
8b0b900375 Alright clearly hard memory limits are working against us here 2025-01-23 03:08:33 -06:00
42f84c2d54 Verbose prunes too 2025-01-21 15:58:39 -06:00
51cf91e0c4 Verbose backups for restic 2025-01-21 15:58:23 -06:00
2bfc6f489d Up memory for backup.service 2025-01-21 15:57:53 -06:00
40e165c5a6 Canonize some changes I've made to my desktop machines into the desktop role 2025-01-21 02:10:47 -06:00
8b743f3b9e Carve out a place for ostree packages on immutable distros 2025-01-21 01:24:07 -06:00
a2971b3df4 Fix not gathering facts for homeauto stuff 2025-01-20 20:27:05 -06:00
c2068dc103 Gather facts for bastion backups 2025-01-20 20:26:16 -06:00
5d8238e029 Prune backups 2025-01-20 20:23:34 -06:00
8e6cbb69ff Update restic wrapper to have keys 2025-01-20 20:16:41 -06:00
ddac9fe542 Fix syntax (oops) 2025-01-20 20:03:17 -06:00
e651396604 Put the backups on the FS so we can just back them up later on the same way 2025-01-20 20:01:36 -06:00
09bdb80712 Dump PG DBs before doing full system backups 2025-01-20 19:58:20 -06:00
e9eccef348 Include batteries with backup command 2025-01-19 13:15:03 -06:00
05b4bcc4f1 Fix up workstation device roles 2025-01-19 13:10:53 -06:00
eea79389c9 Remove Kopia now that it's obsolete 2025-01-19 13:06:04 -06:00
52b9ceb3a3 Add backups back to desktops now that they work 2025-01-19 13:05:26 -06:00
8cffa77d38 Fix not flattening args when doing null comparisons 2025-01-19 12:54:48 -06:00
502d7397cd Add support for overlaying restic 2025-01-19 12:54:00 -06:00
0ffd8ef535 Add logging 2025-01-19 12:51:15 -06:00
c9984c448c Minor reworks to how script is structured for better efficiency 2025-01-19 12:47:27 -06:00
f3520c10ae Update backup script to use restic 2025-01-19 12:43:10 -06:00
f8be177789 Really fully disable site_common.yml 2025-01-17 10:02:46 -06:00
9b261e5085 Update os-version oneoff 2025-01-17 02:42:01 -06:00
2fd9668b51 Wrench down retries to try to get sanity back 2025-01-17 02:39:41 -06:00
896143d009 Remove defunct tmodloader role that I never use anymore 2025-01-17 02:31:11 -06:00
ced9d6b983 Rip more nagios out 2025-01-17 02:30:14 -06:00
6afad6fcd9 More oops 2025-01-17 02:27:25 -06:00
07845384ac Oops 2025-01-17 02:25:13 -06:00
6b6e8f7b64 Add submodule shenanigans 2025-01-17 02:22:24 -06:00
a25c45536e Remove really bad Ansible role 2025-01-17 02:21:13 -06:00
b5de12a767 Improve zerotier checks 2025-01-17 02:10:31 -06:00
4ac296ed41 Update list of flatpaks 2025-01-17 02:05:56 -06:00
31818924b3 Tag adminuser 2025-01-11 16:04:34 -06:00
e4060ca9a0 Add another adminuser keyY 2025-01-11 16:03:37 -06:00
43ccced1c5 Fix up tmodloader cron 2025-01-06 02:57:28 -06:00
725687e05e Add env file that sources venv 2025-01-06 02:49:50 -06:00
8a64774a77 Change MOTD 2025-01-03 23:57:15 -06:00
4cd34284c6 That seed was trash 2025-01-01 17:25:23 -06:00
7b004ca82c Document some settings, set the world seed to be hardcoded 2025-01-01 17:17:36 -06:00
d50f6a1135 Add Boss Checklist 2025-01-01 17:13:39 -06:00
51bf0e5c62 Add more volume mounts to tmodloader 2025-01-01 17:11:55 -06:00
14799abdcf Switch over to Ore Excavation 2025-01-01 17:02:33 -06:00
ba1530a7c1 Organization 2025-01-01 17:02:06 -06:00
11d5b23b50 Gear up tmodloader 2025-01-01 16:56:53 -06:00
9c843f0375 Update CI definition 2025-01-01 16:03:51 -06:00
506c58a18e Disable Minceraft 2025-01-01 16:00:57 -06:00
9962c09fb5 Add tModLoader template 2025-01-01 16:00:49 -06:00
c2623c70ee Revert "Temporarily disabel Transmission"
This reverts commit 469a9e706907b28a1fc7ca89924f8f2218fc9e06.
2024-12-23 23:22:10 -06:00
fad97a4ba0 Add /dev/net/tun to transmission 2024-12-23 23:22:02 -06:00
469a9e7069 Temporarily disabel Transmission 2024-12-21 15:20:16 -06:00
7cf0b5da3d Configure geerling role for pg14 2024-12-11 15:21:39 -06:00
f319ee6ad2 Add PG repo 2024-12-11 14:49:11 -06:00
190a88e57c Fix accidentally overwriting crons 2024-12-07 12:22:46 -06:00
f323cd8987 Add cronjobs for things I keep having to do by hand 2024-12-07 00:34:56 -06:00
6f57f2ed32 Update to Nextcloud 30 2024-12-07 00:30:56 -06:00
72bc460c4f Update Nextcloud to 29 2024-12-07 00:23:56 -06:00
1440db6afc Update Nextcloud to version 28 2024-12-07 00:09:53 -06:00
fba7d30a40 Update grafana matrix forwarder link 2024-11-21 23:54:42 -06:00
b58a23e87a Use more up-to-date synapse upstream 2024-11-11 22:01:21 -06:00
505c20c2b0 Allow flight 2024-10-29 17:25:04 -05:00
a18ec49e20 WE SWAPPA DA PACK AGAIN 2024-10-29 16:55:36 -05:00
0940535d2a We're switching the mods up again 2024-10-28 22:46:24 -05:00
424d5cd75c New Minecraft pack! 2024-10-28 19:43:35 -05:00
537f2c9824 Disable Satisfucktory 2024-10-28 19:42:19 -05:00
a40a30eec4 Reenable satisfactory updates 2024-10-15 15:44:10 -05:00
7d2afdfaef Add docker network to satisfucktory 2024-10-15 11:30:06 -05:00
ef036fca76 Remove nagios shit from autoreboot 2024-10-15 11:29:40 -05:00
b53ce3efaa Add Satisfactory server sftp 2024-10-11 13:04:00 -05:00
63fc4417db Update keepalive on nextcloud 2024-10-01 17:57:51 -05:00
4c4108ab0a Add Satisfactory back into the mix! 1.0 lesgooooo! 2024-09-11 18:42:40 -05:00
658888bda8 Add prom metrics for plain http 2024-09-04 01:56:37 -05:00
5651f6f50a Decom music/lidarr 2024-09-03 22:41:40 -05:00
07ab0b472e Decom Navidrome, too 2024-09-03 22:41:21 -05:00
9a39b79895 Decom Lidarr, too 2024-09-03 22:36:35 -05:00
ee40990c51 Press F for minceraft 2024-09-03 22:32:39 -05:00
fc23453e5a Remove backups for desktop 2024-08-21 22:43:36 -05:00
1e037bf3bc Nevermind flatpak is just stupid 2024-08-21 22:41:48 -05:00
c8aca49ff6 Nevermind I'm just stupid 2024-08-21 22:39:56 -05:00
61c37b4650 Disable unmojang because apparently its keys are fukt 2024-08-21 22:38:45 -05:00
ec77cdbc46 Polish up flatpaks 2024-08-21 22:35:20 -05:00
7bc017e583 And screen, too 2024-08-21 22:25:48 -05:00
ba37a7b4fa Remove awscli from rpm-ostree hosts 2024-08-21 22:23:21 -05:00
bc8dd6d2bd Remove cadvisor from coreos boxen as it doesn't play nice with toolbx 2024-08-19 21:14:04 -05:00
391e424199 Add some (admittedly crusty) support for podman for Prometheus monitoring 2024-08-18 01:07:24 -05:00
f23d6ed738 Remove monitoring script requirements from nagios boxen 2024-08-18 00:48:57 -05:00
a0d1ae0a4a Remove nagios bullshit 2024-08-18 00:48:17 -05:00
760af8dabe Fix up music stuffs 2024-08-11 11:08:23 -05:00
7a72280c6e Disable nagios CI job too 2024-08-10 23:00:25 -05:00
74a6a1ce96 Disable fucking nagios 2024-08-10 22:59:55 -05:00
227f0a5df5 Add navidrome too 2024-08-10 22:42:06 -05:00
db36aa7eae Add Lidarr back into the mix 2024-08-10 22:30:17 -05:00
85c039e4dc Switch over to Ely.by
FUCK YOU MOJANG
2024-07-18 22:00:25 -05:00
702a4c5f4c Add restart-policy to containers that need it
oopsie
2024-07-17 00:21:36 -05:00
68e8f35064 Add old magic pack back in 2024-07-10 20:30:46 -05:00
b250ce9dc8 Enable automatic retries for backups within a short duration
This should help alleviate some of the problems I've been having with Backblaze's accessibility during peak backup hours
2024-07-10 13:14:07 -05:00
142e589f84 Remove direwolf20 pack
o7
2024-07-10 13:09:42 -05:00
9dda82edb3 Add commented-out code for minecraft-createfarming 2024-07-10 13:08:03 -05:00
a6b8c7ef64 Remove minecraft vanilla 2024-07-10 13:07:26 -05:00
b19602f205 Add a bunch of cool envvars to the MC server 2024-07-10 13:06:31 -05:00
113 changed files with 1715 additions and 1463 deletions

1
.env Normal file
View File

@ -0,0 +1 @@
[ -f venv/bin/activate ] && . venv/bin/activate

View File

@@ -43,6 +43,7 @@ after_script:
 Lint:
   stage: lint
   interruptible: yes
+  allow_failure: yes
   except:
     - pipelines
     - schedules
@@ -64,31 +65,34 @@ Test:
 # PRE-MAIN CONFIGURATION
 Local:
   stage: play-pre
+  only:
+    - pipelines
+    - schedules
   script:
     - ansible-playbook --skip-tags no-auto playbooks/site_local.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
 Pre:
   stage: play-pre
+  only:
+    - pipelines
+    - schedules
   script:
     - ansible-playbook --skip-tags no-auto playbooks/site_pre.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
 # MAIN CONFIGURATION
 Main:
   stage: play-main
+  only:
+    - pipelines
+    - schedules
   retry: 1
   script:
     - ansible-playbook --skip-tags no-auto playbooks/site_main.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
-Common:
-  stage: play-main
-  script:
-    - ansible-playbook --skip-tags no-auto playbooks/site_common.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
-Nagios:
-  stage: play-main
-  retry: 1
-  script:
-    - ansible-playbook -l vm-general-1.ashburn.mgmt.desu.ltd playbooks/prod_web.yml --tags nagios --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
 # CLEANUP
 Cleanup:
   stage: play-post
+  only:
+    - pipelines
+    - schedules
   script:
     - ansible-playbook --skip-tags no-auto playbooks/site_post.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass

View File

@@ -1,17 +1,60 @@
-# Salt's Ansible Repository
-Useful for management across all of 9iron, thefuck, and desu.
+# Desu LTD Ansible
+Ansible scripts that manage infra for all of Desu LTD
 ## Initialization
-* Clone
-* `ansible-galaxy install -r requirements.yml`
-For quick bootstrapping of tools and libraries used in this repo, see [rehashedsalt/ansible-env](https://gitlab.com/rehashedsalt/docker-ansible-env). I use that exact image for CI/CD.
-## Deployment
-### Linux Machines
+Clone the repo, then:
+```bash
+# Set up execution environment
+python3 -m venv venv
+. venv/bin/activate
+pip3 install -r requirements.txt
+# Set up Ansible Galaxy roles
+ansible-galaxy install -r requirements.yml
+# Set up password
+# This one's optional if you want to --ask-vault-pass instead
+touch ~/.vault_pass
+chmod 0600 ~/.vault_pass
+vim ~/.vault_pass
+```
+Regular runs of this repo are invoked in [rehashedsalt/ansible-env](https://gitlab.com/rehashedsalt/docker-ansible-env). See Obsidian notes for details.
+## Usage
+To run the whole playbook:
+```bash
+./site.yml
+```
+To deploy a core service to a single machine while you're working on it:
+```bash
+./playbooks/site_main.yml -l my.host --tags someservice
+```
+All `yml` files that can be invoked at the command line are marked executable and have a shebang at the top. If they do not have these features, you're looking at an include or something.
+## Structure
+The structure of the playbooks in this repo is as follows:
+* `site.yml` - Master playbook, calls in:
+  * `playbooks/site_local.yml` - Tasks that run solely on the Ansible controller. Mostly used for DNS
+  * `playbooks/site_pre.yml` - Basic machine bootstrapping and configuration that must be done before services are deployed. Does things like connect a machine to the management Zerotier network, ensure basic packages, ensure monitoring can hook in, etc.
+  * `playbooks/site_main.yml` - Main service deployment is done here. If you're iterating on a service, invoke this one
+  * `playbooks/site_post.yml` - Cleanup tasks. Mostly relevant for the regular autoruns. Cleans up old Docker images and reboots boxes
+Most services are containerized -- their definitions are in `playbooks/tasks` and are included where relevant.
+## Bootstrapping
 Each Linux machine will require the following to be fulfilled for Ansible to access it:
@@ -25,24 +68,14 @@ Each Linux machine will require the following to be fulfilled for Ansible to access it:
 To automate these host-local steps, use the script file `contrib/bootstrap.sh`.
-### Windows Machines
-lol don't
-### All Machines
-Adding a new server will require these:
-* The server is accessible from the Ansible host;
-* The server has been added to NetBox OR in `inventory-hard`
-* DNS records for the machine are set; and
-From there, running the playbook `site.yml` should get the machine up to snuff.
-## Zerotier
-A lot of my home-network side of things is connected together via ZeroTier; initial deployment/repairs may require specifying an `ansible_host` for the inventory item in question to connect to it locally. Subsequent plays will require connectivity to my home ZeroTier network.
-Cloud-managed devices require no such workarounds.
+## Netbox
+These playbooks depend heavily on Netbox for:
+* Inventory, including primary IP, hostname, etc.
+* Data on what services to deploy
+* Data on what services to monitor
+Thus, if Netbox is inaccessible, a large portion of these scripts will malfunction. If you anticipate Netbox will be unavailable for whatever reason, run `ansible-inventory` by hand and save the output to a file. Macros for things like monitoring will not work, but you'll at least have an inventory and tags.
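
The Netbox note in the new README pairs with the `contrib/cache-prod-inventory.sh` script added later in this diff. A minimal sketch of the manual fallback it describes, assuming the `inventories/production` and `inventories/production-cache` paths that script uses:

```bash
# Dump the Netbox-backed inventory to a static file while Netbox is still reachable...
ansible-inventory -i inventories/production --list -y > inventories/production-cache/hosts.yml
# ...then point later runs at the cached copy if Netbox goes away.
ansible-playbook playbooks/site_main.yml -i inventories/production-cache -l my.host --tags someservice
```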

View File

@@ -1,14 +1,12 @@
 [defaults]
-# I have a large number of machines, which warrants a large forks setting
-# here.
-forks = 16
+# Tune this higher if you have a large number of machines
+forks = 8
 # We set gathering to smart here as I'm often executing the site-wide playbook,
 # which means a ton of redundant time gathering facts that haven't changed
 # otherwise.
 gathering = smart
 # host_key_checking is disabled because nearly 90% of my Ansible plays are in
 # ephemeral environments and I'm constantly spinning machines up and down.
-# In theory this is an attack vector that I need to work on a solution for.
 host_key_checking = false
 # Explicitly set the python3 interpreter for legacy hosts.
 interpreter_python = python3
@@ -28,7 +26,7 @@ roles_path = .roles:roles
 system_warnings = true
 # We set this to avoid circumstances in which we time out waiting for a privesc
 # prompt. Zerotier, as a management network, can be a bit slow at times.
-timeout = 60
+#timeout = 30
 # Bad
 vault_password_file = ~/.vault_pass
@@ -41,9 +39,8 @@ always = true
 become = true
 [ssh_connection]
-# The number of retries here is insane because of the volatility of my home
-# network, where a number of my machines live.
-retries = 15
+# We set retries to be a fairly higher number, all things considered.
+#retries = 3
 # These extra args are used for bastioning, where the ephemeral Ansible
 # controller remotes into a bastion machine to access the rest of the
 # environment.
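
The `[ssh_connection]` comments above refer to the same bastion hop that the CI jobs pass on the command line via `--ssh-common-args`. For an ad-hoc run outside CI, roughly the same thing can be done with an environment variable; this is a sketch, not necessarily how this repo's ansible.cfg actually sets it:

```bash
# Mirrors the --ssh-common-args value used by the .gitlab-ci.yml jobs above.
export ANSIBLE_SSH_COMMON_ARGS='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"'
ansible-playbook --skip-tags no-auto playbooks/site_main.yml --vault-password-file ~/.vault_pass
```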

33
contrib/cache-prod-inventory.sh Executable file
View File

@ -0,0 +1,33 @@
#! /bin/sh
#
# cache-prod-inventory.sh
# Copyright (C) 2025 Jacob Babor <jacob@babor.tech>
#
# Distributed under terms of the MIT license.
#
set -e
proddir="inventories/production"
invdir="inventories/production-cache"
# Sanity check
[ -d "$invdir" ] || {
echo "Could not find $invdir; are you in the root of the repo?"
exit 1
}
# Get the new data
[ -e "$invdir"/hosts.yml.new ] && rm "$invdir"/hosts.yml.new
ansible-inventory -i "$proddir" --list -y > "$invdir"/hosts.yml.new || {
# And handle errors
echo "Failed to get inventory; see above and $invdir/hosts.yml.new for errors"
exit 2
}
# Shuffle shit around
[ -e "$invdir"/hosts.yml.old ] && rm "$invdir"/hosts.yml.old
[ -e "$invdir"/hosts.yml ] && mv "$invdir"/hosts.yml{,.old}
[ -e "$invdir"/hosts.yml.new ] && mv "$invdir"/hosts.yml{.new,}
echo "Inventory cached. Use -i \"$invdir\""

View File

@ -0,0 +1,4 @@
#! /bin/sh
git submodule update --recursive --remote --init
git submodule -q foreach 'git checkout -q master && git pull'
git status

View File

@ -1 +0,0 @@
../production/host_vars

View File

@ -0,0 +1 @@
hosts.yml*

View File

@ -0,0 +1 @@
../production/group_vars

View File

@ -1 +0,0 @@
../production/host_vars

View File

@ -17,6 +17,102 @@ netbox_token: !vault |
37323530333463383062396363616263386430356438306133393130626365333932323734383165 37323530333463383062396363616263386430356438306133393130626365333932323734383165
3064663435626339393836353837643730333266366436373033 3064663435626339393836353837643730333266366436373033
# Terraria modlists
tml_basic_qol:
# Better Zoom: Enables zooming out further than 100% for higher-res monitors
- "2562953970"
# Smarter Cursor: Cursor be smarter idort
- "2877850919"
# Heart Crystal & Life Fruit Glow
- "2853619836"
# Ore Excavation (Veinminer)
- "2565639705"
# Shared World Map
- "2815010161"
# Boss Cursor
- "2816694149"
# WMITF (What Mod Is This From (WAILA (WAWLA (WTFAILA))))
- "2563851005"
# Multiplayer Boss Fight Stats
- "2822937879"
# Census (Shows you all the NPCs and their move-in requirements)
- "2687866031"
# Shop Expander (Prevents overloading shops)
- "2828370879"
# Boss Checklist
- "2669644269"
# Auto Trash
- "2565540604"
tml_advanced_qol:
# Quality of Terraria (IT HAS INSTA HOIKS LET'S FUCKING GO)
# Also adds the "Can be shimmered into" and similar text
- "2797518634"
# Chat Source
- "2566083800"
# The Shop Market (it's like the Market from that one Minecraft mod)
- "2572367426"
# Fishing with Explosives
- "3238219681"
# Generated Housing (Adds pregenned home)
- "3141716573"
# Happiness Removal
- "2563345152"
tml_libs:
# Luminance, library mod
- "3222493606"
# Subworld Lib: Required by a few mods (TSA and others)
- "2785100219"
tml_basics:
# Magic Storage Starter Kit
- "2906446375"
# Magic Storage, absoluteAquarian utilities
- "2563309347"
- "2908170107"
# Wing Slot Extra
- "2597324266"
# Better Caves
- "3158254975"
tml_calamity:
# Calamity, Calamity Music, CalValEX
- "2824688072"
- "2824688266"
- "2824688804"
tml_calamity_classes:
# Calamity Ranger Expansion
- "2860270524"
# Calamity Whips
- "2839001756"
tml_calamity_clamity:
# Clamity (sic), Music
- "3028584450"
- "3161277410"
tml_fargos:
# Luminance, library mod
- "3222493606"
# Fargos Mutant Mod. Adds the NPC and infinite items and instas and stuff
- "2570931073"
# Fargos Souls, adds... souls
- "2815540735"
# Fargos Souls DLC (Calamity compat)
- "3044249615"
# Fargos Souls More Cross-Mod (Consolaria, Spirit, Mod of Redemption compat)
- "3326463997"
tml_touhou:
# Gensokyo (UN Owen Was Her plays in the distance)
- "2817254924"
tml_spirit:
# Spirit Mod
- "2982372319"
tml_secrets:
# Secrets of the Shadows
- "2843112914"
tml_yoyo_revamp:
# Moomoo's Yoyo Revamp (and Lib)
- "2977808495"
- "3069154070"
tml_summoners_association:
- "2561619075"
# Admin user configuration # Admin user configuration
adminuser_name: salt adminuser_name: salt
adminuser_ssh_authorized_keys: adminuser_ssh_authorized_keys:
@ -26,8 +122,16 @@ adminuser_ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFS78eNBEZ1fWnGt0qyagCRG7P+8i3kYBqTYgou3O4U8 putty-generated on dsk-ryzen-0.desu.ltd - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFS78eNBEZ1fWnGt0qyagCRG7P+8i3kYBqTYgou3O4U8 putty-generated on dsk-ryzen-0.desu.ltd
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINq8NPEqSM0w7CkhdhsSgDsrcpgAvVg18oz9OybkqhHg salt@dsk-ryzen-0 - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINq8NPEqSM0w7CkhdhsSgDsrcpgAvVg18oz9OybkqhHg salt@dsk-ryzen-0
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGwFJmaV4JuxOOgF6Bqwo6FaCN5Mpcvd4/Vee7PsMBxu salt@lap-fw-diy-1.ws.mgmt.desu.ltd - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGwFJmaV4JuxOOgF6Bqwo6FaCN5Mpcvd4/Vee7PsMBxu salt@lap-fw-diy-1.ws.mgmt.desu.ltd
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwcV0mKhhQveIOjFKwt01S8WVtOn3Pfz6qa2P4/JR7S salt@lap-s76-lemp13-0.ws.mgmt.desu.ltd
# For backups # For backups
backup_restic_password: !vault |
$ANSIBLE_VAULT;1.1;AES256
65623036653432326435353932623037626532316631613763623237323533363938363462316237
6363613363346239666630323134643866653436633537300a663732363565383061326135656539
33313334656330366632613334366664613366313631363964373038396636623735633830386336
3230316663373966390a663732373134323561313633363435376263643834383739643739303761
62376231353936333666613661323864343439383736386636356561636463626266
backup_s3_bucket: !vault | backup_s3_bucket: !vault |
$ANSIBLE_VAULT;1.1;AES256 $ANSIBLE_VAULT;1.1;AES256
66316231643933316261303631656432376339663264666661663634616465326537303331626634 66316231643933316261303631656432376339663264666661663634616465326537303331626634
@ -50,29 +154,7 @@ backup_s3_aws_secret_access_key: !vault |
3635616437373236650a353661343131303332376161316664333833393833373830623130666633 3635616437373236650a353661343131303332376161316664333833393833373830623130666633
66356130646434653039363863346630363931383832353637636131626530616434 66356130646434653039363863346630363931383832353637636131626530616434
backup_s3_aws_endpoint_url: "https://s3.us-east-005.backblazeb2.com" backup_s3_aws_endpoint_url: "https://s3.us-east-005.backblazeb2.com"
backup_kopia_bucket_name: desultd-kopia
backup_kopia_access_key_id: !vault |
$ANSIBLE_VAULT;1.1;AES256
34633366656134376166636164643233353461396263313237653032353764613737393865373763
6665633239396333633132323936343030346362333734640a356631373230383663383530333434
32386639393135373236373263363365366163346234643135363766666666373938373135653663
3836623735393563610a613332623965633032356266643638386230323965366233353930313239
38666562326232353165323934303966643630383235393830613939616330333839
backup_kopia_secret_access_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
31373662326464396136346663626635363332303862613466316236333431636136373038666531
6630616565613431323464373862373963356335643435360a353665356163313635393137363330
66383531326535653066386432646464346161336363373334313064303261616238613564396439
6439333432653862370a303461346438623263636364633437356432613831366462666666303633
63643862643033376363353836616137366432336339383931363837353161373036
backup_kopia_password: !vault |
$ANSIBLE_VAULT;1.1;AES256
34306564393161336162633833356464373065643633343935373566316465373939663838343537
3831343963666432323538636665663733353435636337340a633738306463646133643730333032
33303962306136636163623930306238666633333738373435636366666339623562323531323732
3330633238386336330a346431383233383533303131323736306636353033356538303264383963
37306461613834643063383965356664326265383431336332303333636365316163363437343634
6439613537396535656361616365386261336139366133393637
# For zerotier # For zerotier
zerotier_personal_network_id: !vault | zerotier_personal_network_id: !vault |
@ -90,6 +172,34 @@ zerotier_management_network_id: !vault |
3430303130303766610a633131656431396332626336653562616363666433366664373635613934 3430303130303766610a633131656431396332626336653562616363666433366664373635613934
30316335396166633361666466346232323630396534386332613937366232613965 30316335396166633361666466346232323630396534386332613937366232613965
# For GCI
secret_gci_db_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
62616132613539386133343261393839636630613735323432346530353465383833323665356433
3139396531383838616534643235313434646638356331630a336339323336343631396364316434
32303163613863356465353761666666333037396633613461363939333730306362363965373636
3265343639643432620a303637323461643866313062303838383038363334636666316138326638
63646662353561353234326536343562666336636135303930663564353939376665
secret_gci_secret_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
33333164393639613865613664316639396338393335643533353237343430613030313234383364
3239303838373162303031303061663236353736393635390a313534356530333230613037313765
39313330303039656630316437363535393765326234356463383063316235396463323066393465
3235636465363833390a636662336361663731343030343163633933363133373533333338386531
38383331353465363432383564303666373033376434336635303633373836366134626565336232
39663834656165636365343961663831373834333566623934336132633966353636656263643234
626264646365633638343230343266393338
# For 5dd
five_dd_db_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
31343335306261333630316366366536356165346437393631643630636436626265616239666562
3233353738643136356564396339666137353163393465330a306431376364353734346465643261
64633065383939383562346332323636306565336139343734323861316335333932383863363233
6130353534363563340a636164666631393132346535393936363963326430643638323330663437
31396433303762633139376237373236383732623734626538653933366464623135
# For ara # For ara
secret_ara_db_pass: !vault | secret_ara_db_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256 $ANSIBLE_VAULT;1.1;AES256
@ -182,65 +292,6 @@ secret_grafana_matrix_token: !vault |
30326666616362366133396562323433323435613232666337336430623230383765346333343232 30326666616362366133396562323433323435613232666337336430623230383765346333343232
3765346238303835633337636233376263366130303436336439 3765346238303835633337636233376263366130303436336439
# For Nagios
secret_nagios_admin_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
64333231393831303031616363363030613464653161313531316465346263313063626638363437
3965303861646232393663633066363039636637343161340a643162633133336335313632383861
34616338636630633539353335336631313361656633333539323130626132356263653436343363
3930323538613137370a373861376566376631356564623665313662636562626234643862343863
61326232633266633262613931303631396163326266386363366639366639613938
secret_nagios_matrix_token: !vault |
$ANSIBLE_VAULT;1.1;AES256
66366665666437643765366533646666386162393038653262333461376566333366363332643135
6233376362633566303939623832636366333330393238370a323766366164393733383736633435
37633137626634643530653665613166633439376333633663633561313864396465623036653063
6433376138386531380a383762393137613738643538343438633730313135613730613139393536
35666133666262383862663637623738643836383633653864626231623034613662646563623936
3763356331333561383833386162616664376335333139376363
nagios_contacts:
- name: matrix
host_notification_commands: notify-host-by-matrix
service_notification_commands: notify-service-by-matrix
host_notification_period: ansible-not-late-at-night
service_notification_period: ansible-not-late-at-night
extra:
- key: contactgroups
value: ansible
- name: salt
host_notification_commands: notify-host-by-email
service_notification_commands: notify-service-by-email
extra:
- key: email
value: alerts@babor.tech
nagios_commands:
# This command is included in the container image
- name: check_nrpe
command: "$USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$"
- name: check_by_ssh
command: "$USER1$/check_by_ssh -H $HOSTADDRESS$ -F /opt/nagios/etc/ssh_config -t 30 -q -i /opt/nagios/etc/id_ed25519 -l nagios-checker -C \"$ARG1$\""
- name: notify-host-by-matrix
command: "/usr/bin/printf \"%b\" \"$NOTIFICATIONTYPE$\\n$HOSTNAME$ is $HOSTSTATE$\\nAddress: $HOSTADDRESS$\\nInfo: $HOSTOUTPUT$\\nDate/Time: $LONGDATETIME$\" | /opt/Custom-Nagios-Plugins/notify-by-matrix"
- name: notify-service-by-matrix
command: "/usr/bin/printf \"%b\" \"$NOTIFICATIONTYPE$\\nService $HOSTALIAS$ - $SERVICEDESC$ is $SERVICESTATE$\\nInfo: $SERVICEOUTPUT$\\nDate/Time: $LONGDATETIME$\" | /opt/Custom-Nagios-Plugins/notify-by-matrix"
nagios_services:
# check_by_ssh checks
- name: Last Ansible Play
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_file_age /var/lib/ansible-last-run -w 432000 -c 604800
- name: Reboot Required
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_reboot_required
- name: Unit backup.service
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_systemd_unit backup.service
hostgroup: "ansible,!role-hypervisor"
- name: Unit backup.timer
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_systemd_unit backup.timer
hostgroup: "ansible,!role-hypervisor"
# Tag-specific checks
# zerotier
- name: Unit zerotier-one.service
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_systemd_unit zerotier-one.service
hostgroup: tag-zt-personal
# For Netbox # For Netbox
secret_netbox_user_pass: !vault | secret_netbox_user_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256 $ANSIBLE_VAULT;1.1;AES256
@ -381,15 +432,6 @@ secret_synapse_db_pass: !vault |
3663623537333161630a616263656362633461366462613366323262363734353233373330393932 3663623537333161630a616263656362633461366462613366323262363734353233373330393932
36653333643632313139396631633962386533323330346639363736353863313763 36653333643632313139396631633962386533323330346639363736353863313763
# For Vaultwarden
secret_vaultwarden_db_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
61396131623266353764386535373334653337353337326464353636343863643733663333333531
6664376235396139616466646462623666663164323461610a336566396135343431356332626337
32373535343266613565313531653061316438313332333261353435366661353437663361346434
3536306466306362340a313563333065383733373834393131306661383932643565373161356162
33643434396436343037656339343336653637356233313034356632626538616366
# For home media stuff # For home media stuff
secret_transmission_user_pass: !vault | secret_transmission_user_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256 $ANSIBLE_VAULT;1.1;AES256

View File

@ -2,27 +2,3 @@
# Docker settings # Docker settings
docker_apt_arch: arm64 docker_apt_arch: arm64
# DB secrets
secret_grafana_local_db_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
32326333383035393665316566363266623130313435353165613463336663393634353261623738
3466636437303938363332633635363666633965386534630a646132666239623666306133313034
63343030613033653964303330643063326636346263363264333061663964373036386536313333
6432613734616361380a346138396335366638323266613963623731633437653964326465373538
63613762633635613232303565383032313164393935303531356666303965663463366335376137
6135376566336662313734333235633362386132333064303534
secret_netbox_local_db_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
33333232623431393930626435313138643963663731336530663066633563666439383936316538
6337376232613937303635386235346561326134616265300a326266373834303137623439366438
33616365353663633434653463643964613231343335326234343331396137363439666138376332
3564356231336230630a336639656337353538633931623536303430363836386137646563613338
66326661313064306162363265303636333765383736336231346136383763613131
secret_keepalived_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
65353963616637303932643435643262333438666566333138373539393836636135656162323965
3036313035343835393439663065326536323464316566340a613966333731356631613536643332
64613934346234316564613564363863356663653063333432316434353633333138643561316638
6563386233656364310a626363663234623161363537323035663663383333353138386239623934
65613231666661633262633439393462316337393532623263363630353133373236

View File

@ -1 +0,0 @@
zerotier_repo_deb: "deb http://download.zerotier.com/debian/jammy jammy main"

View File

@ -1,2 +0,0 @@
# vim:ft=ansible
docker_apt_repository: "deb https://download.docker.com/linux/ubuntu focal stable"

View File

@@ -5,7 +5,4 @@
   become: no
   tasks:
     - name: print os info
-      debug: msg="{{ item }}"
-      with_items:
-        - "{{ ansible_distribution }}"
-        - "{{ ansible_distribution_version }}"
+      debug: msg="{{ inventory_hostname }} - {{ ansible_distribution }} {{ ansible_distribution_version }}"

View File

@@ -22,7 +22,6 @@
     PermitRootLogin: no
     PrintMotd: no
     PubkeyAuthentication: yes
-    Subsystem: "sftp /usr/lib/openssh/sftp-server"
     UsePAM: yes
     X11Forwarding: no
 # We avoid running on "atomic_container" distros since they already ship

View File

@@ -3,7 +3,6 @@
 ---
 # Home desktops
 - hosts: device_roles_bastion
-  gather_facts: no
   roles:
     - role: backup
       vars:

View File

@ -4,26 +4,23 @@
# Home desktops # Home desktops
- hosts: device_roles_workstation - hosts: device_roles_workstation
roles: roles:
- role: backup
vars:
backup_s3backup_exclude_list_extra:
# This isn't prefixed with / because, on ostree systems, this is in /var/home
- "home/*/.var/app/com.valvesoftware.Steam"
- "home/*/.var/app/com.visualstudio.code"
- "home/*/.var/app/com.vscodium.codium"
- "home/*/.cache"
- "home/*/.ollama"
- "home/*/.local/share/containers"
- "home/*/.local/share/Trash"
tags: [ backup ]
- role: desktop
tags: [ desktop ]
- role: udev - role: udev
vars: vars:
udev_rules: udev_rules:
# Switch RCM stuff # Switch RCM stuff
- SUBSYSTEM=="usb", ATTR{idVendor}=="0955", MODE="0664", GROUP="plugdev" - SUBSYSTEM=="usb", ATTR{idVendor}=="0955", MODE="0664", GROUP="plugdev"
tags: [ desktop, udev ] tags: [ desktop, udev ]
- hosts: lap-fw-diy-1.ws.mgmt.desu.ltd
roles:
- role: backup
vars:
backup_s3backup_tar_args_extra: h
backup_s3backup_list_extra:
- /home/salt/.backup/
tags: [ backup ]
- hosts: dsk-ryzen-1.ws.mgmt.desu.ltd
roles:
- role: desktop
- role: backup
vars:
backup_s3backup_tar_args_extra: h
backup_s3backup_list_extra:
- /home/salt/.backup/
tags: [ backup ]

View File

@ -2,8 +2,7 @@
# vim:ft=ansible: # vim:ft=ansible:
--- ---
# Home media storage Pi # Home media storage Pi
- hosts: pi-homeauto-1.home.mgmt.desu.ltd - hosts: srv-fw-13-1.home.mgmt.desu.ltd
gather_facts: no
module_defaults: module_defaults:
docker_container: docker_container:
state: started state: started
@ -15,10 +14,22 @@
tags: [ docker ] tags: [ docker ]
tasks: tasks:
- name: include tasks for apps - name: include tasks for apps
include_tasks: tasks/app/{{ task }} include_tasks: tasks/{{ task }}
with_items: with_items:
- ddns-route53.yml # Home automation shit
- homeassistant.yml - app/ddns-route53.yml
- app/homeassistant.yml
- app/prometheus-netgear-exporter.yml
# Media acquisition
- web/lidarr.yml
- web/prowlarr.yml
- web/radarr.yml
- web/sonarr.yml
- web/bazarr.yml
- web/transmission.yml
# Media presentation
- web/navidrome.yml
- web/jellyfin.yml
loop_control: loop_control:
loop_var: task loop_var: task
tags: [ always ] tags: [ always ]
@ -27,18 +38,11 @@
vars: vars:
backup_s3backup_list_extra: backup_s3backup_list_extra:
- /data - /data
backup_time: "Sun *-*-* 02:00:00"
tags: [ backup ] tags: [ backup ]
- role: ingress - role: ingress-traefik
vars: vars:
ingress_container_image: "nginx:latest" ingress_container_tls: no
ingress_container_ports: ingress_container_dashboard: no
- 80:80
ingress_container_config_mount: /etc/nginx/conf.d
ingress_container_persist_dir: /data/nginx
ingress_listen_args: 80
ingress_listen_tls: no
ingress_servers:
- name: homeauto.local.desu.ltd
proxy_pass: http://localhost:8123
tags: [ ingress ] tags: [ ingress ]
# - role: kodi
# tags: [ kodi ]

View File

@@ -89,6 +89,7 @@
       type: "{{ item.type | default('CNAME', true) }}"
       ttl: 3600
       state: "{{ item.state | default('present', true) }}"
+      zone: "{{ item.zone | default('desu.ltd', true) }}"
       value: [ "{{ item.value }}" ]
     with_items:
       # Public
@@ -108,15 +109,40 @@
         value: vm-general-1.ashburn.mgmt.desu.ltd
       - record: prometheus.desu.ltd
        value: vm-general-1.ashburn.mgmt.desu.ltd
+      # Games
+      - record: 5dd.desu.ltd
+        value: vm-general-1.ashburn.mgmt.desu.ltd
       # Public media stuff
+      # music and jellyfin are proxied through ashburn
+      - record: music.desu.ltd
+        value: vm-general-1.ashburn.mgmt.desu.ltd
+      - record: jellyfin.desu.ltd
+        value: vm-general-1.ashburn.mgmt.desu.ltd
+      - record: lidarr.media.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
       - record: prowlarr.media.desu.ltd
-        value: vm-general-1.ashburn.mgmt.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
+      - record: slskd.media.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
       - record: sonarr.media.desu.ltd
-        value: vm-general-1.ashburn.mgmt.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
       - record: radarr.media.desu.ltd
-        value: vm-general-1.ashburn.mgmt.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
+      - record: bazarr.media.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
       - record: transmission.media.desu.ltd
-        value: vm-general-1.ashburn.mgmt.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
+      # HA
+      - record: homeassistant.desu.ltd
+        value: srv-fw-13-1.home.mgmt.desu.ltd
+      # Secondary projects
+      - record: guncadindex.com
+        value: 5.161.185.67
+        type: A
+        zone: guncadindex.com
+      - record: www.guncadindex.com
+        value: guncadindex.com
+        zone: guncadindex.com
     loop_control:
       label: "{{ item.record }}"
     delegate_to: localhost
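
A quick way to spot-check the records added above once the play has run; the names and expected answers are taken from the values in this diff, so treat them as assumptions about the deployed state rather than guarantees:

```bash
# The new secondary-project apex should resolve straight to the Hetzner VPS...
dig +short guncadindex.com A             # expect 5.161.185.67
# ...while the media hosts now point back home, and www redirects to the apex.
dig +short sonarr.media.desu.ltd CNAME   # expect srv-fw-13-1.home.mgmt.desu.ltd.
dig +short www.guncadindex.com CNAME     # expect guncadindex.com.
```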

View File

@ -2,12 +2,26 @@
# vim:ft=ansible: # vim:ft=ansible:
# Database servers # Database servers
--- ---
- hosts: vm-general-1.ashburn.mgmt.desu.ltd,vm-general-2.ashburn.mgmt.desu.ltd
tasks:
- name: assure postgresql repo key
ansible.builtin.apt_key:
url: https://www.postgresql.org/media/keys/ACCC4CF8.asc
state: present
tags: [ db, psql, repo ]
- name: assure postgresql repo
ansible.builtin.apt_repository:
# Ex. "focal-pgdg main"
repo: 'deb http://apt.postgresql.org/pub/repos/apt {{ ansible_distribution_release }}-pgdg main'
state: present
tags: [ db, psql, repo ]
- hosts: vm-general-1.ashburn.mgmt.desu.ltd - hosts: vm-general-1.ashburn.mgmt.desu.ltd
tasks: tasks:
- name: assure prometheus psql exporter - name: assure prometheus psql exporter
ansible.builtin.docker_container: ansible.builtin.docker_container:
name: prometheus-psql-exporter name: prometheus-psql-exporter
image: quay.io/prometheuscommunity/postgres-exporter image: quay.io/prometheuscommunity/postgres-exporter
restart_policy: unless-stopped
env: env:
DATA_SOURCE_URI: "10.0.0.2:5432/postgres" DATA_SOURCE_URI: "10.0.0.2:5432/postgres"
DATA_SOURCE_USER: "nagios" DATA_SOURCE_USER: "nagios"
@ -18,6 +32,15 @@
roles: roles:
- role: geerlingguy.postgresql - role: geerlingguy.postgresql
vars: vars:
postgresql_version: "14"
postgresql_data_dir: "/var/lib/postgresql/{{ postgresql_version }}/main"
postgresql_bin_path: "/var/lib/postgresql/{{ postgresql_version }}/bin"
postgresql_config_path: "/etc/postgresql/{{ postgresql_version }}/main"
postgresql_packages:
- "postgresql-{{ postgresql_version }}"
- "postgresql-client-{{ postgresql_version }}"
- "postgresql-server-dev-{{ postgresql_version }}"
- libpq-dev
postgresql_global_config_options: postgresql_global_config_options:
- option: listen_addresses - option: listen_addresses
value: 10.0.0.2,127.0.0.1 value: 10.0.0.2,127.0.0.1
@ -59,3 +82,56 @@
lc_ctype: C lc_ctype: C
owner: synapse-desultd owner: synapse-desultd
tags: [ db, psql ] tags: [ db, psql ]
- hosts: vm-general-2.ashburn.mgmt.desu.ltd
tasks:
- name: assure prometheus psql exporter
ansible.builtin.docker_container:
name: prometheus-psql-exporter
image: quay.io/prometheuscommunity/postgres-exporter
restart_policy: unless-stopped
env:
DATA_SOURCE_URI: "10.0.0.2:5432/postgres"
DATA_SOURCE_USER: "nagios"
DATA_SOURCE_PASS: "{{ secret_postgresql_monitoring_password }}"
ports:
- 9102:9187/tcp
tags: [ db, psql, prometheus, monitoring, docker ]
roles:
- role: geerlingguy.postgresql
vars:
postgresql_version: "14"
postgresql_data_dir: "/var/lib/postgresql/{{ postgresql_version }}/main"
postgresql_bin_path: "/var/lib/postgresql/{{ postgresql_version }}/bin"
postgresql_config_path: "/etc/postgresql/{{ postgresql_version }}/main"
postgresql_packages:
- "postgresql-{{ postgresql_version }}"
- "postgresql-client-{{ postgresql_version }}"
- "postgresql-server-dev-{{ postgresql_version }}"
- libpq-dev
postgresql_global_config_options:
- option: listen_addresses
value: 10.0.0.2,127.0.0.1
- option: max_connections
value: 240
- option: shared_buffers
value: 128MB
- option: log_directory
value: 'log'
postgresql_hba_entries:
- { type: local, database: all, user: postgres, auth_method: trust }
- { type: local, database: all, user: all, auth_method: md5 }
- { type: host, database: all, user: all, address: '127.0.0.1/32', auth_method: md5 }
- { type: host, database: all, user: all, address: '::1/128', auth_method: md5 }
# Used for internal access from other nodes
- { type: host, database: all, user: all, address: '10.0.0.0/8', auth_method: md5 }
# Used for internal access from Docker
- { type: host, database: all, user: all, address: '172.16.0.0/12', auth_method: md5 }
postgresql_users:
- name: nagios
password: "{{ secret_postgresql_monitoring_password }}"
- name: guncad-index-prod
password: "{{ secret_gci_db_pass }}"
postgresql_databases:
- name: guncad-index-prod
owner: guncad-index-prod
tags: [ db, psql ]
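
The play above publishes the prometheus postgres-exporter on host port 9102 (per the `ports:` mapping shown for vm-general-2). A minimal smoke test, assuming it is run on the database host itself; `pg_up` is the exporter's own standard metric, not something defined in this repo:

```bash
# Host port 9102 maps to 9187 inside the exporter container (see the ports: entry above).
curl -s http://localhost:9102/metrics | grep '^pg_up'
# pg_up 1 means the exporter can reach PostgreSQL with the nagios credentials.
```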

View File

@ -2,8 +2,131 @@
# vim:ft=ansible: # vim:ft=ansible:
# Webservers # Webservers
--- ---
- hosts: vm-general-2.ashburn.mgmt.desu.ltd
module_defaults:
docker_container:
restart_policy: unless-stopped
pull: yes
pre_tasks:
- name: ensure docker network
docker_network: name=web
tags: [ docker ]
tasks:
- name: docker deploy guncad-index
docker_container:
name: guncad-index
state: started
image: registry.gitlab.com/guncad-index/index:latest
env:
# Global settings
TZ: "America/Chicago"
# Django/Gunicorn settings
GUNCAD_HTTPS: "True"
GUNCAD_ALLOWED_HOSTS: "guncadindex.com"
GUNCAD_CSRF_ORIGINS: "https://guncadindex.com"
GUNCAD_SECRET_KEY: "{{ secret_gci_secret_key }}"
GUNCAD_SITE_ID: com-guncadindex
GUNCAD_GUNICORN_WORKERS: "16"
# GCI settings
GUNCAD_SITE_NAME: GunCAD Index
GUNCAD_SITE_TAGLINE: A search engine for guns
GUNCAD_ADMIN_CONTACT: |
Join the Matrix space <a href="https://matrix.to/#/#guncad-index:matrix.org">#guncad-index:matrix.org</a><br />
Hit me up on twitter <a href="https://x.com/theshittinator" target="_blank">@theshittinator</a><br /><br />
You can also <a href="https://ko-fi.com/theshittinator" target="_blank">support development on ko-fi</a>
# DB connection info
GUNCAD_DB_USER: guncad-index-prod
GUNCAD_DB_PASS: "{{ secret_gci_db_pass }}"
GUNCAD_DB_NAME: guncad-index-prod
GUNCAD_DB_HOST: 10.0.0.2
networks:
- name: web
aliases: [ "guncad-index" ]
volumes:
- /data/guncad-index/data:/data
- /data/guncad-index/lbry:/home/django/.local/share/lbry
tags: [ docker, guncad-index, guncad, index, gci ]
roles:
- role: backup
vars:
backup_s3backup_list_extra:
- /data
- role: ingress
vars:
ingress_head: |
server_tokens off;
open_file_cache max=10000 inactive=6h;
open_file_cache_valid 5m;
open_file_cache_min_uses 1;
open_file_cache_errors on;
geo $whitelist {
{{ common_home_address }}/{{ common_home_address_mask }} 1;
}
map $whitelist $limit {
0 $binary_remote_addr;
1 "";
}
limit_req_zone $limit zone=site:10m rate=20r/s;
limit_req_zone $limit zone=api:10m rate=20r/s;
proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=gci_cache:100m inactive=60m;
proxy_cache_key "$scheme$request_method$host$request_uri";
ingress_container_volumes_extra:
- /data/guncad-index/data/static:/var/www/gci/static:ro
- /data/nginx-certbot/proxy_cache:/var/cache/nginx/proxy_cache
ingress_servers:
- name: guncadindex.com
proxies:
- location: "/"
extra: |
set $bypass_cache 0;
if ($arg_sort = "random") {
set $bypass_cache 1;
}
if ($uri ~* "^/(api|admin|tools)") {
set $bypass_cache 1;
}
proxy_cache gci_cache;
proxy_cache_bypass $bypass_cache $cookie_sessionid;
proxy_no_cache $bypass_cache $cookie_sessionid;
proxy_cache_valid 200 30m;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
limit_req_status 429;
limit_req zone=site burst=25 delay=10;
add_header X-Cache $upstream_cache_status;
error_page 502 /static/maintenance.html;
pass: http://guncad-index:8080
- location: "/api"
extra: |
limit_req_status 429;
limit_req zone=api burst=50 delay=10;
pass: http://guncad-index:8080
- location: "/admin"
extra: |
limit_req_status 429;
limit_req zone=site burst=25 delay=10;
pass: http://guncad-index:8080
locations:
- location: "/static"
contents: |
root /var/www/gci;
expires 1y;
add_header X-Content-Type-Options nosniff;
add_header Cache-Control "public, max-age=31536000, immutable";
sendfile on;
tcp_nopush on;
tcp_nodelay on;
- location: "/static/maintenance.html"
contents: |
root /var/www/gci;
- name: www.guncadindex.com
locations:
- location: "/"
contents: |
return 301 $scheme://guncadindex.com$request_uri;
tags: [ web, docker, ingress ]
 - hosts: vm-general-1.ashburn.mgmt.desu.ltd
-  gather_facts: no
+  #gather_facts: no
   module_defaults:
     docker_container:
       restart_policy: unless-stopped
@@ -29,16 +152,14 @@
       - web/nextcloud.yml
       - web/synapse.yml
       # Backend web services
-      - web/prowlarr.yml
-      - web/radarr.yml
-      - web/sonarr.yml
       - web/srv.yml
-      - web/transmission.yml
       # Games
       - game/factorio.yml
       - game/minecraft-createfarming.yml
-      - game/minecraft-direwolf20.yml
+      - game/minecraft-magicpack.yml
+      - game/minecraft-weedie.yml
       - game/zomboid.yml
+      - game/satisfactory.yml
     tags: [ always ]
   roles:
     - role: backup
@@ -47,7 +168,9 @@
         - /app/gitea/gitea
         - /data
       backup_s3backup_exclude_list_extra:
+        - /data/minecraft/magicpack/backups
         - /data/minecraft/direwolf20/backups
+        - /data/minecraft/weedie/backups
         - /data/shared/media
         - /data/shared/downloads
         - /data/zomboid/ZomboidDedicatedServer/steamapps/workshop
@@ -60,16 +183,22 @@
       tags: [ web, git ]
     - role: prometheus
       tags: [ prometheus, monitoring, no-test ]
-    - role: nagios
+    - role: gameserver-terraria
       vars:
-        # Definitions for contacts and checks are defined in inventory vars
-        # See group_vars/all.yml if you need to change those
-        nagios_matrix_server: "https://matrix.desu.ltd"
-        nagios_matrix_room: "!NWNCKlNmOTcarMcMIh:desu.ltd"
-        nagios_matrix_token: "{{ secret_nagios_matrix_token }}"
-        nagios_data_dir: /data/nagios
-        nagios_admin_pass: "{{ secret_nagios_admin_pass }}"
-      tags: [ nagios, no-auto ]
+        terraria_server_name: "lea-wants-to-play"
+        terraria_motd: "DID SOMEBODY SAY MEATLOAF??"
+        terraria_world_name: "SuperBepisLand"
+        terraria_world_seed: "Make it 'all eight'. As many eights as you can fit in the text box."
+        terraria_mods: "{{ tml_basics + tml_basic_qol + tml_libs + tml_calamity + tml_yoyo_revamp + tml_calamity_classes + tml_summoners_association }}"
+      tags: [ terraria, tmodloader, lea ]
+    # - role: gameserver-terraria
+    #   vars:
+    #     terraria_server_remove: yes
+    #     terraria_server_name: "generic"
+    #     terraria_world_name: "Seaborgium"
+    #     terraria_world_seed: "benis"
+    #     terraria_mods: "{{ tml_basic_qol + tml_advanced_qol + tml_libs + tml_basics + tml_calamity + tml_calamity_classes + tml_calamity_clamity + tml_fargos + tml_touhou + tml_yoyo_revamp + tml_spirit + tml_secrets + tml_yoyo_revamp }}"
+    #   tags: [ terraria, tmodloader, generic ]
     - role: ingress
       vars:
         ingress_head: |
@@ -111,12 +240,12 @@
           pass: http://element:80
         directives:
           - "client_max_body_size 0"
-      - name: nagios.desu.ltd
-        proxy_pass: http://nagios:80
       - name: nc.desu.ltd
         directives:
           - "add_header Strict-Transport-Security \"max-age=31536000\""
           - "client_max_body_size 0"
+          - "keepalive_requests 99999"
+          - "keepalive_timeout 600"
         proxy_pass: http://nextcloud:80
         locations:
           - location: "^~ /.well-known"
@@ -137,27 +266,11 @@
           - "allow 45.79.58.44/32" # bastion1.dallas.mgmt.desu.ltd
           - "deny all"
         proxy_pass: http://prometheus:9090
-      # desu.ltd media bullshit
-      - name: prowlarr.media.desu.ltd
-        directives:
-          - "allow {{ common_home_address }}/{{ common_home_address_mask }}"
-          - "deny all"
-        proxy_pass: http://prowlarr:9696
-      - name: sonarr.media.desu.ltd
-        directives:
-          - "allow {{ common_home_address }}/{{ common_home_address_mask }}"
-          - "deny all"
-        proxy_pass: http://sonarr:8989
-      - name: radarr.media.desu.ltd
-        directives:
-          - "allow {{ common_home_address }}/{{ common_home_address_mask }}"
-          - "deny all"
-        proxy_pass: http://radarr:7878
-      - name: transmission.media.desu.ltd
-        directives:
-          - "allow {{ common_home_address }}/{{ common_home_address_mask }}"
-          - "deny all"
-        proxy_pass: http://transmission:9091
+      # media.desu.ltd proxies
+      - name: music.desu.ltd
+        proxy_pass: http://zt0.srv-fw-13-1.home.mgmt.desu.ltd
+      - name: jellyfin.desu.ltd
+        proxy_pass: http://zt0.srv-fw-13-1.home.mgmt.desu.ltd
       # 9iron
       - name: www.9iron.club
         directives:
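The X-Cache header added above makes the new proxy cache easy to spot-check from a shell. A minimal sketch, assuming the guncadindex.com vhost is live and nothing downstream strips the header:

# Expect MISS on the first request, HIT on a repeat, and BYPASS when ?sort=random trips $bypass_cache
curl -sI https://guncadindex.com/ | grep -i '^x-cache'
curl -sI https://guncadindex.com/ | grep -i '^x-cache'
curl -sI 'https://guncadindex.com/?sort=random' | grep -i '^x-cache'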

View File

@ -1,5 +0,0 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
# Supplementary tags
- import_playbook: tags_ansible.yml

View File

@@ -8,3 +8,5 @@
 - import_playbook: prod_web.yml
 # Home automation stuff
 - import_playbook: home_automation.yml
+# Backup management stuff
+- import_playbook: tags_restic-prune.yml

View File

@ -1,8 +0,0 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- hosts: tags_ansible
gather_facts: no
roles:
- role: ansible
tags: [ ansible ]

View File

@@ -3,34 +3,11 @@
 ---
 - hosts: tags_autoreboot
   gather_facts: no
-  module_defaults:
-    nagios:
-      author: Ansible
-      action: downtime
-      cmdfile: /data/nagios/var/rw/nagios.cmd
-      comment: "Ansible tags_autoreboot task"
-      host: "{{ inventory_hostname }}"
-      minutes: 10
   serial: 1
   tasks:
     - name: check for reboot-required
       ansible.builtin.stat: path=/var/run/reboot-required
       register: s
     - name: reboot
-      block:
-        - name: attempt to schedule downtime
-          block:
-            - name: register nagios host downtime
-              nagios:
-                service: host
-              delegate_to: vm-general-1.ashburn.mgmt.desu.ltd
-            - name: register nagios service downtime
-              nagios:
-                service: all
-              delegate_to: vm-general-1.ashburn.mgmt.desu.ltd
-          rescue:
-            - name: notify of failure to reboot
-              ansible.builtin.debug: msg="Miscellaneous failure when scheduling downtime"
-        - name: reboot
-          ansible.builtin.reboot: reboot_timeout=600
+      ansible.builtin.reboot: reboot_timeout=600
       when: s.stat.exists

View File

@@ -2,71 +2,56 @@
 # vim:ft=ansible:
 ---
 - hosts: tags_nagios
-  gather_facts: no
-  roles:
-    - role: git
-      vars:
-        git_repos:
-          - repo: https://git.desu.ltd/salt/monitoring-scripts
-            dest: /usr/local/bin/monitoring-scripts
-      tags: [ nagios, git ]
+  gather_facts: yes
   tasks:
-    - name: assure nagios plugin packages
-      ansible.builtin.apt: name=monitoring-plugins,nagios-plugins-contrib
-      tags: [ nagios ]
-    - name: assure nagios user
-      ansible.builtin.user: name=nagios-checker state=present system=yes
-      tags: [ nagios ]
-    - name: assure nagios user ssh key
-      authorized_key:
-        user: nagios-checker
-        state: present
-        key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNavw28C0mKIQVRLQDW2aoovliU1XCGaenDhIMwumK/ Nagios monitoring"
-      tags: [ nagios ]
-    - name: assure nagios user sudo rule file
-      ansible.builtin.file: path=/etc/sudoers.d/50-nagios-checker mode=0750 owner=root group=root state=touch modification_time=preserve access_time=preserve
-      tags: [ nagios, sudo ]
-    - name: assure nagios user sudo rules
-      ansible.builtin.lineinfile:
-        path: /etc/sudoers.d/50-nagios-checker
-        line: "nagios-checker ALL = (root) NOPASSWD: {{ item }}"
-      with_items:
-        - /usr/lib/nagios/plugins/check_disk
-        - /usr/local/bin/monitoring-scripts/check_docker
-        - /usr/local/bin/monitoring-scripts/check_temp
-      tags: [ nagios, sudo ]
-    - name: assure prometheus node exporter
-      # https://github.com/prometheus/node_exporter
-      ansible.builtin.docker_container:
-        name: prometheus-node-exporter
-        image: quay.io/prometheus/node-exporter:latest
-        command:
-          - '--path.rootfs=/host'
-          - '--collector.interrupts'
-          - '--collector.processes'
-        network_mode: host
-        pid_mode: host
-        volumes:
-          - /:/host:ro,rslave
-      tags: [ prometheus ]
-    - name: assure prometheus cadvisor exporter
-      ansible.builtin.docker_container:
-        name: prometheus-cadvisor-exporter
-        image: gcr.io/cadvisor/cadvisor:latest
-        ports:
-          - 9101:8080/tcp
-        volumes:
-          - /:/rootfs:ro
-          - /var/run:/var/run:ro
-          - /sys:/sys:ro
-          - /var/lib/docker:/var/lib/docker:ro
-          - /dev/disk:/dev/disk:ro
-        devices:
-          - /dev/kmsg
-- hosts: all
-  gather_facts: no
-  tasks:
-    - name: disable nagios user when not tagged
-      ansible.builtin.user: name=nagios-checker state=absent remove=yes
-      when: "'tags_nagios' not in group_names"
-      tags: [ nagios ]
+    - name: assure prometheus containers for docker hosts
+      block:
+        - name: assure prometheus node exporter
+          # https://github.com/prometheus/node_exporter
+          ansible.builtin.docker_container:
+            name: prometheus-node-exporter
+            image: quay.io/prometheus/node-exporter:latest
+            restart_policy: unless-stopped
+            command:
+              - '--path.rootfs=/host'
+              - '--collector.interrupts'
+              - '--collector.processes'
+            network_mode: host
+            pid_mode: host
+            volumes:
+              - /:/host:ro,rslave
+          tags: [ prometheus ]
+        - name: assure prometheus cadvisor exporter
+          ansible.builtin.docker_container:
+            name: prometheus-cadvisor-exporter
+            image: gcr.io/cadvisor/cadvisor:latest
+            restart_policy: unless-stopped
+            ports:
+              - 9101:8080/tcp
+            volumes:
+              - /:/rootfs:ro
+              - /var/run:/var/run:ro
+              - /sys:/sys:ro
+              - /var/lib/docker:/var/lib/docker:ro
+              - /dev/disk:/dev/disk:ro
+            devices:
+              - /dev/kmsg
+      when: ansible_pkg_mgr != "atomic_container"
+    - name: assure prometheus containers for coreos
+      block:
+        - name: assure prometheus node exporter
+          # https://github.com/prometheus/node_exporter
+          containers.podman.podman_container:
+            name: prometheus-node-exporter
+            image: quay.io/prometheus/node-exporter:latest
+            restart_policy: unless-stopped
+            command:
+              - '--path.rootfs=/host'
+              - '--collector.interrupts'
+              - '--collector.processes'
+            network_mode: host
+            pid_mode: host
+            volumes:
+              - /:/host:ro,rslave
+          tags: [ prometheus ]
+      when: ansible_pkg_mgr == "atomic_container"

playbooks/tags_restic-prune.yml (new executable file, +10)
View File

@ -0,0 +1,10 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- hosts: tags_restic-prune
roles:
- role: backup
vars:
backup_restic: no
backup_restic_prune: yes
tags: [ backup, prune, restic, restic-prune ]
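A hedged way to exercise the new prune playbook on its own, assuming the prod inventory for this repo is already configured (the host pattern comes from the play above):

# Dry-run the prune role against its host group before the timer does it for real
ansible-playbook playbooks/tags_restic-prune.yml --limit tags_restic-prune --check --diff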

View File

@@ -7,7 +7,7 @@
   docker_container:
     name: ddns-route53
     state: started
-    image: crazymax/ddns-route53:latest
+    image: ghcr.io/crazy-max/ddns-route53:latest
     restart_policy: unless-stopped
     pull: yes
     env:

View File

@@ -2,7 +2,7 @@
 - name: docker deploy gitlab runner
   docker_container:
     name: gitlab-runner
-    image: gitlab/gitlab-runner:latest
+    image: registry.gitlab.com/gitlab-org/gitlab-runner:latest
     restart_policy: unless-stopped
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock

View File

@@ -2,7 +2,7 @@
 - name: docker deploy homeassistant
   docker_container:
     name: homeassistant
-    image: "ghcr.io/home-assistant/raspberrypi4-homeassistant:stable"
+    image: ghcr.io/home-assistant/home-assistant:latest
     privileged: yes
     network_mode: host
     volumes:

View File

@ -0,0 +1,30 @@
# vim:ft=ansible:
#
# Bless this man. Bless him dearly:
# https://github.com/DRuggeri/netgear_exporter
#
- name: docker deploy netgear prometheus exporter
vars:
netgear_admin_password: !vault |
$ANSIBLE_VAULT;1.1;AES256
31346635363565363532653831613034376535653530376237343261623736326230393333326337
3062643963353334323439306361356437653834613832310a666366393662303166313733393831
32373465356638393138633963666337643333303435653537666361363437633533333263303938
6536353530323036350a656330326662373836393736383961393537666537353138346439626566
64336631656538343335343535343338613465393635333937656237333531303230
docker_container:
name: prometheus-netgear-exporter
image: ghcr.io/druggeri/netgear_exporter
env:
NETGEAR_EXPORTER_PASSWORD: "{{ netgear_admin_password }}"
networks:
- name: web
aliases: [ "redis" ]
ports:
- "9192:9192/tcp"
command:
- "--url=http://192.168.1.1:5000" # Set the URL to the SOAP port of the router, NOT the admin interface
- "--insecure" # Required when accessing over IP
- "--timeout=15" # The router is slow as balls
- "--filter.collectors=Client,Traffic" # Filter out SystemInfo to lower collection time
tags: [ docker, prometheus, netgear, prometheus-netgear ]
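Since the exporter publishes 9192:9192 on the host, a quick smoke test is possible once the container is up; a sketch, assuming it is run from the Docker host itself:

# The exporter should answer on its published port; metric names are whatever upstream emits
curl -s http://localhost:9192/metrics | head -n 5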

View File

@@ -2,7 +2,7 @@
 - name: docker deploy redis
   docker_container:
     name: redis
-    image: redis:6-alpine
+    image: docker.io/redis:6-alpine
     networks:
       - name: web
         aliases: [ "redis" ]

View File

@@ -3,7 +3,7 @@
   docker_container:
     name: factorio
     state: absent
-    image: factoriotools/factorio:stable
+    image: docker.io/factoriotools/factorio:stable
     restart_policy: unless-stopped
     interactive: yes
     pull: yes

View File

@@ -2,16 +2,28 @@
 - name: docker deploy minecraft - create farming and delights
   docker_container:
     name: minecraft-createfarming
-    state: started
-    image: itzg/minecraft-server:latest
-    restart_policy: unless-stopped
-    pull: yes
+    state: absent
+    image: ghcr.io/itzg/minecraft-server:latest
     env:
+      # Common envvars
       EULA: "true"
+      OPS: "VintageSalt"
+      SNOOPER_ENABLED: "false"
+      SPAWN_PROTECTION: "0"
+      USE_AIKAR_FLAGS: "true"
+      RCON_CMDS_STARTUP: |-
+        scoreboard objectives add Deaths deathCount
+        #scoreboard objectives add Health health {"text":"❤","color":"red"}
+      RCON_CMDS_ON_CONNECT: |-
+        scoreboard objectives setdisplay list Deaths
+        #scoreboard objectives setdisplay belowName Health
+      # Pack-specific stuff
       MODRINTH_PROJECT: "https://modrinth.com/modpack/create-farmersdelight/version/1.0.0"
+      MOTD: "Create Farming and Delights! Spinny trains!"
       TYPE: "MODRINTH"
       VERSION: "1.20.1"
       MAX_MEMORY: "6G"
+      #VIEW_DISTANCE: "10"
     ports:
       - "25565:25565/tcp"
       - "25565:25565/udp"

View File

@ -1,34 +0,0 @@
# vim:ft=ansible:
- name: docker deploy minecraft - direwolf20
docker_container:
name: minecraft-direwolf20
state: absent
image: itzg/minecraft-server:latest
restart_policy: unless-stopped
pull: yes
env:
EULA: "true"
GENERIC_PACK: "/modpacks/1.20.1-direwolf20/Da Bois.zip"
TYPE: "NEOFORGE"
VERSION: "1.20.1"
FORGE_VERSION: "47.1.105"
MEMORY: "8G"
MOTD: "Tannerite Dog Edition\\n#abolishtheatf"
OPS: "VintageSalt"
RCON_CMDS_STARTUP: |-
scoreboard objectives add Deaths deathCount
scoreboard objectives add Health health {"text":"❤","color":"red"}
RCON_CMDS_ON_CONNECT: |-
scoreboard objectives setdisplay list Deaths
scoreboard objectives setdisplay belowName Health
SNOOPER_ENABLED: "false"
SPAWN_PROTECTION: "0"
USE_AIKAR_FLAGS: "true"
VIEW_DISTANCE: "10"
ports:
- "25567:25565/tcp"
- "25567:25565/udp"
volumes:
- /data/srv/packs:/modpacks
- /data/minecraft/direwolf20:/data
tags: [ docker, minecraft, direwolf20 ]

View File

@ -0,0 +1,50 @@
# vim:ft=ansible:
- name: docker deploy minecraft - magicpack
docker_container:
name: minecraft-magicpack
state: absent
image: ghcr.io/itzg/minecraft-server:java8
env:
# Common envvars
EULA: "true"
OPS: "VintageSalt"
SNOOPER_ENABLED: "false"
SPAWN_PROTECTION: "0"
USE_AIKAR_FLAGS: "true"
#
# This enables the use of Ely.by as an auth and skin server
# Comment this and the above line out if you'd like to use Mojang's
# https://docs.ely.by/en/authlib-injector.html
#
# All players should register on Ely.by in order for this to work.
# They should also use Fjord Launcher by Unmojang:
# https://github.com/unmojang/FjordLauncher
#
JVM_OPTS: "-javaagent:/authlib-injector.jar=ely.by"
RCON_CMDS_STARTUP: |-
scoreboard objectives add Deaths deathCount
#scoreboard objectives add Health health {"text":"❤","color":"red"}
RCON_CMDS_ON_CONNECT: |-
scoreboard objectives setdisplay list Deaths
#scoreboard objectives setdisplay belowName Health
# Pack-specific stuff
MODRINTH_PROJECT: "https://srv.9iron.club/files/packs/1.7.10-magicpack/server.mrpack"
MOTD: "It's ya boy, uh, skrunkly modpack"
TYPE: "MODRINTH"
VERSION: "1.7.10"
MAX_MEMORY: "6G"
#VIEW_DISTANCE: "10"
ports:
- "25565:25565/tcp"
- "25565:25565/udp"
- "24454:24454/udp"
# Prometheus exporter for Forge
# https://www.curseforge.com/minecraft/mc-mods/prometheus-exporter
- "19565:19565/tcp"
# Prometheus exporter for Fabric
# https://modrinth.com/mod/fabricexporter
#- "19565:25585/tcp"
volumes:
- /data/minecraft/magicpack:/data
- /data/minecraft/authlib-injector-1.2.5.jar:/authlib-injector.jar
tags: [ docker, minecraft, magicpack ]

View File

@ -1,33 +0,0 @@
# vim:ft=ansible:
- name: docker deploy minecraft - vanilla
docker_container:
name: minecraft-vanilla
state: absent
image: itzg/minecraft-server:latest
restart_policy: unless-stopped
pull: yes
env:
DIFFICULTY: "normal"
ENABLE_COMMAND_BLOCK: "true"
EULA: "true"
MAX_PLAYERS: "8"
MODRINTH_PROJECT: "https://modrinth.com/modpack/adrenaserver"
MOTD: "Tannerite Dog Edition\\n#abolishtheatf"
OPS: "VintageSalt"
RCON_CMDS_STARTUP: |-
scoreboard objectives add Deaths deathCount
scoreboard objectives add Health health {"text":"❤","color":"red"}
RCON_CMDS_ON_CONNECT: |-
scoreboard objectives setdisplay list Deaths
scoreboard objectives setdisplay belowName Health
SNOOPER_ENABLED: "false"
SPAWN_PROTECTION: "0"
TYPE: "MODRINTH"
USE_AIKAR_FLAGS: "true"
VIEW_DISTANCE: "12"
ports:
- "26565:25565/tcp"
- "26565:25565/udp"
volumes:
- /data/minecraft/vanilla:/data
tags: [ docker, minecraft ]

View File

@ -0,0 +1,44 @@
# vim:ft=ansible:
- name: docker deploy minecraft - weediewack next gen pack
docker_container:
name: minecraft-weedie
state: absent
image: ghcr.io/itzg/minecraft-server:latest
env:
# Common envvars
EULA: "true"
OPS: "VintageSalt"
SNOOPER_ENABLED: "false"
SPAWN_PROTECTION: "0"
USE_AIKAR_FLAGS: "true"
ALLOW_FLIGHT: "true"
RCON_CMDS_STARTUP: |-
scoreboard objectives add Deaths deathCount
scoreboard objectives add Health health {"text":"❤","color":"red"}
RCON_CMDS_ON_CONNECT: |-
scoreboard objectives setdisplay list Deaths
scoreboard objectives setdisplay belowName Health
# Pack-specific stuff
TYPE: "Forge"
MOTD: "We're doing it a-fucking-gain!"
VERSION: "1.20.1"
FORGE_VERSION: "47.3.11"
MAX_MEMORY: "8G"
#GENERIC_PACKS: "Server Files 1.3.7"
#GENERIC_PACKS_PREFIX: "https://mediafilez.forgecdn.net/files/5832/451/"
#GENERIC_PACKS_SUFFIX: ".zip"
#SKIP_GENERIC_PACK_UPDATE_CHECK: "true"
#VIEW_DISTANCE: "10"
ports:
- "25565:25565/tcp"
- "25565:25565/udp"
- "24454:24454/udp"
# Prometheus exporter for Forge
# https://www.curseforge.com/minecraft/mc-mods/prometheus-exporter
- "19566:19565/tcp"
# Prometheus exporter for Fabric
# https://modrinth.com/mod/fabricexporter
#- "19565:25585/tcp"
volumes:
- /data/minecraft/weedie:/data
tags: [ docker, minecraft, weedie ]

View File

@ -0,0 +1,47 @@
# vim:ft=ansible:
- name: ensure docker network
docker_network: name=satisfactory
tags: [ satisfactory, docker, network ]
- name: docker deploy satisfactory
docker_container:
name: satisfactory
state: absent
image: ghcr.io/wolveix/satisfactory-server:latest
restart_policy: unless-stopped
pull: yes
networks:
- name: satisfactory
aliases: [ "gameserver" ]
env:
MAXPLAYERS: "8"
# We have this turned on for modding's sake
#SKIPUPDATE: "true"
ports:
- '7777:7777/udp'
- '7777:7777/tcp'
volumes:
- /data/satisfactory/config:/config
tags: [ docker, satisfactory ]
- name: docker deploy satisfactory sftp
docker_container:
name: satisfactory-sftp
state: absent
image: ghcr.io/atmoz/sftp/alpine:latest
restart_policy: unless-stopped
pull: yes
ulimits:
- 'nofile:262144:262144'
ports:
- '7776:22/tcp'
volumes:
- /data/satisfactory/config:/home/servermgr/game
command: 'servermgr:{{ server_password }}:1000'
vars:
server_password: !vault |
$ANSIBLE_VAULT;1.1;AES256
33336138656461646462323661363336623235333861663730373535656331623230313334353239
6535623833343237626161383833663435643262376133320a616634613764396661316332373339
33633662366666623931643635313162366339306539666632643437396637616632633432326631
3038333932623638390a386362653463306338326436396230633562313466336464663764643461
3134
tags: [ docker, satisfactory, sidecar, sftp ]

View File

@@ -16,13 +16,13 @@
       ADMIN_USERNAME: "Salt"
       ADMIN_PASSWORD: "SuperMegaDicks"
       MAX_PLAYERS: "8"
-      MAP_NAMES: "vehicle_interior;MotoriousExpandedSpawnZones,VehicleSpawnZonesExpandedRedRace;Louisville"
+      MAP_NAMES: "vehicle_interior;MotoriousExpandedSpawnZones;VehicleSpawnZonesExpandedRedRace;AZSpawn;Louisville"
       # Generating this list by hand is asinine
       # Go here: https://getcollectionids.moonguy.me/
       # Use this: https://steamcommunity.com/sharedfiles/filedetails/?id=3145884377
       # Or this: 3145884377
       # Add mods to that collection if you want to add them here, then regen these two fields.
-      MOD_NAMES: "P4HasBeenRead;AutoSewing;AutoMechanics;BulbMechanics;ShowBulbCondition;modoptions;BoredomTweaks;MoreCLR_desc4mood;MiniHealthPanel;CombatText;manageContainers;EQUIPMENT_UI;ModManager;MoreDescriptionForTraits4166;SkillRecoveryJournal;RV_Interior_MP;RV_Interior_Vanilla;FRUsedCars;FRUsedCarsNRN;Lingering Voices;MapSymbolSizeSlider;VISIBLE_BACKPACK_BACKGROUND;BetterSortCC;MapLegendUI;BB_CommonSense;DRAW_ON_MAP;coavinsfirearmbase;coavinsfirearmsupport1;coavinsfirearmsupport2;coavinsfirearmsupport3;coavinsfirearmsupport4;coavinsfirearmsupport5;Shrek1and2intheirENTIRETYasvhs's;NoVanillaVehicles;AnotherPlayersOnMinimap;AnimSync;DescriptiveSkillTooltips;darkPatches;noirrsling;Susceptible;ToadTraits;TheStar;BION_PlainMoodles;FH;ProximityInventory;SlowConsumption;MaintenanceImprovesRepair;fhqExpVehSpawn;fhqExpVehSpawnGageFarmDisable;fhqExpVehSpawnM911FarmDisable;fhqExpVehSpawnP19AFarmDisable;fhqExpVehSpawnNoVanilla;fhqExpVehSpawnRedRace;RUNE-EXP;NestedContainer01;AddRandomSprinters;TrueActionsDancing;VFExpansion1;Squishmallows;DeLoreanDMC-12;1989Porsche911Turbo;suprabase;IceCreamTruckFreezer;GarbageTruck;T3;MarTraitsBlind;BraStorage;KuromiBackpack;TalsCannedRat;happygilmoretape;SimpleReadWhileWalking41;FasterHoodOpening;SchizophreniaTrait;TwinkiesVan;LouisVille SP;hf_point_blank;UIAPI;WaterDispenser;TheOnlyCure;FancyHandwork;BrutalHandwork;WanderingZombies;AuthenticZLite;ReloadAllMagazines;jiggasGreenfireMod;amclub;SpnClothHideFix;SpnOpenCloth;SpnHairAPI;PwSleepingbags;Video_Game_Consoles;metal_mod_pariah;truemusic;tm_grunge;TPAM;EasyLaundry;DropRollMod;9301;No Mo Culling;SpnCloth;SpnClothHideFix;SpnHair;lore_friendly_music;AmmoLootDropVFE;tsarslib;ItemTweakerAPIExtraClothingAddon;ItemTweakerAPI;TsarcraftCache2;TrueMusicMoodImprovement;StickyWeight"
+      MOD_NAMES: "P4HasBeenRead;AutoSewing;AutoMechanics;BulbMechanics;ShowBulbCondition;modoptions;BoredomTweaks;MoreCLR_desc4mood;MiniHealthPanel;CombatText;manageContainers;EQUIPMENT_UI;ModManager;MoreDescriptionForTraits4166;SkillRecoveryJournal;RV_Interior_MP;RV_Interior_Vanilla;FRUsedCars;FRUsedCarsNRN;Lingering Voices;MapSymbolSizeSlider;VISIBLE_BACKPACK_BACKGROUND;BetterSortCC;MapLegendUI;BB_CommonSense;DRAW_ON_MAP;coavinsfirearmbase;coavinsfirearmsupport1;coavinsfirearmsupport2;coavinsfirearmsupport3;coavinsfirearmsupport4;coavinsfirearmsupport5;Shrek1and2intheirENTIRETYasvhs's;NoVanillaVehicles;AnotherPlayersOnMinimap;AnimSync;DescriptiveSkillTooltips;darkPatches;noirrsling;Susceptible;ToadTraits;TheStar;BION_PlainMoodles;FH;ProximityInventory;SlowConsumption;MaintenanceImprovesRepair;fhqExpVehSpawn;fhqExpVehSpawnGageFarmDisable;fhqExpVehSpawnM911FarmDisable;fhqExpVehSpawnP19AFarmDisable;fhqExpVehSpawnNoVanilla;fhqExpVehSpawnRedRace;RUNE-EXP;NestedContainer01;AddRandomSprinters;TrueActionsDancing;VFExpansion1;Squishmallows;DeLoreanDMC-12;1989Porsche911Turbo;suprabase;IceCreamTruckFreezer;GarbageTruck;T3;MarTraitsBlind;BraStorage;KuromiBackpack;TalsCannedRat;happygilmoretape;SimpleReadWhileWalking41;FasterHoodOpening;SchizophreniaTrait;TwinkiesVan;LouisVille SP;hf_point_blank;UIAPI;WaterDispenser;TheOnlyCure;FancyHandwork;BrutalHandwork;WanderingZombies;Authentic Z - Current;ReloadAllMagazines;jiggasGreenfireMod;amclub;SpnClothHideFix;SpnOpenCloth;SpnHairAPI;PwSleepingbags;Video_Game_Consoles;metal_mod_pariah;truemusic;tm_grunge;TPAM;EasyLaundry;DropRollMod;9301;No Mo Culling;SpnCloth;SpnClothHideFix;SpnHair;lore_friendly_music;AmmoLootDropVFE;tsarslib;ItemTweakerAPIExtraClothingAddon;ItemTweakerAPI;TsarcraftCache2;TrueMusicMoodImprovement;StickyWeight"
       MOD_WORKSHOP_IDS: "2544353492;2584991527;2588598892;2778537451;2964435557;2169435993;2725360009;2763647806;2866258937;2286124931;2650547917;2950902979;2694448564;2685168362;2503622437;2822286426;1510950729;2874678809;2734705913;2808679062;2313387159;2710167561;2875848298;2804531012;3101379739;3138722707;2535461640;3117340325;2959512313;3134776712;2949818236;2786499395;2795677303;1299328280;2619072426;3008416736;2447729538;2847184718;2864231031;2920089312;2793164190;2758443202;2946221823;2797104510;2648779556;2667899942;3109119611;1687801932;2567438952;2689292423;2783373547;2783580134;2748047915;3121062639;3045079599;3022845661;3056136040;3163764362;2845952197;2584112711;2711720885;2838950860;2849247394;2678653895;2990322197;2760035814;2687798127;2949998111;3115293671;3236152598;2904920097;2934621024;2983905789;2335368829;2907834593;2920899878;1703604612;2778576730;2812326159;3041733782;2714848168;2831786301;2853710135;2613146550;2810869183;2717792692;2925034918;2908614026;2866536557;2684285534;2463184726;2839277937;3041910754;2392709985;2810800927;566115016;2688809268;3048902085;2997503254"
       RCON_PASSWORD: "SuperMegaDicks"
       SERVER_NAME: "The Salty Spitoon"

View File

@ -0,0 +1,39 @@
# vim:ft=ansible:
#
# This is a really stupid game, source here:
# https://github.com/Oliveriver/5d-diplomacy-with-multiverse-time-travel
#
- name: docker deploy 5d-diplomacy-with-multiverse-timetravel
docker_container:
name: 5d-diplomacy-with-multiverse-timetravel
state: started
#image: deluan/5d-diplomacy-with-multiverse-timetravel:latest
image: rehashedsalt/5dd:latest
env:
ConnectionStrings__Database: "Server=5dd-mssql;Database=diplomacy;User=SA;Password={{ five_dd_db_pass }};Encrypt=True;TrustServerCertificate=True"
networks:
- name: web
aliases: [ "5d-diplomacy-with-multiverse-timetravel" ]
# For unproxied use
ports:
- 5173:8080
labels:
traefik.enable: "true"
traefik.http.routers.5d-diplomacy-with-multiverse-timetravel.rule: Host(`5dd.desu.ltd`)
traefik.http.routers.5d-diplomacy-with-multiverse-timetravel.entrypoints: web
tags: [ docker, 5d-diplomacy-with-multiverse-timetravel ]
- name: docker deploy 5dd mssql db
docker_container:
name: 5dd-mssql
image: mcr.microsoft.com/mssql/server:2022-latest
user: root
env:
ACCEPT_EULA: "y"
MSSQL_SA_PASSWORD: "{{ five_dd_db_pass }}"
volumes:
- /data/5dd/mssql/data:/var/opt/mssql/data
- /data/5dd/mssql/log:/var/opt/mssql/log
- /data/5dd/mssql/secrets:/var/opt/mssql/secrets
networks:
- name: web
aliases: [ "5dd-mssql" ]

View File

@@ -2,7 +2,7 @@
 - name: docker deploy 9iron
   docker_container:
     name: 9iron
-    image: rehashedsalt/9iron:latest
+    image: docker.io/rehashedsalt/9iron:latest
     networks:
       - name: web
         aliases: [ "9iron" ]

View File

@ -0,0 +1,17 @@
# vim:ft=ansible:
- name: docker deploy bazarr
docker_container:
name: bazarr
image: ghcr.io/linuxserver/bazarr:latest
networks:
- name: web
aliases: [ "bazarr" ]
volumes:
- /data/bazarr/config:/config
- /data/shared/downloads:/data
- /data/shared/media/shows:/tv
labels:
traefik.enable: "true"
traefik.http.routers.bazarr.rule: Host(`bazarr.media.desu.ltd`)
traefik.http.routers.bazarr.entrypoints: web
tags: [ docker, bazarr ]

View File

@@ -2,7 +2,7 @@
 - name: docker deploy desultd
   docker_container:
     name: desultd
-    image: rehashedsalt/desultd:latest
+    image: docker.io/rehashedsalt/desultd:latest
     networks:
       - name: web
         aliases: [ "desultd" ]

View File

@@ -2,7 +2,7 @@
 - name: docker deploy element-web
   docker_container:
     name: element-web
-    image: vectorim/element-web:latest
+    image: ghcr.io/element-hq/element-web:develop
     env:
       TZ: "America/Chicago"
     networks:

View File

@@ -2,7 +2,7 @@
 - name: docker deploy gitea
   docker_container:
     name: gitea
-    image: gitea/gitea:1
+    image: docker.io/gitea/gitea:1
     env:
       USER_UID: "1002"
       USER_GID: "1002"

View File

@@ -13,7 +13,7 @@
 - name: docker deploy grafana
   docker_container:
     name: grafana
-    image: grafana/grafana-oss:main
+    image: docker.io/grafana/grafana-oss:main
     env:
       TZ: "America/Chicago"
       # This enables logging to STDOUT for log aggregators to more easily hook it
@@ -31,7 +31,7 @@
 - name: docker deploy grafana matrix bridge
   docker_container:
     name: grafana-matrix-bridge
-    image: registry.gitlab.com/hectorjsmith/grafana-matrix-forwarder:latest
+    image: registry.gitlab.com/hctrdev/grafana-matrix-forwarder:latest
     env:
       GMF_MATRIX_USER: "@grafana:desu.ltd"
       GMF_MATRIX_PASSWORD: "{{ secret_grafana_matrix_token }}"

View File

@ -0,0 +1,44 @@
# vim:ft=ansible:
#
# This is a really stupid game, source here:
# https://github.com/Oliveriver/5d-diplomacy-with-multiverse-time-travel
#
- name: set up jellyfin dirs
ansible.builtin.file:
state: directory
owner: 911
group: 911
mode: "0750"
path: "{{ item }}"
with_items:
- /data/jellyfin/config
- /data/jellyfin/cache
tags: [ docker, jellyfin ]
- name: docker deploy jellyfin
docker_container:
name: jellyfin
state: started
image: ghcr.io/jellyfin/jellyfin:latest
user: 911:911
groups:
- 109 # render on Ubuntu systems
env:
JELLYFIN_PublishedServerUrl: "http://jellyfin.desu.ltd"
networks:
- name: web
aliases: [ "jellyfin" ]
# For unproxied use
#ports:
# - 8096/tcp
volumes:
- /data/jellyfin/config:/config
- /data/jellyfin/cache:/cache
- /data/shared/media:/media
devices:
- /dev/dri/renderD128:/dev/dri/renderD128
labels:
traefik.enable: "true"
traefik.http.routers.jellyfin.rule: Host(`jellyfin.desu.ltd`)
traefik.http.routers.jellyfin.entrypoints: web
traefik.http.services.jellyfin.loadbalancer.server.port: "8096"
tags: [ docker, jellyfin ]
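The container above leans on /dev/dri/renderD128 and a hard-coded render GID of 109, so it is worth confirming both on the host before counting on hardware transcode; a sketch, assuming an Ubuntu host like the comment suggests:

# The GID owning renderD128 should match the extra group passed to the container (109 here)
ls -ln /dev/dri/renderD128
getent group render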

View File

@@ -2,14 +2,55 @@
 - name: docker deploy lidarr
   docker_container:
     name: lidarr
-    image: linuxserver/lidarr:latest
+    state: started
+    #image: linuxserver/lidarr:latest
+    image: ghcr.io/hotio/lidarr:pr-plugins
     networks:
       - name: web
         aliases: [ "lidarr" ]
     env:
+      PUID: "911"
+      PGID: "911"
       TZ: "America/Chicago"
+      VPN_ENABLED: "false"
     volumes:
+      # https://github.com/RandomNinjaAtk/arr-scripts?tab=readme-ov-file
+      - /data/lidarr/bin:/usr/local/bin
       - /data/lidarr/config:/config
       - /data/shared/downloads:/data
       - /data/shared/media/music:/music
+    labels:
+      traefik.enable: "true"
+      traefik.http.routers.lidarr.rule: Host(`lidarr.media.desu.ltd`)
+      traefik.http.routers.lidarr.entrypoints: web
   tags: [ docker, lidarr ]
- name: assure slskd cleanup cronjob
ansible.builtin.cron:
user: root
name: slskd-cleanup
state: present
hour: 4
job: "find /data/shared/downloads/soulseek -mtime +7 -print -delete"
tags: [ slskd, cron, cleanup ]
- name: docker deploy slskd
docker_container:
name: lidarr-slskd
state: started
image: ghcr.io/slskd/slskd:latest
user: "911:911"
networks:
- name: web
aliases: [ "slskd" ]
env:
SLSKD_REMOTE_CONFIGURATION: "true"
ports:
- "50300:50300"
volumes:
- /data/slskd:/app
- /data/shared/downloads/soulseek:/app/downloads
labels:
traefik.enable: "true"
traefik.http.routers.lidarr-slskd.rule: Host(`slskd.media.desu.ltd`)
traefik.http.routers.lidarr-slskd.entrypoints: web
traefik.http.services.lidarr-slskd.loadbalancer.server.port: "5030"
tags: [ docker, slskd ]

View File

@ -0,0 +1,39 @@
# vim:ft=ansible:
- name: docker deploy navidrome
docker_container:
name: navidrome
state: started
image: ghcr.io/navidrome/navidrome:latest
user: 911:911
env:
ND_BASEURL: "https://music.desu.ltd"
ND_PROMETHEUS_ENABLED: "true"
ND_LOGLEVEL: "info"
ND_LASTFM_ENABLED: "true"
ND_LASTFM_APIKEY: !vault |
$ANSIBLE_VAULT;1.1;AES256
63333239613931623033656233353537653830623065386632393232316537356261393938323533
6632633034643637653136633235393335303535653136340a363331653839383930396633363133
62313964396161326231376534333064343736633466363962313662353665313230396237666363
6565613939666663300a313462366137363661373839326636613064643032356437376536333366
30366238646363316639373730343336373234313338663261616331666162653362626364323463
3131666231383138623965656163373364326432353137663665
ND_LASTFM_SECRET: !vault |
$ANSIBLE_VAULT;1.1;AES256
39316232373136663435323662333137636635326535643735383734666562303339663134336137
3132613237613436336663303330623334663262313337350a393963653765343262333533373763
37623230393638616535623861333135353038646532343038313865626435623830343361633938
3232646462346163380a616462366435343934326232366233636564626262653965333564363731
66656532663965616561313032646231663366663636663838633535393566363631346535383866
6335623230303333346266306637353061356665383264333266
networks:
- name: web
aliases: [ "navidrome" ]
volumes:
- /data/navidrome/data:/data
- /data/shared/media/music:/music:ro
labels:
traefik.enable: "true"
traefik.http.routers.navidrome.rule: Host(`music.desu.ltd`)
traefik.http.routers.navidrome.entrypoints: web
tags: [ docker, navidrome ]

View File

@@ -2,7 +2,7 @@
 - name: deploy netbox
   module_defaults:
     docker_container:
-      image: netboxcommunity/netbox:v3.1.5
+      image: ghcr.io/netbox-community/netbox:v3.1.5
       state: started
       restart_policy: unless-stopped
       pull: yes

View File

@@ -2,17 +2,7 @@
 - name: docker deploy nextcloud
   docker_container:
     name: nextcloud
-    image: nextcloud:27
-    # The entrypoint workaround is for this issue:
-    #
-    # https://github.com/nextcloud/docker/issues/1414
-    #
-    # This installs imagemagick to allow for SVG support and to clear the last
-    # setup warning in the application.
-    # It can be safely removed upon closure of this issue. I'm just doing it to
-    # make the big bad triangle go away.
-    entrypoint: /bin/sh
-    command: -c "apt-get update; apt-get install -y libmagickcore-6.q16-6-extra; /entrypoint.sh apache2-foreground"
+    image: docker.io/nextcloud:30
     env:
       PHP_UPLOAD_LIMIT: 1024M
     networks:
@@ -23,11 +13,22 @@
       - /data/nextcloud/config:/var/www/html/config
       - /data/nextcloud/themes:/var/www/html/themes
       - /data/nextcloud/data:/var/www/html/data
+      - /data/shared:/shared
   tags: [ docker, nextcloud ]
+# Vanilla Nextcloud cron
 - name: assure nextcloud cron cronjob
   ansible.builtin.cron: user=root name=nextcloud minute=*/5 job="docker exec --user www-data nextcloud php -f /var/www/html/cron.php"
   tags: [ docker, nextcloud, cron ]
+# Plugin crons
+- name: assure nextcloud preview generator cronjob
+  ansible.builtin.cron: user=root name=nextcloud-preview-generator hour=1 minute=10 job="docker exec --user www-data nextcloud php occ preview:pre-generate"
+  tags: [ docker, nextcloud, cron ]
+# Maintenance tasks
 - name: assure nextcloud update cronjob
   ansible.builtin.cron: user=root name=nextcloud-update minute=*/30 job="docker exec --user www-data nextcloud php occ app:update --all"
   tags: [ docker, nextcloud, cron ]
+- name: assure nextcloud db indices cronjob
+  ansible.builtin.cron: user=root name=nextcloud-update-db-inidices hour=1 job="docker exec --user www-data nextcloud php occ db:add-missing-indices"
+  tags: [ docker, nextcloud, cron ]
+- name: assure nextcloud expensive migration cronjob
+  ansible.builtin.cron: user=root name=nextcloud-update-expensive-migration hour=1 minute=30 job="docker exec --user www-data nextcloud php occ db:add-missing-indices"
+  tags: [ docker, nextcloud, cron ]
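The cron entries above just shell into the container as www-data, so the same occ maintenance commands can be run by hand when debugging (container name and commands taken verbatim from the tasks):

docker exec --user www-data nextcloud php occ app:update --all
docker exec --user www-data nextcloud php occ db:add-missing-indices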

View File

@@ -2,10 +2,14 @@
 - name: docker deploy prowlarr
   docker_container:
     name: prowlarr
-    image: linuxserver/prowlarr:develop
+    image: ghcr.io/linuxserver/prowlarr:develop
     networks:
       - name: web
         aliases: [ "prowlarr" ]
     volumes:
       - /data/prowlarr/config:/config
+    labels:
+      traefik.enable: "true"
+      traefik.http.routers.prowlarr.rule: Host(`prowlarr.media.desu.ltd`)
+      traefik.http.routers.prowlarr.entrypoints: web
   tags: [ docker, prowlarr ]

View File

@@ -2,7 +2,7 @@
 - name: docker deploy radarr
   docker_container:
     name: radarr
-    image: linuxserver/radarr:latest
+    image: ghcr.io/linuxserver/radarr:latest
     networks:
       - name: web
         aliases: [ "radarr" ]
@@ -10,4 +10,8 @@
       - /data/radarr/config:/config
       - /data/shared/downloads:/data
       - /data/shared/media/movies:/tv
+    labels:
+      traefik.enable: "true"
+      traefik.http.routers.radarr.rule: Host(`radarr.media.desu.ltd`)
+      traefik.http.routers.radarr.entrypoints: web
   tags: [ docker, radarr ]

View File

@@ -2,7 +2,7 @@
 - name: docker deploy sonarr
   docker_container:
     name: sonarr
-    image: linuxserver/sonarr:latest
+    image: ghcr.io/linuxserver/sonarr:latest
     networks:
       - name: web
         aliases: [ "sonarr" ]
@@ -10,4 +10,8 @@
       - /data/sonarr/config:/config
       - /data/shared/downloads:/data
       - /data/shared/media/shows:/tv
+    labels:
+      traefik.enable: "true"
+      traefik.http.routers.sonarr.rule: Host(`sonarr.media.desu.ltd`)
+      traefik.http.routers.sonarr.entrypoints: web
   tags: [ docker, sonarr ]

View File

@@ -4,7 +4,7 @@
   # NOTE: We depend on the default configuration of Apache here, specifically
   # the default to have server-generated indexes. Makes srv easier to navigate
     name: srv
-    image: httpd:latest
+    image: docker.io/httpd:latest
     networks:
       - name: web
         aliases: [ "srv" ]

View File

@@ -2,7 +2,7 @@
 - name: docker deploy transmission
   docker_container:
     name: transmission
-    image: haugene/transmission-openvpn:latest
+    image: docker.io/haugene/transmission-openvpn:latest
     env:
       USER: transmission
       PASS: "{{ secret_transmission_user_pass }}"
@@ -11,6 +11,8 @@
       OPENVPN_USERNAME: "{{ secret_pia_user }}"
       OPENVPN_PASSWORD: "{{ secret_pia_pass }}"
       LOCAL_NETWORK: 192.168.0.0/16
+    devices:
+      - /dev/net/tun
     capabilities:
       - NET_ADMIN
     ports:
@@ -23,4 +25,9 @@
       - /data/transmission/config:/config
       - /data/shared/downloads:/data
       - /data/transmission/watch:/watch
+    labels:
+      traefik.enable: "true"
+      traefik.http.routers.transmission.rule: Host(`transmission.media.desu.ltd`)
+      traefik.http.routers.transmission.entrypoints: web
+      traefik.http.services.transmission.loadbalancer.server.port: "9091"
   tags: [ docker, transmission ]

View File

@@ -14,7 +14,7 @@ roles:
     version: 2.0.0
   # Upstream: https://github.com/geerlingguy/ansible-role-postgresql
   - src: geerlingguy.postgresql
-    version: 3.5.0
+    version: 3.5.2
   # Upstream: https://github.com/willshersystems/ansible-sshd
   - src: willshersystems.sshd
     version: v0.23.0

View File

@@ -6,6 +6,7 @@
     append: "{{ adminuser_groups_append }}"
     groups: "{{ adminuser_groups + adminuser_groups_extra }}"
     shell: "{{ adminuser_shell }}"
+  tags: [ adminuser ]
 - name: assure admin user ssh key
   ansible.builtin.user:
     name: "{{ adminuser_name }}"
@@ -13,15 +14,20 @@
     ssh_key_type: "{{ adminuser_ssh_key_type }}"
     ssh_key_file: ".ssh/id_{{ adminuser_ssh_key_type }}"
   when: adminuser_ssh_key
+  tags: [ adminuser ]
 - name: assure admin user ssh authorized keys
   authorized_key: user={{ adminuser_name }} key={{ item }}
   loop: "{{ adminuser_ssh_authorized_keys }}"
+  tags: [ adminuser ]
 - name: remove admin user ssh keys
   authorized_key: state=absent user={{ adminuser_name }} key={{ item }}
   loop: "{{ adminuser_ssh_unauthorized_keys }}"
+  tags: [ adminuser ]
 - name: assure admin user pass
   ansible.builtin.user: name={{ adminuser_name }} password={{ adminuser_password }}
   when: adminuser_password is defined
+  tags: [ adminuser ]
 - name: assure admin user sudo rule
   ansible.builtin.lineinfile: path=/etc/sudoers line={{ adminuser_sudo_rule }}
   when: adminuser_sudo
+  tags: [ adminuser ]

@@ -1 +1 @@
-Subproject commit 1a332f6788d4ae24b52948850965358790861432
+Subproject commit 56549b8ac718997c6b5c314636955e46ee5e8cc1

View File

@ -1,4 +0,0 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
- name: install ansible
pip: name=ansible<5,ansible-lint state=latest

View File

@@ -1,12 +1,18 @@
-# Which backup script to use. Configuration is somewhat unique to each script
-backup_script: s3backup
-restore_script: s3restore
 # When to kick off backups using the systemd timer
 backup_time: "*-*-* 02:00:00"
-# What format should the datestamps in the filenames of any backups be in?
-# Defaults to YYYY-MM-DD-hhmm
-# So January 5th, 2021 at 3:41PM would be 2021-01-05-1541
-backup_dateformat: "%Y-%m-%d-%H%M"
+# What variation should the systemd timer have?
+# Default value of "5400" is 1h30min in seconds
+backup_time_randomization: "5400"
+# Should this machine backup?
+# Disabling this variable templates out the scripts, but not the units
+backup_restic: yes
+# Should this machine prune?
+# Be very careful with this -- it's an expensive operation
+backup_restic_prune: no
+# How frequently should we prune?
+backup_restic_prune_time: "*-*-01 12:00:00"
 # S3 configuration for scripts that use it
 # Which bucket to upload the backup to
@@ -20,13 +26,11 @@ backup_s3_aws_secret_access_key: REPLACEME
 # List of files/directories to back up
 # Note that tar is NOT instructed to recurse through symlinks
 # If you want it to do that, end the path with a slash!
-backup_s3backup_list: []
+backup_s3backup_list:
+  - "/etc"
+  - "/home/{{ adminuser_name }}"
 backup_s3backup_list_extra: []
 # List of files/directories to --exclude
-backup_s3backup_exclude_list: []
+backup_s3backup_exclude_list:
+  - "/home/{{ adminuser_name }}/Vaults/*"
 backup_s3backup_exclude_list_extra: []
-# Arguments to pass to tar
-# Note that passing f here is probably a bad idea
-backup_s3backup_tar_args: cz
-backup_s3backup_tar_args_extra: ""
-# The backup URL to use for S3 copies

View File

@@ -4,3 +4,6 @@
 - name: restart backup timer
   ansible.builtin.systemd: name=backup.timer state=restarted daemon_reload=yes
   become: yes
+- name: restart prune timer
+  ansible.builtin.systemd: name=backup-prune.timer state=restarted daemon_reload=yes
+  become: yes

View File

@@ -1,63 +1,51 @@
 #!/usr/bin/env ansible-playbook
 # vim:ft=ansible:
 ---
-- name: template out backup script
-  ansible.builtin.template: src={{ backup_script }}.sh dest=/opt/backup.sh mode=0700 owner=root group=root
-- name: template out analyze script
-  ansible.builtin.template: src={{ backup_script }}-analyze.sh dest=/opt/analyze.sh mode=0700 owner=root group=root
-- name: template out restore script
-  ansible.builtin.template: src={{ restore_script }}.sh dest=/opt/restore.sh mode=0700 owner=root group=root
-- name: configure systemd service
-  ansible.builtin.template: src=backup.service dest=/etc/systemd/system/backup.service mode=0644
-- name: configure systemd timer
-  ansible.builtin.template: src=backup.timer dest=/etc/systemd/system/backup.timer mode=0644
-  notify: restart backup timer
-- name: enable timer
-  ansible.builtin.systemd: name=backup.timer state=started enabled=yes daemon_reload=yes
-- name: deploy kopia
-  block:
-    - name: ensure kopia dirs
-      ansible.builtin.file:
-        state: directory
-        owner: root
-        group: root
-        mode: "0750"
-        path: "{{ item }}"
-      with_items:
-        - /data/kopia/config
-        - /data/kopia/cache
-        - /data/kopia/logs
-    - name: template out password file
-      copy:
-        content: "{{ backup_kopia_password }}"
-        owner: root
-        group: root
-        mode: "0600"
-        dest: /data/kopia/config/repository.config.kopia-password
-    - name: template out configuration file
-      template:
-        src: repository.config.j2
-        owner: root
-        group: root
-        mode: "0600"
-        dest: /data/kopia/config/repository.config
-    - name: deploy kopia
-      community.docker.docker_container:
-        name: kopia
-        image: kopia/kopia:latest
-        env:
-          KOPIA_PASSWORD: "{{ backup_kopia_password }}"
-        command:
-          - "repository"
-          - "connect"
-          - "from-config"
-          - "--file"
-          - "/app/config/repository.config"
-        volumes:
-          - /data/kopia/config:/app/config
-          - /data/kopia/cache:/app/cache
-          - /data/kopia/logs:/app/logs
-          # Shared tmp so Kopia can dump restorable backups to the host
-          - /tmp:/tmp:shared
-          # And a RO mount for the host so it can be backed up
-          - /:/host:ro,rslave
+# Install restic if we can
+- name: install restic
+  block:
+    - name: install restic through apt
+      ansible.builtin.apt: name=restic state=present
+      when: ansible_pkg_mgr == "apt"
+    - name: install restic through rpm-ostree
+      community.general.rpm_ostree_pkg: name=restic state=present
+      when: ansible_os_family == "RedHat" and ansible_pkg_mgr == "atomic_container"
+  tags: [ packages ]
+# The script
+- name: template out backup-related files
+  ansible.builtin.template:
+    src: "{{ item.src }}"
+    dest: "/opt/{{ item.dest | default(item.src, true) }}"
+    mode: 0700
+    owner: root
+    group: root
+  with_items:
+    - src: restic-password
+    - src: restic-wrapper.sh
+      dest: restic-wrapper
+# Backup service/timer definitions
+- name: set up backups
+  block:
+    - name: template out backup script
+      ansible.builtin.template: src=backup.sh dest=/opt/backup.sh mode=0700 owner=root group=root
+    - name: configure systemd service
+      ansible.builtin.template: src=backup.service dest=/etc/systemd/system/backup.service mode=0644
+    - name: configure systemd timer
+      ansible.builtin.template: src=backup.timer dest=/etc/systemd/system/backup.timer mode=0644
+      notify: restart backup timer
+    - name: enable timer
+      ansible.builtin.systemd: name=backup.timer state=started enabled=yes daemon_reload=yes
+  when: backup_restic
+# Prune script
+- name: set up restic prune
+  block:
+    - name: template out prune script
+      ansible.builtin.template: src=backup-prune.sh dest=/opt/backup-prune.sh mode=0700 owner=root group=root
+    - name: configure prune systemd service
+      ansible.builtin.template: src=backup-prune.service dest=/etc/systemd/system/backup-prune.service mode=0644
+    - name: configure prune systemd timer
+      ansible.builtin.template: src=backup-prune.timer dest=/etc/systemd/system/backup-prune.timer mode=0644
+      notify: restart prune timer
+    - name: enable prune timer
+      ansible.builtin.systemd: name=backup-prune.timer state=started enabled=yes daemon_reload=yes
+  when: backup_restic_prune

View File

@ -0,0 +1,18 @@
# vim:ft=systemd
[Unit]
Description=Backup prune service
After=network-online.target
Wants=network-online.target
StartLimitInterval=3600
StartLimitBurst=2
[Service]
Type=oneshot
#MemoryMax=512M
Environment="GOGC=20"
ExecStart=/opt/backup-prune.sh
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,11 @@
#! /bin/sh
#
# backup-prune.sh
# An Ansible-managed script to prune restic backups every now and again
#
set -e
/opt/restic-wrapper \
--verbose \
prune

View File

@ -0,0 +1,10 @@
# vim:ft=systemd
[Unit]
Description=Backup prune timer
[Timer]
Persistent=true
OnCalendar={{ backup_restic_prune_time }}
[Install]
WantedBy=timers.target

View File

@@ -3,11 +3,17 @@
 Description=Nightly backup service
 After=network-online.target
 Wants=network-online.target
+StartLimitInterval=600
+StartLimitBurst=5
 [Service]
 Type=oneshot
-MemoryMax=256M
+#MemoryMax=512M
+Environment="GOGC=20"
 ExecStart=/opt/backup.sh
+Restart=on-failure
+RestartSec=5
+RestartSteps=10
 [Install]
 WantedBy=multi-user.target

View File

@ -0,0 +1,118 @@
#! /bin/bash
#
# backup.sh
# Ansible-managed backup script that uses restic to automate machine backups to
# an S3 bucket. Intelligently handles a few extra apps, too.
#
# NOTICE: DO NOT MODIFY THIS FILE
# Any changes made will be clobbered by Ansible
# Please make any configuration changes in the main repo
#
set -e
# Directories to backup
# Ansible will determine the entries here
# We use a bash array because it affords us some level of sanitization, enough
# to let us back up items whose paths contain spaces
declare -a DIRS
{% for item in backup_s3backup_list + backup_s3backup_list_extra %}
DIRS+=("{{ item }}")
{% endfor %}
# End directory manual configuration
# Helper functions
backup() {
# Takes a file or directory to backup and backs it up
[ -z "$*" ] && return 1
for dir in "$@"; do
echo "- $dir"
done
# First, we remove stale locks. This command will only remove locks that have not been
# updated in the last half hour. By default, restic updates them during an ongoing
# operation every 5 minutes, so this should be perfectly fine to do.
# What I'm not sure of (but should be fine because we auto-restart if need be) is if two
# processes doing this concurrently will cause issues. I'd hope not but you never know.
# restic-unlock(1)
/opt/restic-wrapper \
--verbose \
unlock
# Back up everything in the $DIRS array (which was passed as args)
# This results in some level of pollution with regard to what paths are backed up
# (especially on ostree systems where we do the etc diff) but that's syntactic and
# we can script around it.
/opt/restic-wrapper \
--verbose \
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
--exclude="{{ item }}" \
{% endfor %}
--exclude="/data/**/backup" \
--exclude="/data/**/backups" \
--exclude="*.bak" \
--exclude="*.tmp" \
--exclude="*.swp" \
--retry-lock=3h \
backup \
"$@"
# In addition, we should also prune our backups
# https://restic.readthedocs.io/en/stable/060_forget.html
# --keep-daily n Keeps daily backups for the last n days
# --keep-weekly n Keeps weekly backups for the last n weeks
# --keep-monthly n Keeps monthly backups for the last n months
# --keep-tag foo Keeps all snapshots tagged with "foo"
# --host "$HOSTNAME" Only act on *our* snapshots. We assume other machines are taking
# care of their own houses.
/opt/restic-wrapper \
--verbose \
--retry-lock=3h \
forget \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--keep-tag noremove \
--host "$HOSTNAME"
}
# Dump Postgres DBs, if possible
if command -v psql > /dev/null 2>&1; then
# Put down a place for us to store backups, if we don't have it already
backupdir="/opt/postgres-backups"
mkdir -p "$backupdir"
# Populate a list of databases
declare -a DATABASES
while read line; do
DATABASES+=("$line")
done < <(sudo -u postgres psql -t -A -c "SELECT datname FROM pg_database where datname not in ('template0', 'template1', 'postgres');" 2>/dev/null)
# pgdump all DBs, compress them, and pipe straight up to S3
echo "Commencing backup on the following databases:"
for dir in "${DATABASES[@]}"; do
echo "- $dir"
done
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
for db in "${DATABASES[@]}"; do
echo "Backing up $db"
path="$backupdir/$db.pgsql.gz"
sudo -u postgres pg_dump "$db" \
| gzip -v9 \
> "$path"
DIRS+=("$path")
done
fi
# Tar up all items in the backup list, recursively, and pipe them straight
# up to S3
if [ -n "${DIRS[*]}" ]; then
echo "Commencing backup on the following items:"
for dir in "${DIRS[@]}"; do
echo "- $dir"
done
echo "Will ignore the following items:"
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
echo "- {{ item }}"
{% endfor %}
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
backup "${DIRS[@]}"
fi
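Once the role has templated the units, the script above can be smoke-tested without waiting for the nightly timer; a minimal sketch, assuming backup.service and backup.timer landed on the host as defined earlier:

systemctl start backup.service
journalctl -u backup.service -f
systemctl list-timers 'backup*'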

View File

@@ -5,6 +5,7 @@ Description=Nightly backup timer
 [Timer]
 Persistent=true
 OnCalendar={{ backup_time }}
+RandomizedDelaySec={{ backup_time_randomization }}
 [Install]
 WantedBy=timers.target

View File

@ -1,21 +0,0 @@
{
"storage": {
"type": "b2",
"config": {
"bucket": "desultd-kopia",
"keyID": "{{ backup_kopia_access_key_id }}",
"key": "{{ backup_kopia_secret_access_key }}"
}
},
"caching": {
"cacheDirectory": "/app/cache/cachedir",
"maxCacheSize": 5242880000,
"maxMetadataCacheSize": 5242880000,
"maxListCacheDuration": 30
},
"hostname": "{{ inventory_hostname }}",
"username": "salt",
"description": "Desu LTD Backups",
"enableActions": false,
"formatBlobCacheDuration": 900000000000
}

View File

@ -0,0 +1 @@
{{ backup_restic_password }}

View File

@ -0,0 +1,11 @@
#! /bin/sh
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
export RESTIC_CACHE_DIR="/var/cache/restic"
mkdir -p "$RESTIC_CACHE_DIR"
chown root: "$RESTIC_CACHE_DIR"
chmod 0700 "$RESTIC_CACHE_DIR"
exec nice -n 10 restic \
-r "s3:{{ backup_s3_aws_endpoint_url }}/{{ backup_s3_bucket }}/restic" \
-p /opt/restic-password \
"$@"

View File

@ -1,17 +0,0 @@
#! /bin/bash
#
# s3backup-analyze.sh
# A companion script to s3backup to analyze disk usage for backups
# NOTICE: DO NOT MODIFY THIS FILE
# Any changes made will be clobbered by Ansible
# Please make any configuration changes in the main repo
exec ncdu \
{% for item in backup_s3backup_list + backup_s3backup_list_extra %}
"{{ item }}" \
{% endfor %}
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
--exclude "{{ item }}" \
{% endfor %}
-r

View File

@ -1,115 +0,0 @@
#! /bin/bash
#
# s3backup.sh
# General-purpose, Ansible-managed backup script to push directories, DBs, and
# more up to an S3 bucket
#
# NOTICE: THIS FILE CONTAINS SECRETS
# This file may contain the following secrets depending on configuration:
# * An AWS access key
# * An AWS session token
# These are NOT things you want arbitrary readers to access! Ansible will
# attempt to ensure this file has 0700 permissions, but that won't stop you
# from changing that yourself
# DO NOT ALLOW THIS FILE TO BE READ BY NON-ROOT USERS
# NOTICE: DO NOT MODIFY THIS FILE
# Any changes made will be clobbered by Ansible
# Please make any configuration changes in the main repo
set -e
# AWS S3 configuration
# NOTE: THIS IS SECRET INFORMATION
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
# Directories to backup
# Ansible will determine the entries here
# We use a bash array because it affords us some level of sanitization, enough
# to let us back up items whose paths contain spaces
declare -a DIRS
{% for item in backup_s3backup_list + backup_s3backup_list_extra %}
DIRS+=("{{ item }}")
{% endfor %}
# End directory manual configuration
# If we have ostree, add diff'd configs to the list, too
if command -v ostree > /dev/null 2>&1; then
for file in $(
ostree admin config-diff 2>/dev/null | \
grep -oP '^[A|M]\s*\K.*'
); do
DIRS+=("/etc/$file")
done
fi
# Helper functions
backup() {
# Takes a file or directory to backup and backs it up
[ -z "$1" ] && return 1
dir="$1"
echo "- $dir"
nice -n 10 tar {{ backup_s3backup_tar_args }}{{ backup_s3backup_tar_args_extra }} \
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
--exclude "{{ item }}" \
{% endfor %}
"$dir" \
| aws s3 cp --expected-size 274877906944 - \
{% if backup_s3_aws_endpoint_url is defined %}
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
{% endif %}
"s3://{{ backup_s3_bucket }}/$HOSTNAME/$dir/$(date "+{{ backup_dateformat }}").tar.gz"
}
# Tar up all items in the backup list, recursively, and pipe them straight
# up to S3
if [ -n "${DIRS[*]}" ]; then
echo "Commencing backup on the following items:"
for dir in "${DIRS[@]}"; do
echo "- $dir"
done
echo "Will ignore the following items:"
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
echo "- {{ item }}"
{% endfor %}
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
for dir in "${DIRS[@]}"; do
if [ "$dir" == "/data" ]; then
for datadir in "$dir"/*; do
[ -e "$datadir" ] && backup "$datadir"
done
else
backup "$dir"
fi
done
fi
# Dump Postgres DBs, if possible
if command -v psql > /dev/null 2>&1; then
# Populate a list of databases
declare -a DATABASES
while read line; do
DATABASES+=("$line")
done < <(sudo -u postgres psql -t -A -c "SELECT datname FROM pg_database where datname not in ('template0', 'template1', 'postgres');" 2>/dev/null)
# pgdump all DBs, compress them, and pipe straight up to S3
echo "Commencing backup on the following databases:"
for dir in "${DATABASES[@]}"; do
echo "- $dir"
done
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
for db in "${DATABASES[@]}"; do
echo "Backing up $db"
sudo -u postgres pg_dump "$db" \
| gzip -v9 \
| aws s3 cp - \
{% if backup_s3_aws_endpoint_url is defined %}
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
{% endif %}
"s3://{{ backup_s3_bucket }}/$HOSTNAME/pgdump/$db/$(date "+{{ backup_dateformat }}").pgsql.gz"
done
fi

View File

@ -1,47 +0,0 @@
#! /bin/bash
#
# s3pgdump.sh
# General-purpose, Ansible-managed backup script to dump PostgreSQL DBs to
# an S3 bucket
#
# NOTICE: THIS FILE CONTAINS SECRETS
# This file may contain the following secrets depending on configuration:
# * An AWS access key
# * An AWS session token
# These are NOT things you want arbitrary readers to access! Ansible will
# attempt to ensure this file has 0700 permissions, but that won't stop you
# from changing that yourself
# DO NOT ALLOW THIS FILE TO BE READ BY NON-ROOT USERS
# NOTICE: DO NOT MODIFY THIS FILE
# Any changes made will be clobbered by Ansible
# Please make any configuration changes in the main repo
set -e
# AWS S3 configuration
# NOTE: THIS IS SECRET INFORMATION
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
# Populate a list of databases
declare -a DATABASES
while read line; do
DATABASES+=("$line")
done < <(sudo -u postgres psql -t -A -c "SELECT datname FROM pg_database where datname not in ('template0', 'template1', 'postgres');" 2>/dev/null)
# pgdump all DBs, compress them, and pipe straight up to S3
echo "Commencing backup on the following databases:"
for dir in "${DATABASES[@]}"; do
echo "- $dir"
done
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
for db in "${DATABASES[@]}"; do
echo "Backing up $db"
sudo -u postgres pg_dump "$db" \
| gzip -v9 \
| aws s3 cp - \
"s3://{{ backup_s3_bucket }}/{{ inventory_hostname }}/$db-$(date "+{{ backup_dateformat }}").pgsql.gz"
done

View File

@ -1,72 +0,0 @@
#! /bin/bash
#
# s3restore.sh
# Companion script to s3backup.sh, this script obtains a listing of recent
# backups and offers the user a choice to restore from.
#
# This script offers no automation; it is intended for use by hand.
#
# NOTICE: THIS FILE CONTAINS SECRETS
# This file may contain the following secrets depending on configuration:
# * An AWS access key
# * An AWS session token
# These are NOT things you want arbitrary readers to access! Ansible will
# attempt to ensure this file has 0700 permissions, but that won't stop you
# from changing that yourself
# DO NOT ALLOW THIS FILE TO BE READ BY NON-ROOT USERS
# NOTICE: DO NOT MODIFY THIS FILE
# Any changes made will be clobbered by Ansible
# Please make any configuration changes in the main repo
set -e
url="s3://{{ backup_s3_bucket}}/$HOSTNAME/"
# AWS S3 configuration
# NOTE: THIS IS SECRET INFORMATION
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
# Obtain a list possible restorable for this host
declare -a BACKUPS
printf "Querying S3 for restoreable backups (\e[35m$url\e[0m)...\n"
while read line; do
filename="$(echo "$line" | awk '{print $NF}')"
BACKUPS+=("$filename")
done < <(aws s3 \
{% if backup_s3_aws_endpoint_url is defined %}
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
{% endif %}
ls "$url")
# Present the user with some options
printf "Possible restorable backups:\n"
printf "\e[37m\t%s\t%s\n\e[0m" "Index" "Filename"
for index in "${!BACKUPS[@]}"; do
printf "\t\e[32m%s\e[0m\t\e[34m%s\e[0m\n" "$index" "${BACKUPS[$index]}"
done
# Ensure we can write to pwd
if ! [ -w "$PWD" ]; then
printf "To restore a backup, please navigate to a writeable directory\n"
exit 1
fi
# Query for a backup to pull down
printf "Please select a backup by \e[32mindex\e[0m to pull down\n"
printf "It will be copied into the current directory as a tarball\n"
read -p "?" restoreindex
# Sanity check user input
if [ -z "${BACKUPS[$restoreindex]}" ]; then
printf "Invalid selection, aborting: $restoreindex\n"
exit 2
fi
# Copy the thing
printf "Pulling backup...\n"
aws s3 \
{% if backup_s3_aws_endpoint_url is defined %}
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
{% endif %}
cp "$url${BACKUPS[$restoreindex]}" ./

View File

@ -11,7 +11,6 @@
  - apt-file
  - aptitude
  - at
- - awscli
  - htop
  - jq
  - ncdu
@ -19,8 +18,6 @@
  - nfs-common
  - openssh-server
  - pwgen
- - python-is-python3 # God damn you Nextcloud role
- - python2 # Needed for some legacy crap
  - python3-apt
  - python3-boto
  - python3-boto3
@ -44,10 +41,7 @@
  - name: configure rpm-ostree packages
    community.general.rpm_ostree_pkg:
      name:
-       - awscli
        - htop
-       - ibm-plex-fonts-all
        - ncdu
-       - screen
        - vim
    when: ansible_os_family == "RedHat" and ansible_pkg_mgr == "atomic_container"

View File

@ -13,6 +13,14 @@ alias ls="ls $lsarguments"
alias ll="ls -Al --file-type $lsarguments" alias ll="ls -Al --file-type $lsarguments"
unset lsarguments unset lsarguments
# Extra shell aliases for things
resticwrapper="/opt/restic-wrapper"
if [ -e "$resticwrapper" ]; then
alias r="$resticwrapper"
alias r-snapshots="$resticwrapper snapshots -g host -c"
alias r-prune="$resticwrapper prune"
fi
# Set some bash-specific stuff # Set some bash-specific stuff
[ "${BASH-}" ] && [ "$BASH" != "/bin/sh" ] || return [ "${BASH-}" ] && [ "$BASH" != "/bin/sh" ] || return
# Like a fancy prompt # Like a fancy prompt

View File

@ -148,22 +148,60 @@ desktop_apt_packages_remove_extra: []
  desktop_apt_debs: []
  desktop_apt_debs_extra: []
- desktop_flatpak_remotes:
-   - name: flathub
-     url: "https://dl.flathub.org/repo/flathub.flatpakrepo"
-   - name: flathub-beta
-     url: "https://flathub.org/beta-repo/flathub-beta.flatpakrepo"
- desktop_flatpak_remotes_extra: []
+ desktop_ostree_layered_packages:
+   - akmod-v4l2loopback # Used by OBS for proper virtual webcam
+   - cava # Sadly does not enable functionality in waybar :<
+   - cryfs # Used for vaults
+   - foot # Wayblue ships Kitty but I don't like the dev direction
+   - htop # For some reason not the default
+   - ibm-plex-fonts-all
+   - iotop # Requires uncontainerized access to the host
+   - libvirt
+   - ncdu
+   - NetworkManager-tui
+   - obs-studio # Has to be installed native for virtual webcam
+   - restic # Also called in via the backup role, but doing this here saves a deployment
+   - vim # It's just way too much hassle that this isn't installed by default
+   - virt-manager # VMs, baby
+   - ydotool # Must be layered in and configured since it's a hw emulator thing
+   - zerotier-one # Ideally layered in since it's a network daemon
+ desktop_ostree_layered_packages_extra: []
+ desktop_ostree_removed_packages:
+   - firefox
+   - firefox-langpacks
+ desktop_ostree_removed_packages_extra: []
  desktop_flatpak_packages:
    - remote: flathub
      packages:
-       - com.discordapp.Discord
-       - com.obsproject.Studio
+       - com.bambulab.BambuStudio
+       - com.github.Matoking.protontricks
+       - com.github.tchx84.Flatseal
+       - com.nextcloud.desktopclient.nextcloud
+       - com.spotify.Client
+       - com.valvesoftware.Steam
+       - com.visualstudio.code
+       - com.vscodium.codium
+       - dev.vencord.Vesktop
+       - im.riot.Riot
+       - io.freetubeapp.FreeTube
+       - io.github.Cockatrice.cockatrice
+       - io.github.hydrusnetwork.hydrus
+       - io.mpv.Mpv
+       - md.obsidian.Obsidian
+       - net.lutris.Lutris
        - net.minetest.Minetest
        - org.DolphinEmu.dolphin-emu
+       - org.freecad.FreeCAD
+       - org.gimp.GIMP
+       - org.gnucash.GnuCash
+       - org.keepassxc.KeePassXC
+       - org.libreoffice.LibreOffice
        - org.mozilla.firefox
+       - org.mozilla.Thunderbird
+       - org.openscad.OpenSCAD
+       - org.qbittorrent.qBittorrent
-   - remote: flathub-beta
-     packages:
-       - net.lutris.Lutris
+ # - remote: unmojang
+ # packages:
+ # - org.unmojang.FjordLauncher
  desktop_flatpak_packages_extra: []

View File

@ -0,0 +1,5 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
dependencies:
- role: flatpak

View File

@ -27,14 +27,16 @@
ansible.builtin.apt: deb="{{ item }}" ansible.builtin.apt: deb="{{ item }}"
loop: "{{ desktop_apt_debs + desktop_apt_debs_extra }}" loop: "{{ desktop_apt_debs + desktop_apt_debs_extra }}"
when: ansible_pkg_mgr == "apt" when: ansible_pkg_mgr == "apt"
- name: configure ostree
block:
- name: configure layered packages for ostree
community.general.rpm_ostree_pkg: name="{{ desktop_ostree_layered_packages + desktop_ostree_layered_packages_extra }}"
- name: configure removed base packages for ostree
community.general.rpm_ostree_pkg: name="{{ desktop_ostree_removed_packages + desktop_ostree_removed_packages_extra }}" state=absent
when: ansible_os_family == "RedHat" and ansible_pkg_mgr == "atomic_container"
- name: configure pip3 packages - name: configure pip3 packages
ansible.builtin.pip: executable=/usr/bin/pip3 state=latest name="{{ desktop_pip3_packages + desktop_pip3_packages_extra }}" ansible.builtin.pip: executable=/usr/bin/pip3 state=latest name="{{ desktop_pip3_packages + desktop_pip3_packages_extra }}"
when: ansible_os_family != "Gentoo" when: ansible_pkg_mgr == "apt"
- name: configure flatpak - name: configure installed flatpaks
block: flatpak: name="{{ item.packages }}" state=present remote="{{ item.remote | default('flathub', true) }}"
- name: configure flatpak remotes with_items: "{{ desktop_flatpak_packages + desktop_flatpak_packages_extra }}"
flatpak_remote: name="{{ item.name }}" state=present flatpakrepo_url="{{ item.url }}"
with_items: "{{ desktop_flatpak_remotes + desktop_flatpak_remotes_extra }}"
- name: configure installed flatpaks
flatpak: name="{{ item.packages }}" state=present remote="{{ item.remote | default('flathub', true) }}"
with_items: "{{ desktop_flatpak_packages + desktop_flatpak_packages_extra }}"

View File

@ -1,41 +0,0 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
tmodloader_name: generic
# Container settings
tmodloader_uid: 1521
tmodloader_gid: 1521
tmodloader_state: started
tmodloader_image: rehashedsalt/tmodloader-docker:bleeding
tmodloader_restart_policy: unless-stopped
tmodloader_timezone: "America/Chicago"
# Container network settings
tmodloader_external_port: "7777"
tmodloader_data_prefix: "/data/terraria/{{ tmodloader_name }}"
# Server configuration
# We have two variables here; things you might not want to change and things
# that you probably will
tmodloader_config:
autocreate: "3"
difficulty: "1"
secure: "0"
tmodloader_config_extra:
maxplayers: "8"
motd: "Deployed via Ansible edition"
password: "dicks"
# Server configuration specific to this Ansible role
# DO NOT CHANGE
tmodloader_config_internal:
port: "7777"
world: "/terraria/ModLoader/Worlds/World.wld"
worldpath: "/terraria/ModLoader/Worlds"
# A list of mods to acquire
# The default server of mirror.sgkoi.dev is the official tModLoader mod browser
# mirror
tmodloader_mod_server: "https://mirror.sgkoi.dev"
# tmodloader_mods:
# - "CalamityMod"
# - "RecipeBrowser"
# - "BossChecklist"
tmodloader_mods: []

View File

@ -1,7 +0,0 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
- name: restart tmodloader {{ tmodloader_name }}
docker_container:
name: "tmodloader-{{ tmodloader_name }}"
state: started
restart: yes

View File

@ -1,76 +0,0 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- name: assure tmodloader {{ tmodloader_name }} directory structure
ansible.builtin.file:
state: directory
owner: "{{ tmodloader_uid }}"
group: "{{ tmodloader_gid }}"
mode: "0750"
path: "{{ item }}"
# We recurse here since these directories and all of their contents
# should be read-write by the container without exception.
recurse: yes
with_items:
- "{{ tmodloader_data_prefix }}/backups"
- "{{ tmodloader_data_prefix }}/data"
- "{{ tmodloader_data_prefix }}/data/ModLoader"
- "{{ tmodloader_data_prefix }}/data/ModLoader/Mods"
- "{{ tmodloader_data_prefix }}/data/ModLoader/Worlds"
- "{{ tmodloader_data_prefix }}/logs"
- name: assure mods
ansible.builtin.shell:
cmd: "curl -L \"{{ tmodloader_mod_server }}\" -o \"{{ item }}.tmod\" && chown \"{{ tmodloader_uid }}:{{ tmodloader_gid }}\" \"{{ item }}.tmod\""
chdir: "{{ tmodloader_data_prefix }}/data/ModLoader/Mods"
creates: "{{ tmodloader_data_prefix }}/data/ModLoader/Mods/{{ item }}.tmod"
with_list: "{{ tmodloader_mods }}"
when: tmodloader_mods
notify: "restart tmodloader {{ tmodloader_name }}"
- name: enable mods
ansible.builtin.template:
src: enabled.json
dest: "{{ tmodloader_data_prefix }}/data/ModLoader/Mods/enabled.json"
owner: "{{ tmodloader_uid }}"
group: "{{ tmodloader_gid }}"
mode: "0750"
when: tmodloader_mods
notify: "restart tmodloader {{ tmodloader_name }}"
- name: assure tmodloader {{ tmodloader_name }} files
ansible.builtin.file:
state: touch
owner: "{{ tmodloader_uid }}"
group: "{{ tmodloader_gid }}"
mode: "0750"
path: "{{ item }}"
with_items:
- "{{ tmodloader_data_prefix }}/config.txt"
- name: assure {{ tmodloader_name }} configs
ansible.builtin.lineinfile:
state: present
regexp: "^{{ item.key }}"
line: "{{ item.key }}={{ item.value }}"
path: "{{ tmodloader_data_prefix }}/config.txt"
with_dict: "{{ tmodloader_config | combine(tmodloader_config_extra) | combine(tmodloader_config_internal) }}"
notify: "restart tmodloader {{ tmodloader_name }}"
- name: assure {{ tmodloader_name }} backup cronjob
ansible.builtin.cron:
user: root
name: "terraria-{{ tmodloader_name }}"
minute: "*/30"
job: "tar czvf \"{{ tmodloader_data_prefix }}/backups/world-$(date +%Y-%m-%d-%H%M).tgz\" \"{{ tmodloader_data_prefix }}/data/ModLoader/Worlds\" \"{{ tmodloader_data_prefix }}/data/tModLoader/Worlds\""
- name: assure tmodloader {{ tmodloader_name }} container
docker_container:
name: "tmodloader-{{ tmodloader_name }}"
state: started
image: "{{ tmodloader_image }}"
restart_policy: "{{ tmodloader_restart_policy }}"
pull: yes
user: "{{ tmodloader_uid }}:{{ tmodloader_gid }}"
env:
TZ: "{{ tmodloader_timezone }}"
ports:
- "{{ tmodloader_external_port }}:7777"
volumes:
- "{{ tmodloader_data_prefix }}/data:/terraria"
- "{{ tmodloader_data_prefix }}/config.txt:/terraria/config.txt"
- "{{ tmodloader_data_prefix }}/logs:/terraria-server/tModLoader-Logs"

View File

@ -1,6 +0,0 @@
[
{% for item in tmodloader_mods[1:] %}
"{{ item }}",
{% endfor %}
"{{ tmodloader_mods[0] }}"
]

View File

@ -0,0 +1,7 @@
#!/usr/bin/env ansible-playbook
---
flatpak_remotes:
- name: flathub
state: present
url: "https://dl.flathub.org/repo/flathub.flatpakrepo"
flatpak_remotes_extra: []
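Hosts or groups that need more than flathub can append to flatpak_remotes_extra, e.g. (hypothetical host var, reusing the flathub-beta URL the old desktop defaults pointed at):

flatpak_remotes_extra:
  - name: flathub-beta
    state: present
    url: "https://flathub.org/beta-repo/flathub-beta.flatpakrepo"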

View File

@ -0,0 +1,17 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- name: install flatpak on apt distros
when: ansible_pkg_mgr == "apt"
block:
- name: install flatpak packages
ansible.builtin.apt:
state: present
pkg:
- flatpak
- name: configure flatpak remotes
with_items: "{{ flatpak_remotes + flatpak_remotes_extra }}"
community.general.flatpak_remote:
name: "{{ item.name }}"
state: "{{ item.state }}"
flatpakrepo_url: "{{ item.url }}"

View File

@ -0,0 +1,28 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
# What is the name of the server? This should be unique per instance
terraria_server_name: "generic"
# Remove this Terraria server instead of provision it?
terraria_server_remove: no
# What mods should be enabled?
terraria_mods: []
# Basic server configuration
terraria_shutdown_message: "Server is going down NOW!"
terraria_motd: "Literally playing Minecraft"
terraria_password: "dicks"
terraria_port: "7777"
terraria_world_name: "World"
# Leaving this value blank rolls one for us
terraria_world_seed: ""
# 1 Small
# 2 Medium
# 3 Large
terraria_world_size: "3"
# 0 Normal
# 1 Expert
# 2 Master
# 3 Journey
terraria_world_difficulty: "1"

View File

@ -0,0 +1,58 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
#
# Docs available here:
# https://github.com/JACOBSMILE/tmodloader1.4
#
# If you need to run a command in this container:
# docker exec tmodloader inject "say Hello World!"
#
---
- name: set backups tmodloader - {{ terraria_server_name }}
vars:
backup_dirs:
- "/data/tmodloader/{{ terraria_server_name }}/data/tModLoader/Worlds"
backup_dest: "/data/tmodloader/{{ terraria_server_name }}/backups"
ansible.builtin.cron:
user: root
name: "terraria-{{ terraria_server_name }}-backup"
state: "{{ 'absent' if terraria_server_remove else 'present' }}"
minute: "*/15"
job: "tar czvf \"{{ backup_dest }}/world-$(date +\\%Y-\\%m-\\%d-\\%H\\%M).tgz\" {{ backup_dirs | join(' ') }} && find {{ backup_dest }}/ -type f -iname \\*.tgz -mtime +1 -print -delete"
tags: [ docker, tmodloader, cron, backup, tar ]
- name: assure backups dir tmodloader - {{ terraria_server_name }}
ansible.builtin.file:
path: "/data/tmodloader/{{ terraria_server_name }}/backups"
state: directory
owner: root
group: root
mode: "0700"
tags: [ docker, tmodloader, file, directory, backup ]
- name: docker deploy tmodloader - {{ terraria_server_name }}
community.general.docker_container:
name: tmodloader-{{ terraria_server_name }}
state: "{{ 'absent' if terraria_server_remove else 'started' }}"
image: docker.io/jacobsmile/tmodloader1.4:latest
env:
TMOD_AUTODOWNLOAD: "{{ terraria_mods | sort() | join(',') }}"
TMOD_ENABLEDMODS: "{{ terraria_mods | sort() | join(',') }}"
TMOD_SHUTDOWN_MESSAGE: "{{ terraria_shutdown_message }}"
TMOD_MOTD: "{{ terraria_motd }}"
TMOD_PASS: "{{ terraria_password }}"
TMOD_WORLDNAME: "{{ terraria_world_name }}"
TMOD_WORLDSEED: "{{ terraria_world_seed }}"
TMOD_WORLDSIZE: "{{ terraria_world_size }}"
TMOD_DIFFICULTY: "{{ terraria_world_difficulty }}"
TMOD_PORT: "7777"
# In theory, this allows you to change how much data the server sends
# This is in Hz. Crank it lower to throttle it at the cost of NPC jitteriness
#TMOD_NPCSTREAM: "60"
ports:
- "{{ terraria_port }}:7777/tcp"
- "{{ terraria_port }}:7777/udp"
volumes:
- "/data/tmodloader/{{ terraria_server_name }}/data:/data"
- "/data/tmodloader/{{ terraria_server_name }}/logs:/terraria-server/tModLoader-Logs"
- "/data/tmodloader/{{ terraria_server_name }}/dotnet:/terraria-server/dotnet"
tags: [ docker, tmodloader ]
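As a sanity check on the escaping in the backup cron task above: rendered with the default terraria_server_name of "generic", the crontab entry comes out roughly as the line below (the backslash-percent sequences survive because % is special in crontabs):

*/15 * * * * tar czvf "/data/tmodloader/generic/backups/world-$(date +\%Y-\%m-\%d-\%H\%M).tgz" /data/tmodloader/generic/data/tModLoader/Worlds && find /data/tmodloader/generic/backups/ -type f -iname \*.tgz -mtime +1 -print -delete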

View File

@ -0,0 +1,37 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
# Core container configuration
ingress_container_image: docker.io/traefik:latest
ingress_container_name: ingress
# Core service configuration
ingress_container_tls: no
ingress_container_dashboard: no
# Secondary container configuration
ingress_container_ports:
- 80:80
- 443:443
ingress_container_ports_dashboard:
- 8080:8080
ingress_container_timezone: America/Chicago
ingress_container_docker_socket_location: "/var/run/docker.sock"
# Command args
ingress_command_args:
- "--api.dashboard=true"
- "--providers.docker"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ingress_command_args_tls:
- "--entrypoints.web.address=:443"
- "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
- "--certificatesresolvers.letsencrypt.acme.email=rehashedsalt@cock.li"
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
ingress_command_args_extra: []
# Network configuration
ingress_container_networks:
- name: web
aliases: [ "ingress" ]

View File

@ -0,0 +1,16 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
- name: assure traefik container
docker_container:
name: "{{ ingress_container_name }}"
image: "{{ ingress_container_image }}"
restart_policy: unless-stopped
command: "{{ ingress_command_args + ingress_command_args_tls + ingress_command_args_extra if ingress_container_tls else ingress_command_args + ingress_command_args_extra }}"
env:
TZ: "{{ ingress_container_timezone }}"
networks: "{{ ingress_container_networks }}"
ports: "{{ ingress_container_ports + ingress_container_ports_dashboard if ingress_container_dashboard else ingress_container_ports }}"
volumes:
- "{{ ingress_container_docker_socket_location }}:/var/run/docker.sock"
- "/data/traefik/letsencrypt:/letsencrypt"
tags: [ docker, ingress, traefik ]
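Because the Traefik variant runs with exposedbydefault=false, a backend container has to opt in via labels. A minimal sketch of such a task under that assumption (container name, router name, and hostname are illustrative, not part of this role):

- name: assure example whoami container
  docker_container:
    name: whoami
    image: docker.io/traefik/whoami:latest
    restart_policy: unless-stopped
    networks:
      - name: web
    labels:
      traefik.enable: "true"
      traefik.http.routers.whoami.rule: "Host(`whoami.example.com`)"
      traefik.http.routers.whoami.entrypoints: "web"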

View File

@ -2,7 +2,7 @@
  # vim:ft=ansible:
  # Core container configuration
- ingress_container_image: jonasal/nginx-certbot:latest
+ ingress_container_image: docker.io/jonasal/nginx-certbot:latest
  ingress_container_name: ingress
  # Secondary container configuration
@ -21,6 +21,12 @@ ingress_container_networks:
  # Certbot configuration
  ingress_container_certbot_email: rehashedsalt@cock.li
+ # Volumes
+ ingress_container_volumes:
+   - "{{ ingress_container_persist_dir }}/letsencrypt:/etc/letsencrypt"
+   - "{{ ingress_container_persist_dir }}/user_conf.d:{{ ingress_container_config_mount }}:ro"
+ ingress_container_volumes_extra: []
  # General Nginx configuration
  ingress_listen_args: "443 http2 ssl"
  ingress_resolver: 8.8.8.8

View File

@ -3,3 +3,8 @@
  - name: restart ingress container
    docker_container: name="{{ ingress_container_name }}" state=started restart=yes
    become: yes
+ - name: reload ingress container
+   community.docker.docker_container_exec:
+     container: "{{ ingress_container_name }}"
+     command: nginx -s reload
+   become: yes

View File

@ -5,9 +5,6 @@
    with_items:
      - letsencrypt
      - user_conf.d
- - name: template out ingress configuration file
-   ansible.builtin.template: src=vhosts.conf.j2 dest="{{ ingress_container_persist_dir }}/user_conf.d/vhosts.conf" mode="0640"
-   notify: restart ingress container
  - name: assure ingress container
    docker_container:
      name: ingress
@ -17,6 +14,14 @@
        CERTBOT_EMAIL: "{{ ingress_container_certbot_email }}"
      networks: "{{ ingress_container_networks }}"
      ports: "{{ ingress_container_ports }}"
-     volumes:
-       - "{{ ingress_container_persist_dir }}/letsencrypt:/etc/letsencrypt"
-       - "{{ ingress_container_persist_dir }}/user_conf.d:{{ ingress_container_config_mount }}:ro"
+     volumes: "{{ ingress_container_volumes + ingress_container_volumes_extra }}"
+ - name: template out configuration
+   block:
+     - name: template out ingress configuration file
+       ansible.builtin.template: src=vhosts.conf.j2 dest="{{ ingress_container_persist_dir }}/user_conf.d/vhosts.conf" mode="0640"
+       notify: reload ingress container
+     - name: test templated configuration file
+       community.docker.docker_container_exec:
+         container: ingress
+         command: nginx -t
+       changed_when: false

View File

@ -53,9 +53,14 @@ server {
  proxy_buffers 4 256k;
  proxy_busy_buffers_size 256k;
  proxy_set_header Host $host;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "Upgrade";
  proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $remote_addr;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_pass {{ server.proxy_pass }};
+ proxy_request_buffering off;
+ {% if server.proxy_extra is defined %}{{ server.proxy_extra }}{% endif %}
  }
  {% elif server.proxies is defined %}
  # Proxy locations
@ -65,9 +70,14 @@ server {
  proxy_buffers 4 256k;
  proxy_busy_buffers_size 256k;
  proxy_set_header Host $host;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "Upgrade";
  proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $remote_addr;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_pass {{ proxy.pass }};
+ proxy_request_buffering off;
+ {% if proxy.extra is defined %}{{ proxy.extra }}{% endif %}
  }
  {% endfor %}
  {% endif %}

View File

@ -0,0 +1,18 @@
#!/usr/bin/env ansible-playbook
---
kodi_flatpak_name: "tv.kodi.Kodi"
kodi_autologin_user: "kodi"
kodi_autologin_user_groups:
- audio # Gotta be able to play audio
- tty # Required to start Cage
- video # Not sure if required, but could be useful for hw accel
kodi_autologin_service: "kodi.service"
kodi_apt_packages:
- alsa-utils # For testing audio
- cage # A kiosk wayland compositor
- pipewire # Audio routing
- pipewire-pulse
- wireplumber
- xwayland # Required for Kodi since it's not Wayland-native

View File

@ -0,0 +1,8 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- name: restart kodi
ansible.builtin.systemd:
name: "{{ kodi_autologin_service }}"
state: restarted
daemon_reload: yes

roles/kodi/meta/main.yml (new file)
View File

@ -0,0 +1,5 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
dependencies:
- role: flatpak

Some files were not shown because too many files have changed in this diff.