Compare commits

1 commit: a0815b3100
@@ -64,34 +64,31 @@ Test:

# PRE-MAIN CONFIGURATION
Local:
  stage: play-pre
  only:
    - pipelines
    - schedules
  script:
    - ansible-playbook --skip-tags no-auto playbooks/site_local.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
Pre:
  stage: play-pre
  only:
    - pipelines
    - schedules
  script:
    - ansible-playbook --skip-tags no-auto playbooks/site_pre.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass

# MAIN CONFIGURATION
Main:
  stage: play-main
  only:
    - pipelines
    - schedules
  retry: 1
  script:
    - ansible-playbook --skip-tags no-auto playbooks/site_main.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
Common:
  stage: play-main
  script:
    - ansible-playbook --skip-tags no-auto playbooks/site_common.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
Nagios:
  stage: play-main
  retry: 1
  script:
    - ansible-playbook -l vm-general-1.ashburn.mgmt.desu.ltd playbooks/prod_web.yml --tags nagios --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass

# CLEANUP
Cleanup:
  stage: play-post
  only:
    - pipelines
    - schedules
  script:
    - ansible-playbook --skip-tags no-auto playbooks/site_post.yml --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' --vault-password-file ~/.vault_pass
README.md (79 lines changed)

@@ -1,60 +1,17 @@
# Desu LTD Ansible
# Salt's Ansible Repository

Ansible scripts that manage infra for all of Desu LTD
Useful for management across all of 9iron, thefuck, and desu.

## Initialization

Clone the repo, then:

* Clone
* `ansible-galaxy install -r requirements.yml`

```bash
# Set up execution environment
python3 -m venv venv
. venv/bin/activate
pip3 install -r requirements.txt
# Set up Ansible Galaxy roles
ansible-galaxy install -r requirements.yml
# Set up password
# This one's optional if you want to --ask-vault-pass instead
touch ~/.vault_pass
chmod 0600 ~/.vault_pass
vim ~/.vault_pass
```

For quick bootstrapping of tools and libraries used in this repo, see [rehashedsalt/ansible-env](https://gitlab.com/rehashedsalt/docker-ansible-env). I use that exact image for CI/CD.

Regular runs of this repo are invoked in [rehashedsalt/ansible-env](https://gitlab.com/rehashedsalt/docker-ansible-env). See Obsidian notes for details.

## Deployment

## Usage

To run the whole playbook:

```bash
./site.yml
```

To deploy a core service to a single machine while you're working on it:

```bash
./playbooks/site_main.yml -l my.host --tags someservice
```

All `yml` files that can be invoked at the command line are marked executable and have a shebang at the top. If they do not have these features, you're looking at an include or something.

## Structure

The structure of the playbooks in this repo is as follows:

* `site.yml` - Master playbook, which calls in:
  * `playbooks/site_local.yml` - Tasks that run solely on the Ansible controller. Mostly used for DNS
  * `playbooks/site_pre.yml` - Basic machine bootstrapping and configuration that must be done before services are deployed. Does things like connect a machine to the management Zerotier network, ensure basic packages, ensure monitoring can hook in, etc.
  * `playbooks/site_main.yml` - Main service deployment is done here. If you're iterating on a service, invoke this one
  * `playbooks/site_post.yml` - Cleanup tasks. Mostly relevant for the regular autoruns. Cleans up old Docker images and reboots boxes

Most services are containerized -- their definitions are in `playbooks/tasks` and are included where relevant.

## Bootstrapping
### Linux Machines

Each Linux machine will require the following to be fulfilled for Ansible to access it:

@@ -68,14 +25,24 @@ Each Linux machine will require the following to be fulfilled for Ansible to access it:

To automate these host-local steps, use the script file `contrib/bootstrap.sh`.

## Netbox
### Windows Machines

These playbooks depend heavily on Netbox for:
lol don't

* Inventory, including primary IP, hostname, etc.
### All Machines

* Data on what services to deploy
Adding a new server will require these:

* Data on what services to monitor
* The server is accessible from the Ansible host;

Thus, if Netbox is inaccessible, a large portion of these scripts will malfunction. If you anticipate Netbox will be unavailable for whatever reason, run `ansible-inventory` by hand and save the output to a file. Macros for things like monitoring will not work, but you'll at least have an inventory and tags.
* The server has been added to NetBox OR is in `inventory-hard`

* DNS records for the machine are set; and

From there, running the playbook `site.yml` should get the machine up to snuff.

## Zerotier

A lot of my home-network side of things is connected together via ZeroTier; initial deployment/repairs may require specifying an `ansible_host` for the inventory item in question to connect to it locally. Subsequent plays will require connectivity to my home ZeroTier network.

Cloud-managed devices require no such workarounds.
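If NetBox downtime is anticipated, the fallback described in the Netbox section above can be captured ahead of time. A minimal sketch, assuming the inventory is snapshotted while NetBox is still reachable; the snapshot filename, host, and tag below are illustrative, not part of the repo:

```bash
# Dump the NetBox-backed inventory to a static YAML file, then point later
# runs at that snapshot instead of the live inventory source.
ansible-inventory --list --yaml > inventory-snapshot.yml
ansible-playbook playbooks/site_main.yml -i inventory-snapshot.yml -l my.host --tags someservice
```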
ansible.cfg (13 lines changed)

@@ -1,12 +1,14 @@
[defaults]
# Tune this higher if you have a large number of machines
forks = 8
# I have a large number of machines, which warrants a large forks setting
# here.
forks = 16
# We set gathering to smart here as I'm often executing the site-wide playbook,
# which means a ton of redundant time gathering facts that haven't changed
# otherwise.
gathering = smart
# host_key_checking is disabled because nearly 90% of my Ansible plays are in
# ephemeral environments and I'm constantly spinning machines up and down.
# In theory this is an attack vector that I need to work on a solution for.
host_key_checking = false
# Explicitly set the python3 interpreter for legacy hosts.
interpreter_python = python3

@@ -26,7 +28,7 @@ roles_path = .roles:roles

system_warnings = true
# We set this to avoid circumstances in which we time out waiting for a privesc
# prompt. Zerotier, as a management network, can be a bit slow at times.
#timeout = 30
timeout = 60
# Bad
vault_password_file = ~/.vault_pass

@@ -39,8 +41,9 @@ always = true

become = true

[ssh_connection]
# We set retries to be a fairly higher number, all things considered.
#retries = 3
# The number of retries here is insane because of the volatility of my home
# network, where a number of my machines live.
retries = 15
# These extra args are used for bastioning, where the ephemeral Ansible
# controller remotes into a bastion machine to access the rest of the
# environment.
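The bastion arguments that the last comment refers to are cut off at the hunk boundary, so the actual `ssh_args` value is not shown here. For illustration only, the CI jobs above express the same hop on the command line; any equivalent `[ssh_connection]` setting in this file is an assumption:

```bash
# The scheduled CI jobs reach managed hosts through the Dallas bastion with an
# OpenSSH ProxyCommand; ansible.cfg presumably carries the same option in ssh_args.
ansible-playbook --skip-tags no-auto playbooks/site_pre.yml \
  --ssh-common-args='-o ProxyCommand="ssh -W %h:%p -q ansible@bastion1.dallas.mgmt.desu.ltd"' \
  --vault-password-file ~/.vault_pass
```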
@@ -1,4 +0,0 @@

#! /bin/sh
git submodule update --recursive --remote --init
git submodule -q foreach 'git checkout -q master && git pull'
git status
inventories/hardcoded/host_vars (new symbolic link)

@@ -0,0 +1 @@
../production/host_vars

inventories/production-no-auto/host_vars (new symbolic link)

@@ -0,0 +1 @@
../production/host_vars
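Both of these inventories reuse the production `host_vars` by linking to the same directory. A minimal sketch of how such links are created; the commands are illustrative, only the resulting symlinks are part of this change:

```bash
# Create relative symlinks so the hardcoded and production-no-auto inventories
# share host_vars with the production inventory.
ln -s ../production/host_vars inventories/hardcoded/host_vars
ln -s ../production/host_vars inventories/production-no-auto/host_vars
```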
@ -17,100 +17,6 @@ netbox_token: !vault |
|
||||
37323530333463383062396363616263386430356438306133393130626365333932323734383165
|
||||
3064663435626339393836353837643730333266366436373033
|
||||
|
||||
# Terraria modlists
|
||||
tml_basic_qol:
|
||||
# Better Zoom: Enables zooming out further than 100% for higher-res monitors
|
||||
- "2562953970"
|
||||
# Smarter Cursor: Cursor be smarter idort
|
||||
- "2877850919"
|
||||
# Heart Crystal & Life Fruit Glow
|
||||
- "2853619836"
|
||||
# Ore Excavation (Veinminer)
|
||||
- "2565639705"
|
||||
# Shared World Map
|
||||
- "2815010161"
|
||||
# Boss Cursor
|
||||
- "2816694149"
|
||||
# WMITF (What Mod Is This From (WAILA (WAWLA (WTFAILA))))
|
||||
- "2563851005"
|
||||
# Multiplayer Boss Fight Stats
|
||||
- "2822937879"
|
||||
# Census (Shows you all the NPCs and their move-in requirements)
|
||||
- "2687866031"
|
||||
# Shop Expander (Prevents overloading shops)
|
||||
- "2828370879"
|
||||
# Boss Checklist
|
||||
- "2669644269"
|
||||
# Auto Trash
|
||||
- "2565540604"
|
||||
tml_advanced_qol:
|
||||
# Quality of Terraria (IT HAS INSTA HOIKS LET'S FUCKING GO)
|
||||
# Also adds the "Can be shimmered into" and similar text
|
||||
- "2797518634"
|
||||
# Chat Source
|
||||
- "2566083800"
|
||||
# The Shop Market (it's like the Market from that one Minecraft mod)
|
||||
- "2572367426"
|
||||
# Fishing with Explosives
|
||||
- "3238219681"
|
||||
# Generated Housing (Adds pregenned home)
|
||||
- "3141716573"
|
||||
# Happiness Removal
|
||||
- "2563345152"
|
||||
tml_libs:
|
||||
# Luminance, library mod
|
||||
- "3222493606"
|
||||
# Subworld Lib: Required by a few mods (TSA and others)
|
||||
- "2785100219"
|
||||
tml_basics:
|
||||
# Magic Storage Starter Kit
|
||||
- "2906446375"
|
||||
# Magic Storage, absoluteAquarian utilities
|
||||
- "2563309347"
|
||||
- "2908170107"
|
||||
# Wing Slot Extra
|
||||
- "2597324266"
|
||||
# Better Caves
|
||||
- "3158254975"
|
||||
tml_calamity:
|
||||
# Calamity, Calamity Music, CalValEX
|
||||
- "2824688072"
|
||||
- "2824688266"
|
||||
- "2824688804"
|
||||
tml_calamity_classes:
|
||||
# Calamity Ranger Expansion
|
||||
- "2860270524"
|
||||
# Calamity Whips
|
||||
- "2839001756"
|
||||
tml_calamity_clamity:
|
||||
# Clamity (sic), Music
|
||||
- "3028584450"
|
||||
- "3161277410"
|
||||
tml_fargos:
|
||||
# Luminance, library mod
|
||||
- "3222493606"
|
||||
# Fargos Mutant Mod. Adds the NPC and infinite items and instas and stuff
|
||||
- "2570931073"
|
||||
# Fargos Souls, adds... souls
|
||||
- "2815540735"
|
||||
# Fargos Souls DLC (Calamity compat)
|
||||
- "3044249615"
|
||||
# Fargos Souls More Cross-Mod (Consolaria, Spirit, Mod of Redemption compat)
|
||||
- "3326463997"
|
||||
tml_touhou:
|
||||
# Gensokyo (UN Owen Was Her plays in the distance)
|
||||
- "2817254924"
|
||||
tml_spirit:
|
||||
# Spirit Mod
|
||||
- "2982372319"
|
||||
tml_secrets:
|
||||
# Secrets of the Shadows
|
||||
- "2843112914"
|
||||
tml_yoyo_revamp:
|
||||
# Moomoo's Yoyo Revamp (and Lib)
|
||||
- "2977808495"
|
||||
- "3069154070"
|
||||
|
||||
# Admin user configuration
|
||||
adminuser_name: salt
|
||||
adminuser_ssh_authorized_keys:
|
||||
@ -120,16 +26,8 @@ adminuser_ssh_authorized_keys:
|
||||
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFS78eNBEZ1fWnGt0qyagCRG7P+8i3kYBqTYgou3O4U8 putty-generated on dsk-ryzen-0.desu.ltd
|
||||
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINq8NPEqSM0w7CkhdhsSgDsrcpgAvVg18oz9OybkqhHg salt@dsk-ryzen-0
|
||||
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGwFJmaV4JuxOOgF6Bqwo6FaCN5Mpcvd4/Vee7PsMBxu salt@lap-fw-diy-1.ws.mgmt.desu.ltd
|
||||
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwcV0mKhhQveIOjFKwt01S8WVtOn3Pfz6qa2P4/JR7S salt@lap-s76-lemp13-0.ws.mgmt.desu.ltd
|
||||
|
||||
# For backups
|
||||
backup_restic_password: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
65623036653432326435353932623037626532316631613763623237323533363938363462316237
|
||||
6363613363346239666630323134643866653436633537300a663732363565383061326135656539
|
||||
33313334656330366632613334366664613366313631363964373038396636623735633830386336
|
||||
3230316663373966390a663732373134323561313633363435376263643834383739643739303761
|
||||
62376231353936333666613661323864343439383736386636356561636463626266
|
||||
backup_s3_bucket: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
66316231643933316261303631656432376339663264666661663634616465326537303331626634
|
||||
@ -152,7 +50,29 @@ backup_s3_aws_secret_access_key: !vault |
|
||||
3635616437373236650a353661343131303332376161316664333833393833373830623130666633
|
||||
66356130646434653039363863346630363931383832353637636131626530616434
|
||||
backup_s3_aws_endpoint_url: "https://s3.us-east-005.backblazeb2.com"
|
||||
|
||||
backup_kopia_bucket_name: desultd-kopia
|
||||
backup_kopia_access_key_id: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
34633366656134376166636164643233353461396263313237653032353764613737393865373763
|
||||
6665633239396333633132323936343030346362333734640a356631373230383663383530333434
|
||||
32386639393135373236373263363365366163346234643135363766666666373938373135653663
|
||||
3836623735393563610a613332623965633032356266643638386230323965366233353930313239
|
||||
38666562326232353165323934303966643630383235393830613939616330333839
|
||||
backup_kopia_secret_access_key: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
31373662326464396136346663626635363332303862613466316236333431636136373038666531
|
||||
6630616565613431323464373862373963356335643435360a353665356163313635393137363330
|
||||
66383531326535653066386432646464346161336363373334313064303261616238613564396439
|
||||
6439333432653862370a303461346438623263636364633437356432613831366462666666303633
|
||||
63643862643033376363353836616137366432336339383931363837353161373036
|
||||
backup_kopia_password: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
34306564393161336162633833356464373065643633343935373566316465373939663838343537
|
||||
3831343963666432323538636665663733353435636337340a633738306463646133643730333032
|
||||
33303962306136636163623930306238666633333738373435636366666339623562323531323732
|
||||
3330633238386336330a346431383233383533303131323736306636353033356538303264383963
|
||||
37306461613834643063383965356664326265383431336332303333636365316163363437343634
|
||||
6439613537396535656361616365386261336139366133393637
|
||||
|
||||
# For zerotier
|
||||
zerotier_personal_network_id: !vault |
|
||||
@ -170,15 +90,6 @@ zerotier_management_network_id: !vault |
|
||||
3430303130303766610a633131656431396332626336653562616363666433366664373635613934
|
||||
30316335396166633361666466346232323630396534386332613937366232613965
|
||||
|
||||
# For 5dd
|
||||
five_dd_db_pass: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
31343335306261333630316366366536356165346437393631643630636436626265616239666562
|
||||
3233353738643136356564396339666137353163393465330a306431376364353734346465643261
|
||||
64633065383939383562346332323636306565336139343734323861316335333932383863363233
|
||||
6130353534363563340a636164666631393132346535393936363963326430643638323330663437
|
||||
31396433303762633139376237373236383732623734626538653933366464623135
|
||||
|
||||
# For ara
|
||||
secret_ara_db_pass: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
@ -271,6 +182,65 @@ secret_grafana_matrix_token: !vault |
|
||||
30326666616362366133396562323433323435613232666337336430623230383765346333343232
|
||||
3765346238303835633337636233376263366130303436336439
|
||||
|
||||
# For Nagios
|
||||
secret_nagios_admin_pass: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
64333231393831303031616363363030613464653161313531316465346263313063626638363437
|
||||
3965303861646232393663633066363039636637343161340a643162633133336335313632383861
|
||||
34616338636630633539353335336631313361656633333539323130626132356263653436343363
|
||||
3930323538613137370a373861376566376631356564623665313662636562626234643862343863
|
||||
61326232633266633262613931303631396163326266386363366639366639613938
|
||||
secret_nagios_matrix_token: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
66366665666437643765366533646666386162393038653262333461376566333366363332643135
|
||||
6233376362633566303939623832636366333330393238370a323766366164393733383736633435
|
||||
37633137626634643530653665613166633439376333633663633561313864396465623036653063
|
||||
6433376138386531380a383762393137613738643538343438633730313135613730613139393536
|
||||
35666133666262383862663637623738643836383633653864626231623034613662646563623936
|
||||
3763356331333561383833386162616664376335333139376363
|
||||
nagios_contacts:
|
||||
- name: matrix
|
||||
host_notification_commands: notify-host-by-matrix
|
||||
service_notification_commands: notify-service-by-matrix
|
||||
host_notification_period: ansible-not-late-at-night
|
||||
service_notification_period: ansible-not-late-at-night
|
||||
extra:
|
||||
- key: contactgroups
|
||||
value: ansible
|
||||
- name: salt
|
||||
host_notification_commands: notify-host-by-email
|
||||
service_notification_commands: notify-service-by-email
|
||||
extra:
|
||||
- key: email
|
||||
value: alerts@babor.tech
|
||||
nagios_commands:
|
||||
# This command is included in the container image
|
||||
- name: check_nrpe
|
||||
command: "$USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$"
|
||||
- name: check_by_ssh
|
||||
command: "$USER1$/check_by_ssh -H $HOSTADDRESS$ -F /opt/nagios/etc/ssh_config -t 30 -q -i /opt/nagios/etc/id_ed25519 -l nagios-checker -C \"$ARG1$\""
|
||||
- name: notify-host-by-matrix
|
||||
command: "/usr/bin/printf \"%b\" \"$NOTIFICATIONTYPE$\\n$HOSTNAME$ is $HOSTSTATE$\\nAddress: $HOSTADDRESS$\\nInfo: $HOSTOUTPUT$\\nDate/Time: $LONGDATETIME$\" | /opt/Custom-Nagios-Plugins/notify-by-matrix"
|
||||
- name: notify-service-by-matrix
|
||||
command: "/usr/bin/printf \"%b\" \"$NOTIFICATIONTYPE$\\nService $HOSTALIAS$ - $SERVICEDESC$ is $SERVICESTATE$\\nInfo: $SERVICEOUTPUT$\\nDate/Time: $LONGDATETIME$\" | /opt/Custom-Nagios-Plugins/notify-by-matrix"
|
||||
nagios_services:
|
||||
# check_by_ssh checks
|
||||
- name: Last Ansible Play
|
||||
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_file_age /var/lib/ansible-last-run -w 432000 -c 604800
|
||||
- name: Reboot Required
|
||||
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_reboot_required
|
||||
- name: Unit backup.service
|
||||
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_systemd_unit backup.service
|
||||
hostgroup: "ansible,!role-hypervisor"
|
||||
- name: Unit backup.timer
|
||||
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_systemd_unit backup.timer
|
||||
hostgroup: "ansible,!role-hypervisor"
|
||||
# Tag-specific checks
|
||||
# zerotier
|
||||
- name: Unit zerotier-one.service
|
||||
command: check_by_ssh!/usr/local/bin/monitoring-scripts/check_systemd_unit zerotier-one.service
|
||||
hostgroup: tag-zt-personal
|
||||
|
||||
# For Netbox
|
||||
secret_netbox_user_pass: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
|
@ -2,3 +2,27 @@
|
||||
|
||||
# Docker settings
|
||||
docker_apt_arch: arm64
|
||||
|
||||
# DB secrets
|
||||
secret_grafana_local_db_pass: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
32326333383035393665316566363266623130313435353165613463336663393634353261623738
|
||||
3466636437303938363332633635363666633965386534630a646132666239623666306133313034
|
||||
63343030613033653964303330643063326636346263363264333061663964373036386536313333
|
||||
6432613734616361380a346138396335366638323266613963623731633437653964326465373538
|
||||
63613762633635613232303565383032313164393935303531356666303965663463366335376137
|
||||
6135376566336662313734333235633362386132333064303534
|
||||
secret_netbox_local_db_pass: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
33333232623431393930626435313138643963663731336530663066633563666439383936316538
|
||||
6337376232613937303635386235346561326134616265300a326266373834303137623439366438
|
||||
33616365353663633434653463643964613231343335326234343331396137363439666138376332
|
||||
3564356231336230630a336639656337353538633931623536303430363836386137646563613338
|
||||
66326661313064306162363265303636333765383736336231346136383763613131
|
||||
secret_keepalived_pass: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
65353963616637303932643435643262333438666566333138373539393836636135656162323965
|
||||
3036313035343835393439663065326536323464316566340a613966333731356631613536643332
|
||||
64613934346234316564613564363863356663653063333432316434353633333138643561316638
|
||||
6563386233656364310a626363663234623161363537323035663663383333353138386239623934
|
||||
65613231666661633262633439393462316337393532623263363630353133373236
|
||||
|
@@ -0,0 +1 @@

zerotier_repo_deb: "deb http://download.zerotier.com/debian/jammy jammy main"

@@ -0,0 +1,2 @@

# vim:ft=ansible
docker_apt_repository: "deb https://download.docker.com/linux/ubuntu focal stable"
@@ -5,4 +5,7 @@

  become: no
  tasks:
    - name: print os info
      debug: msg="{{ inventory_hostname }} - {{ ansible_distribution }} {{ ansible_distribution_version }}"
      debug: msg="{{ item }}"
      with_items:
        - "{{ ansible_distribution }}"
        - "{{ ansible_distribution_version }}"
@@ -22,6 +22,7 @@

    PermitRootLogin: no
    PrintMotd: no
    PubkeyAuthentication: yes
    Subsystem: "sftp /usr/lib/openssh/sftp-server"
    UsePAM: yes
    X11Forwarding: no
# We avoid running on "atomic_container" distros since they already ship
@@ -3,6 +3,7 @@

---
# Home desktops
- hosts: device_roles_bastion
  gather_facts: no
  roles:
    - role: backup
      vars:
@@ -4,23 +4,26 @@

# Home desktops
- hosts: device_roles_workstation
  roles:
    - role: backup
      vars:
        backup_s3backup_exclude_list_extra:
          # This isn't prefixed with / because, on ostree systems, this is in /var/home
          - "home/*/.var/app/com.valvesoftware.Steam"
          - "home/*/.var/app/com.visualstudio.code"
          - "home/*/.var/app/com.vscodium.codium"
          - "home/*/.cache"
          - "home/*/.ollama"
          - "home/*/.local/share/containers"
          - "home/*/.local/share/Trash"
      tags: [ backup ]
    - role: desktop
      tags: [ desktop ]
    - role: udev
      vars:
        udev_rules:
          # Switch RCM stuff
          - SUBSYSTEM=="usb", ATTR{idVendor}=="0955", MODE="0664", GROUP="plugdev"
      tags: [ desktop, udev ]
- hosts: lap-fw-diy-1.ws.mgmt.desu.ltd
  roles:
    - role: backup
      vars:
        backup_s3backup_tar_args_extra: h
        backup_s3backup_list_extra:
          - /home/salt/.backup/
      tags: [ backup ]
- hosts: dsk-ryzen-1.ws.mgmt.desu.ltd
  roles:
    - role: desktop
    - role: backup
      vars:
        backup_s3backup_tar_args_extra: h
        backup_s3backup_list_extra:
          - /home/salt/.backup/
      tags: [ backup ]
@@ -2,7 +2,8 @@

# vim:ft=ansible:
---
# Home media storage Pi
- hosts: srv-fw-13-1.home.mgmt.desu.ltd
- hosts: pi-homeauto-1.home.mgmt.desu.ltd
  gather_facts: no
  module_defaults:
    docker_container:
      state: started

@@ -14,21 +15,10 @@

      tags: [ docker ]
  tasks:
    - name: include tasks for apps
      include_tasks: tasks/{{ task }}
      include_tasks: tasks/app/{{ task }}
      with_items:
        # Home automation shit
        - app/ddns-route53.yml
        - app/homeassistant.yml
        - app/prometheus-netgear-exporter.yml
        # Media acquisition
        - web/lidarr.yml
        - web/prowlarr.yml
        - web/radarr.yml
        - web/sonarr.yml
        - web/transmission.yml
        # Media presentation
        - web/navidrome.yml
        - web/jellyfin.yml
        - ddns-route53.yml
        - homeassistant.yml
      loop_control:
        loop_var: task
      tags: [ always ]

@@ -37,11 +27,18 @@

      vars:
        backup_s3backup_list_extra:
          - /data
        backup_time: "Sun *-*-* 02:00:00"
      tags: [ backup ]
    - role: ingress-traefik
    - role: ingress
      vars:
        ingress_container_tls: no
        ingress_container_dashboard: no
        ingress_container_image: "nginx:latest"
        ingress_container_ports:
          - 80:80
        ingress_container_config_mount: /etc/nginx/conf.d
        ingress_container_persist_dir: /data/nginx
        ingress_listen_args: 80
        ingress_listen_tls: no
        ingress_servers:
          - name: homeauto.local.desu.ltd
            proxy_pass: http://localhost:8123
      tags: [ ingress ]
    # - role: kodi
    #   tags: [ kodi ]
@@ -108,29 +108,15 @@

          value: vm-general-1.ashburn.mgmt.desu.ltd
        - record: prometheus.desu.ltd
          value: vm-general-1.ashburn.mgmt.desu.ltd
        # Games
        - record: 5dd.desu.ltd
          value: vm-general-1.ashburn.mgmt.desu.ltd
        # Public media stuff
        - record: music.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
        - record: jellyfin.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
        - record: lidarr.media.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
        - record: prowlarr.media.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
        - record: slskd.media.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
          value: vm-general-1.ashburn.mgmt.desu.ltd
        - record: sonarr.media.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
          value: vm-general-1.ashburn.mgmt.desu.ltd
        - record: radarr.media.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
          value: vm-general-1.ashburn.mgmt.desu.ltd
        - record: transmission.media.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
        # HA
        - record: homeassistant.desu.ltd
          value: srv-fw-13-1.home.mgmt.desu.ltd
          value: vm-general-1.ashburn.mgmt.desu.ltd
      loop_control:
        label: "{{ item.record }}"
      delegate_to: localhost
@ -4,22 +4,10 @@
|
||||
---
|
||||
- hosts: vm-general-1.ashburn.mgmt.desu.ltd
|
||||
tasks:
|
||||
- name: assure postgresql repo key
|
||||
ansible.builtin.apt_key:
|
||||
url: https://www.postgresql.org/media/keys/ACCC4CF8.asc
|
||||
state: present
|
||||
tags: [ db, psql, repo ]
|
||||
- name: assure postgresql repo
|
||||
ansible.builtin.apt_repository:
|
||||
# Ex. "focal-pgdg main"
|
||||
repo: 'deb http://apt.postgresql.org/pub/repos/apt {{ ansible_distribution_release }}-pgdg main'
|
||||
state: present
|
||||
tags: [ db, psql, repo ]
|
||||
- name: assure prometheus psql exporter
|
||||
ansible.builtin.docker_container:
|
||||
name: prometheus-psql-exporter
|
||||
image: quay.io/prometheuscommunity/postgres-exporter
|
||||
restart_policy: unless-stopped
|
||||
env:
|
||||
DATA_SOURCE_URI: "10.0.0.2:5432/postgres"
|
||||
DATA_SOURCE_USER: "nagios"
|
||||
@ -30,15 +18,6 @@
|
||||
roles:
|
||||
- role: geerlingguy.postgresql
|
||||
vars:
|
||||
postgresql_version: "14"
|
||||
postgresql_data_dir: "/var/lib/postgresql/{{ postgresql_version }}/main"
|
||||
postgresql_bin_path: "/var/lib/postgresql/{{ postgresql_version }}/bin"
|
||||
postgresql_config_path: "/etc/postgresql/{{ postgresql_version }}/main"
|
||||
postgresql_packages:
|
||||
- "postgresql-{{ postgresql_version }}"
|
||||
- "postgresql-client-{{ postgresql_version }}"
|
||||
- "postgresql-server-dev-{{ postgresql_version }}"
|
||||
- libpq-dev
|
||||
postgresql_global_config_options:
|
||||
- option: listen_addresses
|
||||
value: 10.0.0.2,127.0.0.1
|
||||
|
@ -3,7 +3,7 @@
|
||||
# Webservers
|
||||
---
|
||||
- hosts: vm-general-1.ashburn.mgmt.desu.ltd
|
||||
#gather_facts: no
|
||||
gather_facts: no
|
||||
module_defaults:
|
||||
docker_container:
|
||||
restart_policy: unless-stopped
|
||||
@ -29,14 +29,16 @@
|
||||
- web/nextcloud.yml
|
||||
- web/synapse.yml
|
||||
# Backend web services
|
||||
- web/prowlarr.yml
|
||||
- web/radarr.yml
|
||||
- web/sonarr.yml
|
||||
- web/srv.yml
|
||||
- web/transmission.yml
|
||||
# Games
|
||||
- game/factorio.yml
|
||||
- game/minecraft-createfarming.yml
|
||||
- game/minecraft-magicpack.yml
|
||||
- game/minecraft-weedie.yml
|
||||
- game/minecraft-direwolf20.yml
|
||||
- game/zomboid.yml
|
||||
- game/satisfactory.yml
|
||||
tags: [ always ]
|
||||
roles:
|
||||
- role: backup
|
||||
@ -45,9 +47,7 @@
|
||||
- /app/gitea/gitea
|
||||
- /data
|
||||
backup_s3backup_exclude_list_extra:
|
||||
- /data/minecraft/magicpack/backups
|
||||
- /data/minecraft/direwolf20/backups
|
||||
- /data/minecraft/weedie/backups
|
||||
- /data/shared/media
|
||||
- /data/shared/downloads
|
||||
- /data/zomboid/ZomboidDedicatedServer/steamapps/workshop
|
||||
@ -60,22 +60,16 @@
|
||||
tags: [ web, git ]
|
||||
- role: prometheus
|
||||
tags: [ prometheus, monitoring, no-test ]
|
||||
- role: gameserver-terraria
|
||||
- role: nagios
|
||||
vars:
|
||||
terraria_server_name: "lea-wants-to-play"
|
||||
terraria_motd: "DID SOMEBODY SAY MEATLOAF??"
|
||||
terraria_world_name: "SuperBepisLand"
|
||||
terraria_world_seed: "Make it 'all eight'. As many eights as you can fit in the text box."
|
||||
terraria_mods: "{{ tml_basics + tml_basic_qol + tml_libs + tml_calamity + tml_yoyo_revamp + tml_calamity_classes }}"
|
||||
tags: [ terraria, tmodloader, lea ]
|
||||
# - role: gameserver-terraria
|
||||
# vars:
|
||||
# terraria_server_remove: yes
|
||||
# terraria_server_name: "generic"
|
||||
# terraria_world_name: "Seaborgium"
|
||||
# terraria_world_seed: "benis"
|
||||
# terraria_mods: "{{ tml_basic_qol + tml_advanced_qol + tml_libs + tml_basics + tml_calamity + tml_calamity_classes + tml_calamity_clamity + tml_fargos + tml_touhou + tml_yoyo_revamp + tml_spirit + tml_secrets + tml_yoyo_revamp }}"
|
||||
# tags: [ terraria, tmodloader, generic ]
|
||||
# Definitions for contacts and checks are defined in inventory vars
|
||||
# See group_vars/all.yml if you need to change those
|
||||
nagios_matrix_server: "https://matrix.desu.ltd"
|
||||
nagios_matrix_room: "!NWNCKlNmOTcarMcMIh:desu.ltd"
|
||||
nagios_matrix_token: "{{ secret_nagios_matrix_token }}"
|
||||
nagios_data_dir: /data/nagios
|
||||
nagios_admin_pass: "{{ secret_nagios_admin_pass }}"
|
||||
tags: [ nagios, no-auto ]
|
||||
- role: ingress
|
||||
vars:
|
||||
ingress_head: |
|
||||
@ -117,12 +111,12 @@
|
||||
pass: http://element:80
|
||||
directives:
|
||||
- "client_max_body_size 0"
|
||||
- name: nagios.desu.ltd
|
||||
proxy_pass: http://nagios:80
|
||||
- name: nc.desu.ltd
|
||||
directives:
|
||||
- "add_header Strict-Transport-Security \"max-age=31536000\""
|
||||
- "client_max_body_size 0"
|
||||
- "keepalive_requests 99999"
|
||||
- "keepalive_timeout 600"
|
||||
proxy_pass: http://nextcloud:80
|
||||
locations:
|
||||
- location: "^~ /.well-known"
|
||||
@ -143,6 +137,27 @@
|
||||
- "allow 45.79.58.44/32" # bastion1.dallas.mgmt.desu.ltd
|
||||
- "deny all"
|
||||
proxy_pass: http://prometheus:9090
|
||||
# desu.ltd media bullshit
|
||||
- name: prowlarr.media.desu.ltd
|
||||
directives:
|
||||
- "allow {{ common_home_address }}/{{ common_home_address_mask }}"
|
||||
- "deny all"
|
||||
proxy_pass: http://prowlarr:9696
|
||||
- name: sonarr.media.desu.ltd
|
||||
directives:
|
||||
- "allow {{ common_home_address }}/{{ common_home_address_mask }}"
|
||||
- "deny all"
|
||||
proxy_pass: http://sonarr:8989
|
||||
- name: radarr.media.desu.ltd
|
||||
directives:
|
||||
- "allow {{ common_home_address }}/{{ common_home_address_mask }}"
|
||||
- "deny all"
|
||||
proxy_pass: http://radarr:7878
|
||||
- name: transmission.media.desu.ltd
|
||||
directives:
|
||||
- "allow {{ common_home_address }}/{{ common_home_address_mask }}"
|
||||
- "deny all"
|
||||
proxy_pass: http://transmission:9091
|
||||
# 9iron
|
||||
- name: www.9iron.club
|
||||
directives:
|
||||
|
playbooks/site_common.yml (new executable file, 5 lines)

@@ -0,0 +1,5 @@

#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
# Supplementary tags
- import_playbook: tags_ansible.yml
@@ -8,5 +8,3 @@

- import_playbook: prod_web.yml
# Home automation stuff
- import_playbook: home_automation.yml
# Backup management stuff
- import_playbook: tags_restic-prune.yml
playbooks/tags_ansible.yml (new executable file, 8 lines)

@@ -0,0 +1,8 @@

#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- hosts: tags_ansible
  gather_facts: no
  roles:
    - role: ansible
      tags: [ ansible ]
@@ -3,11 +3,34 @@

---
- hosts: tags_autoreboot
  gather_facts: no
  module_defaults:
    nagios:
      author: Ansible
      action: downtime
      cmdfile: /data/nagios/var/rw/nagios.cmd
      comment: "Ansible tags_autoreboot task"
      host: "{{ inventory_hostname }}"
      minutes: 10
  serial: 1
  tasks:
    - name: check for reboot-required
      ansible.builtin.stat: path=/var/run/reboot-required
      register: s
    - name: reboot
      ansible.builtin.reboot: reboot_timeout=600
      block:
        - name: attempt to schedule downtime
          block:
            - name: register nagios host downtime
              nagios:
                service: host
              delegate_to: vm-general-1.ashburn.mgmt.desu.ltd
            - name: register nagios service downtime
              nagios:
                service: all
              delegate_to: vm-general-1.ashburn.mgmt.desu.ltd
          rescue:
            - name: notify of failure to reboot
              ansible.builtin.debug: msg="Miscellaneous failure when scheduling downtime"
        - name: reboot
          ansible.builtin.reboot: reboot_timeout=600
      when: s.stat.exists
@ -2,56 +2,71 @@
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
- hosts: tags_nagios
|
||||
gather_facts: yes
|
||||
gather_facts: no
|
||||
roles:
|
||||
- role: git
|
||||
vars:
|
||||
git_repos:
|
||||
- repo: https://git.desu.ltd/salt/monitoring-scripts
|
||||
dest: /usr/local/bin/monitoring-scripts
|
||||
tags: [ nagios, git ]
|
||||
tasks:
|
||||
- name: assure prometheus containers for docker hosts
|
||||
block:
|
||||
- name: assure prometheus node exporter
|
||||
# https://github.com/prometheus/node_exporter
|
||||
ansible.builtin.docker_container:
|
||||
name: prometheus-node-exporter
|
||||
image: quay.io/prometheus/node-exporter:latest
|
||||
restart_policy: unless-stopped
|
||||
command:
|
||||
- '--path.rootfs=/host'
|
||||
- '--collector.interrupts'
|
||||
- '--collector.processes'
|
||||
network_mode: host
|
||||
pid_mode: host
|
||||
volumes:
|
||||
- /:/host:ro,rslave
|
||||
tags: [ prometheus ]
|
||||
- name: assure prometheus cadvisor exporter
|
||||
ansible.builtin.docker_container:
|
||||
name: prometheus-cadvisor-exporter
|
||||
image: gcr.io/cadvisor/cadvisor:latest
|
||||
restart_policy: unless-stopped
|
||||
ports:
|
||||
- 9101:8080/tcp
|
||||
volumes:
|
||||
- /:/rootfs:ro
|
||||
- /var/run:/var/run:ro
|
||||
- /sys:/sys:ro
|
||||
- /var/lib/docker:/var/lib/docker:ro
|
||||
- /dev/disk:/dev/disk:ro
|
||||
devices:
|
||||
- /dev/kmsg
|
||||
when: ansible_pkg_mgr != "atomic_container"
|
||||
- name: assure prometheus containers for coreos
|
||||
block:
|
||||
- name: assure prometheus node exporter
|
||||
# https://github.com/prometheus/node_exporter
|
||||
containers.podman.podman_container:
|
||||
name: prometheus-node-exporter
|
||||
image: quay.io/prometheus/node-exporter:latest
|
||||
restart_policy: unless-stopped
|
||||
command:
|
||||
- '--path.rootfs=/host'
|
||||
- '--collector.interrupts'
|
||||
- '--collector.processes'
|
||||
network_mode: host
|
||||
pid_mode: host
|
||||
volumes:
|
||||
- /:/host:ro,rslave
|
||||
tags: [ prometheus ]
|
||||
when: ansible_pkg_mgr == "atomic_container"
|
||||
- name: assure nagios plugin packages
|
||||
ansible.builtin.apt: name=monitoring-plugins,nagios-plugins-contrib
|
||||
tags: [ nagios ]
|
||||
- name: assure nagios user
|
||||
ansible.builtin.user: name=nagios-checker state=present system=yes
|
||||
tags: [ nagios ]
|
||||
- name: assure nagios user ssh key
|
||||
authorized_key:
|
||||
user: nagios-checker
|
||||
state: present
|
||||
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNavw28C0mKIQVRLQDW2aoovliU1XCGaenDhIMwumK/ Nagios monitoring"
|
||||
tags: [ nagios ]
|
||||
- name: assure nagios user sudo rule file
|
||||
ansible.builtin.file: path=/etc/sudoers.d/50-nagios-checker mode=0750 owner=root group=root state=touch modification_time=preserve access_time=preserve
|
||||
tags: [ nagios, sudo ]
|
||||
- name: assure nagios user sudo rules
|
||||
ansible.builtin.lineinfile:
|
||||
path: /etc/sudoers.d/50-nagios-checker
|
||||
line: "nagios-checker ALL = (root) NOPASSWD: {{ item }}"
|
||||
with_items:
|
||||
- /usr/lib/nagios/plugins/check_disk
|
||||
- /usr/local/bin/monitoring-scripts/check_docker
|
||||
- /usr/local/bin/monitoring-scripts/check_temp
|
||||
tags: [ nagios, sudo ]
|
||||
- name: assure prometheus node exporter
|
||||
# https://github.com/prometheus/node_exporter
|
||||
ansible.builtin.docker_container:
|
||||
name: prometheus-node-exporter
|
||||
image: quay.io/prometheus/node-exporter:latest
|
||||
command:
|
||||
- '--path.rootfs=/host'
|
||||
- '--collector.interrupts'
|
||||
- '--collector.processes'
|
||||
network_mode: host
|
||||
pid_mode: host
|
||||
volumes:
|
||||
- /:/host:ro,rslave
|
||||
tags: [ prometheus ]
|
||||
- name: assure prometheus cadvisor exporter
|
||||
ansible.builtin.docker_container:
|
||||
name: prometheus-cadvisor-exporter
|
||||
image: gcr.io/cadvisor/cadvisor:latest
|
||||
ports:
|
||||
- 9101:8080/tcp
|
||||
volumes:
|
||||
- /:/rootfs:ro
|
||||
- /var/run:/var/run:ro
|
||||
- /sys:/sys:ro
|
||||
- /var/lib/docker:/var/lib/docker:ro
|
||||
- /dev/disk:/dev/disk:ro
|
||||
devices:
|
||||
- /dev/kmsg
|
||||
- hosts: all
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: disable nagios user when not tagged
|
||||
ansible.builtin.user: name=nagios-checker state=absent remove=yes
|
||||
when: "'tags_nagios' not in group_names"
|
||||
tags: [ nagios ]
|
||||
|
@@ -1,10 +0,0 @@

#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- hosts: tags_restic-prune
  roles:
    - role: backup
      vars:
        backup_restic: no
        backup_restic_prune: yes
      tags: [ backup, prune, restic, restic-prune ]
@@ -2,7 +2,7 @@

- name: docker deploy homeassistant
  docker_container:
    name: homeassistant
    image: homeassistant/home-assistant:latest
    image: "ghcr.io/home-assistant/raspberrypi4-homeassistant:stable"
    privileged: yes
    network_mode: host
    volumes:
@ -1,30 +0,0 @@
|
||||
# vim:ft=ansible:
|
||||
#
|
||||
# Bless this man. Bless him dearly:
|
||||
# https://github.com/DRuggeri/netgear_exporter
|
||||
#
|
||||
- name: docker deploy netgear prometheus exporter
|
||||
vars:
|
||||
netgear_admin_password: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
31346635363565363532653831613034376535653530376237343261623736326230393333326337
|
||||
3062643963353334323439306361356437653834613832310a666366393662303166313733393831
|
||||
32373465356638393138633963666337643333303435653537666361363437633533333263303938
|
||||
6536353530323036350a656330326662373836393736383961393537666537353138346439626566
|
||||
64336631656538343335343535343338613465393635333937656237333531303230
|
||||
docker_container:
|
||||
name: prometheus-netgear-exporter
|
||||
image: ghcr.io/druggeri/netgear_exporter
|
||||
env:
|
||||
NETGEAR_EXPORTER_PASSWORD: "{{ netgear_admin_password }}"
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "redis" ]
|
||||
ports:
|
||||
- "9192:9192/tcp"
|
||||
command:
|
||||
- "--url=http://192.168.1.1:5000" # Set the URL to the SOAP port of the router, NOT the admin interface
|
||||
- "--insecure" # Required when accessing over IP
|
||||
- "--timeout=15" # The router is slow as balls
|
||||
- "--filter.collectors=Client,Traffic" # Filter out SystemInfo to lower collection time
|
||||
tags: [ docker, prometheus, netgear, prometheus-netgear ]
|
@ -2,28 +2,16 @@
|
||||
- name: docker deploy minecraft - create farming and delights
|
||||
docker_container:
|
||||
name: minecraft-createfarming
|
||||
state: absent
|
||||
state: started
|
||||
image: itzg/minecraft-server:latest
|
||||
restart_policy: unless-stopped
|
||||
pull: yes
|
||||
env:
|
||||
# Common envvars
|
||||
EULA: "true"
|
||||
OPS: "VintageSalt"
|
||||
SNOOPER_ENABLED: "false"
|
||||
SPAWN_PROTECTION: "0"
|
||||
USE_AIKAR_FLAGS: "true"
|
||||
RCON_CMDS_STARTUP: |-
|
||||
scoreboard objectives add Deaths deathCount
|
||||
#scoreboard objectives add Health health {"text":"❤","color":"red"}
|
||||
RCON_CMDS_ON_CONNECT: |-
|
||||
scoreboard objectives setdisplay list Deaths
|
||||
#scoreboard objectives setdisplay belowName Health
|
||||
# Pack-specific stuff
|
||||
MODRINTH_PROJECT: "https://modrinth.com/modpack/create-farmersdelight/version/1.0.0"
|
||||
MOTD: "Create Farming and Delights! Spinny trains!"
|
||||
TYPE: "MODRINTH"
|
||||
VERSION: "1.20.1"
|
||||
MAX_MEMORY: "6G"
|
||||
#VIEW_DISTANCE: "10"
|
||||
ports:
|
||||
- "25565:25565/tcp"
|
||||
- "25565:25565/udp"
|
||||
|
34
playbooks/tasks/game/minecraft-direwolf20.yml
Normal file
34
playbooks/tasks/game/minecraft-direwolf20.yml
Normal file
@ -0,0 +1,34 @@
|
||||
# vim:ft=ansible:
|
||||
- name: docker deploy minecraft - direwolf20
|
||||
docker_container:
|
||||
name: minecraft-direwolf20
|
||||
state: absent
|
||||
image: itzg/minecraft-server:latest
|
||||
restart_policy: unless-stopped
|
||||
pull: yes
|
||||
env:
|
||||
EULA: "true"
|
||||
GENERIC_PACK: "/modpacks/1.20.1-direwolf20/Da Bois.zip"
|
||||
TYPE: "NEOFORGE"
|
||||
VERSION: "1.20.1"
|
||||
FORGE_VERSION: "47.1.105"
|
||||
MEMORY: "8G"
|
||||
MOTD: "Tannerite Dog Edition\\n#abolishtheatf"
|
||||
OPS: "VintageSalt"
|
||||
RCON_CMDS_STARTUP: |-
|
||||
scoreboard objectives add Deaths deathCount
|
||||
scoreboard objectives add Health health {"text":"❤","color":"red"}
|
||||
RCON_CMDS_ON_CONNECT: |-
|
||||
scoreboard objectives setdisplay list Deaths
|
||||
scoreboard objectives setdisplay belowName Health
|
||||
SNOOPER_ENABLED: "false"
|
||||
SPAWN_PROTECTION: "0"
|
||||
USE_AIKAR_FLAGS: "true"
|
||||
VIEW_DISTANCE: "10"
|
||||
ports:
|
||||
- "25567:25565/tcp"
|
||||
- "25567:25565/udp"
|
||||
volumes:
|
||||
- /data/srv/packs:/modpacks
|
||||
- /data/minecraft/direwolf20:/data
|
||||
tags: [ docker, minecraft, direwolf20 ]
|
@ -1,50 +0,0 @@
|
||||
# vim:ft=ansible:
|
||||
- name: docker deploy minecraft - magicpack
|
||||
docker_container:
|
||||
name: minecraft-magicpack
|
||||
state: absent
|
||||
image: itzg/minecraft-server:java8
|
||||
env:
|
||||
# Common envvars
|
||||
EULA: "true"
|
||||
OPS: "VintageSalt"
|
||||
SNOOPER_ENABLED: "false"
|
||||
SPAWN_PROTECTION: "0"
|
||||
USE_AIKAR_FLAGS: "true"
|
||||
#
|
||||
# This enables the use of Ely.by as an auth and skin server
|
||||
# Comment this and the above line out if you'd like to use Mojang's
|
||||
# https://docs.ely.by/en/authlib-injector.html
|
||||
#
|
||||
# All players should register on Ely.by in order for this to work.
|
||||
# They should also use Fjord Launcher by Unmojang:
|
||||
# https://github.com/unmojang/FjordLauncher
|
||||
#
|
||||
JVM_OPTS: "-javaagent:/authlib-injector.jar=ely.by"
|
||||
RCON_CMDS_STARTUP: |-
|
||||
scoreboard objectives add Deaths deathCount
|
||||
#scoreboard objectives add Health health {"text":"❤","color":"red"}
|
||||
RCON_CMDS_ON_CONNECT: |-
|
||||
scoreboard objectives setdisplay list Deaths
|
||||
#scoreboard objectives setdisplay belowName Health
|
||||
# Pack-specific stuff
|
||||
MODRINTH_PROJECT: "https://srv.9iron.club/files/packs/1.7.10-magicpack/server.mrpack"
|
||||
MOTD: "It's ya boy, uh, skrunkly modpack"
|
||||
TYPE: "MODRINTH"
|
||||
VERSION: "1.7.10"
|
||||
MAX_MEMORY: "6G"
|
||||
#VIEW_DISTANCE: "10"
|
||||
ports:
|
||||
- "25565:25565/tcp"
|
||||
- "25565:25565/udp"
|
||||
- "24454:24454/udp"
|
||||
# Prometheus exporter for Forge
|
||||
# https://www.curseforge.com/minecraft/mc-mods/prometheus-exporter
|
||||
- "19565:19565/tcp"
|
||||
# Prometheus exporter for Fabric
|
||||
# https://modrinth.com/mod/fabricexporter
|
||||
#- "19565:25585/tcp"
|
||||
volumes:
|
||||
- /data/minecraft/magicpack:/data
|
||||
- /data/minecraft/authlib-injector-1.2.5.jar:/authlib-injector.jar
|
||||
tags: [ docker, minecraft, magicpack ]
|
33
playbooks/tasks/game/minecraft-vanilla.yml
Normal file
33
playbooks/tasks/game/minecraft-vanilla.yml
Normal file
@ -0,0 +1,33 @@
|
||||
# vim:ft=ansible:
|
||||
- name: docker deploy minecraft - vanilla
|
||||
docker_container:
|
||||
name: minecraft-vanilla
|
||||
state: absent
|
||||
image: itzg/minecraft-server:latest
|
||||
restart_policy: unless-stopped
|
||||
pull: yes
|
||||
env:
|
||||
DIFFICULTY: "normal"
|
||||
ENABLE_COMMAND_BLOCK: "true"
|
||||
EULA: "true"
|
||||
MAX_PLAYERS: "8"
|
||||
MODRINTH_PROJECT: "https://modrinth.com/modpack/adrenaserver"
|
||||
MOTD: "Tannerite Dog Edition\\n#abolishtheatf"
|
||||
OPS: "VintageSalt"
|
||||
RCON_CMDS_STARTUP: |-
|
||||
scoreboard objectives add Deaths deathCount
|
||||
scoreboard objectives add Health health {"text":"❤","color":"red"}
|
||||
RCON_CMDS_ON_CONNECT: |-
|
||||
scoreboard objectives setdisplay list Deaths
|
||||
scoreboard objectives setdisplay belowName Health
|
||||
SNOOPER_ENABLED: "false"
|
||||
SPAWN_PROTECTION: "0"
|
||||
TYPE: "MODRINTH"
|
||||
USE_AIKAR_FLAGS: "true"
|
||||
VIEW_DISTANCE: "12"
|
||||
ports:
|
||||
- "26565:25565/tcp"
|
||||
- "26565:25565/udp"
|
||||
volumes:
|
||||
- /data/minecraft/vanilla:/data
|
||||
tags: [ docker, minecraft ]
|
@ -1,44 +0,0 @@
|
||||
# vim:ft=ansible:
|
||||
- name: docker deploy minecraft - weediewack next gen pack
|
||||
docker_container:
|
||||
name: minecraft-weedie
|
||||
state: absent
|
||||
image: itzg/minecraft-server:latest
|
||||
env:
|
||||
# Common envvars
|
||||
EULA: "true"
|
||||
OPS: "VintageSalt"
|
||||
SNOOPER_ENABLED: "false"
|
||||
SPAWN_PROTECTION: "0"
|
||||
USE_AIKAR_FLAGS: "true"
|
||||
ALLOW_FLIGHT: "true"
|
||||
RCON_CMDS_STARTUP: |-
|
||||
scoreboard objectives add Deaths deathCount
|
||||
scoreboard objectives add Health health {"text":"❤","color":"red"}
|
||||
RCON_CMDS_ON_CONNECT: |-
|
||||
scoreboard objectives setdisplay list Deaths
|
||||
scoreboard objectives setdisplay belowName Health
|
||||
# Pack-specific stuff
|
||||
TYPE: "Forge"
|
||||
MOTD: "We're doing it a-fucking-gain!"
|
||||
VERSION: "1.20.1"
|
||||
FORGE_VERSION: "47.3.11"
|
||||
MAX_MEMORY: "8G"
|
||||
#GENERIC_PACKS: "Server Files 1.3.7"
|
||||
#GENERIC_PACKS_PREFIX: "https://mediafilez.forgecdn.net/files/5832/451/"
|
||||
#GENERIC_PACKS_SUFFIX: ".zip"
|
||||
#SKIP_GENERIC_PACK_UPDATE_CHECK: "true"
|
||||
#VIEW_DISTANCE: "10"
|
||||
ports:
|
||||
- "25565:25565/tcp"
|
||||
- "25565:25565/udp"
|
||||
- "24454:24454/udp"
|
||||
# Prometheus exporter for Forge
|
||||
# https://www.curseforge.com/minecraft/mc-mods/prometheus-exporter
|
||||
- "19566:19565/tcp"
|
||||
# Prometheus exporter for Fabric
|
||||
# https://modrinth.com/mod/fabricexporter
|
||||
#- "19565:25585/tcp"
|
||||
volumes:
|
||||
- /data/minecraft/weedie:/data
|
||||
tags: [ docker, minecraft, weedie ]
|
@ -1,47 +0,0 @@
|
||||
# vim:ft=ansible:
|
||||
- name: ensure docker network
|
||||
docker_network: name=satisfactory
|
||||
tags: [ satisfactory, docker, network ]
|
||||
- name: docker deploy satisfactory
|
||||
docker_container:
|
||||
name: satisfactory
|
||||
state: absent
|
||||
image: wolveix/satisfactory-server:latest
|
||||
restart_policy: unless-stopped
|
||||
pull: yes
|
||||
networks:
|
||||
- name: satisfactory
|
||||
aliases: [ "gameserver" ]
|
||||
env:
|
||||
MAXPLAYERS: "8"
|
||||
# We have this turned on for modding's sake
|
||||
#SKIPUPDATE: "true"
|
||||
ports:
|
||||
- '7777:7777/udp'
|
||||
- '7777:7777/tcp'
|
||||
volumes:
|
||||
- /data/satisfactory/config:/config
|
||||
tags: [ docker, satisfactory ]
|
||||
- name: docker deploy satisfactory sftp
|
||||
docker_container:
|
||||
name: satisfactory-sftp
|
||||
state: absent
|
||||
image: atmoz/sftp
|
||||
restart_policy: unless-stopped
|
||||
pull: yes
|
||||
ulimits:
|
||||
- 'nofile:262144:262144'
|
||||
ports:
|
||||
- '7776:22/tcp'
|
||||
volumes:
|
||||
- /data/satisfactory/config:/home/servermgr/game
|
||||
command: 'servermgr:{{ server_password }}:1000'
|
||||
vars:
|
||||
server_password: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
33336138656461646462323661363336623235333861663730373535656331623230313334353239
|
||||
6535623833343237626161383833663435643262376133320a616634613764396661316332373339
|
||||
33633662366666623931643635313162366339306539666632643437396637616632633432326631
|
||||
3038333932623638390a386362653463306338326436396230633562313466336464663764643461
|
||||
3134
|
||||
tags: [ docker, satisfactory, sidecar, sftp ]
|
@ -1,39 +0,0 @@
|
||||
# vim:ft=ansible:
|
||||
#
|
||||
# This is a really stupid game, source here:
|
||||
# https://github.com/Oliveriver/5d-diplomacy-with-multiverse-time-travel
|
||||
#
|
||||
- name: docker deploy 5d-diplomacy-with-multiverse-timetravel
|
||||
docker_container:
|
||||
name: 5d-diplomacy-with-multiverse-timetravel
|
||||
state: started
|
||||
#image: deluan/5d-diplomacy-with-multiverse-timetravel:latest
|
||||
image: rehashedsalt/5dd:latest
|
||||
env:
|
||||
ConnectionStrings__Database: "Server=5dd-mssql;Database=diplomacy;User=SA;Password={{ five_dd_db_pass }};Encrypt=True;TrustServerCertificate=True"
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "5d-diplomacy-with-multiverse-timetravel" ]
|
||||
# For unproxied use
|
||||
ports:
|
||||
- 5173:8080
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.5d-diplomacy-with-multiverse-timetravel.rule: Host(`5dd.desu.ltd`)
|
||||
traefik.http.routers.5d-diplomacy-with-multiverse-timetravel.entrypoints: web
|
||||
tags: [ docker, 5d-diplomacy-with-multiverse-timetravel ]
|
||||
- name: docker deploy 5dd mssql db
|
||||
docker_container:
|
||||
name: 5dd-mssql
|
||||
image: mcr.microsoft.com/mssql/server:2022-latest
|
||||
user: root
|
||||
env:
|
||||
ACCEPT_EULA: "y"
|
||||
MSSQL_SA_PASSWORD: "{{ five_dd_db_pass }}"
|
||||
volumes:
|
||||
- /data/5dd/mssql/data:/var/opt/mssql/data
|
||||
- /data/5dd/mssql/log:/var/opt/mssql/log
|
||||
- /data/5dd/mssql/secrets:/var/opt/mssql/secrets
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "5dd-mssql" ]
|
@@ -31,7 +31,7 @@

- name: docker deploy grafana matrix bridge
  docker_container:
    name: grafana-matrix-bridge
    image: registry.gitlab.com/hctrdev/grafana-matrix-forwarder:latest
    image: registry.gitlab.com/hectorjsmith/grafana-matrix-forwarder:latest
    env:
      GMF_MATRIX_USER: "@grafana:desu.ltd"
      GMF_MATRIX_PASSWORD: "{{ secret_grafana_matrix_token }}"
@ -1,44 +0,0 @@
|
||||
# vim:ft=ansible:
|
||||
#
|
||||
# This is a really stupid game, source here:
|
||||
# https://github.com/Oliveriver/5d-diplomacy-with-multiverse-time-travel
|
||||
#
|
||||
- name: set up jellyfin dirs
|
||||
ansible.builtin.file:
|
||||
state: directory
|
||||
owner: 911
|
||||
group: 911
|
||||
mode: "0750"
|
||||
path: "{{ item }}"
|
||||
with_items:
|
||||
- /data/jellyfin/config
|
||||
- /data/jellyfin/cache
|
||||
tags: [ docker, jellyfin ]
|
||||
- name: docker deploy jellyfin
|
||||
docker_container:
|
||||
name: jellyfin
|
||||
state: started
|
||||
image: jellyfin/jellyfin:latest
|
||||
user: 911:911
|
||||
groups:
|
||||
- 109 # render on Ubuntu systems
|
||||
env:
|
||||
JELLYFIN_PublishedServerUrl: "http://jellyfin.desu.ltd"
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "jellyfin" ]
|
||||
# For unproxied use
|
||||
#ports:
|
||||
# - 8096/tcp
|
||||
volumes:
|
||||
- /data/jellyfin/config:/config
|
||||
- /data/jellyfin/cache:/cache
|
||||
- /data/shared/media:/media
|
||||
devices:
|
||||
- /dev/dri/renderD128:/dev/dri/renderD128
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.jellyfin.rule: Host(`jellyfin.desu.ltd`)
|
||||
traefik.http.routers.jellyfin.entrypoints: web
|
||||
traefik.http.services.jellyfin.loadbalancer.server.port: "8096"
|
||||
tags: [ docker, jellyfin ]
|
@ -2,55 +2,14 @@
|
||||
- name: docker deploy lidarr
|
||||
docker_container:
|
||||
name: lidarr
|
||||
state: started
|
||||
#image: linuxserver/lidarr:latest
|
||||
image: ghcr.io/hotio/lidarr:pr-plugins
|
||||
image: linuxserver/lidarr:latest
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "lidarr" ]
|
||||
env:
|
||||
PUID: "911"
|
||||
PGID: "911"
|
||||
TZ: "America/Chicago"
|
||||
VPN_ENABLED: "false"
|
||||
volumes:
|
||||
# https://github.com/RandomNinjaAtk/arr-scripts?tab=readme-ov-file
|
||||
- /data/lidarr/bin:/usr/local/bin
|
||||
- /data/lidarr/config:/config
|
||||
- /data/shared/downloads:/data
|
||||
- /data/shared/media/music:/music
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.lidarr.rule: Host(`lidarr.media.desu.ltd`)
|
||||
traefik.http.routers.lidarr.entrypoints: web
|
||||
tags: [ docker, lidarr ]
|
||||
- name: assure slskd cleanup cronjob
|
||||
ansible.builtin.cron:
|
||||
user: root
|
||||
name: slskd-cleanup
|
||||
state: present
|
||||
hour: 4
|
||||
job: "find /data/shared/downloads/soulseek -mtime +7 -print -delete"
|
||||
tags: [ slskd, cron, cleanup ]
|
||||
- name: docker deploy slskd
|
||||
docker_container:
|
||||
name: lidarr-slskd
|
||||
state: started
|
||||
image: slskd/slskd:latest
|
||||
user: "911:911"
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "slskd" ]
|
||||
env:
|
||||
SLSKD_REMOTE_CONFIGURATION: "true"
|
||||
ports:
|
||||
- "50300:50300"
|
||||
volumes:
|
||||
- /data/slskd:/app
|
||||
- /data/shared/downloads/soulseek:/app/downloads
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.lidarr-slskd.rule: Host(`slskd.media.desu.ltd`)
|
||||
traefik.http.routers.lidarr-slskd.entrypoints: web
|
||||
traefik.http.services.lidarr-slskd.loadbalancer.server.port: "5030"
|
||||
tags: [ docker, slskd ]
|
||||
|
@ -1,39 +0,0 @@
|
||||
# vim:ft=ansible:
|
||||
- name: docker deploy navidrome
|
||||
docker_container:
|
||||
name: navidrome
|
||||
state: started
|
||||
image: deluan/navidrome:latest
|
||||
user: 911:911
|
||||
env:
|
||||
ND_BASEURL: "https://music.desu.ltd"
|
||||
ND_PROMETHEUS_ENABLED: "true"
|
||||
ND_LOGLEVEL: "info"
|
||||
ND_LASTFM_ENABLED: "true"
|
||||
ND_LASTFM_APIKEY: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
63333239613931623033656233353537653830623065386632393232316537356261393938323533
|
||||
6632633034643637653136633235393335303535653136340a363331653839383930396633363133
|
||||
62313964396161326231376534333064343736633466363962313662353665313230396237666363
|
||||
6565613939666663300a313462366137363661373839326636613064643032356437376536333366
|
||||
30366238646363316639373730343336373234313338663261616331666162653362626364323463
|
||||
3131666231383138623965656163373364326432353137663665
|
||||
ND_LASTFM_SECRET: !vault |
|
||||
$ANSIBLE_VAULT;1.1;AES256
|
||||
39316232373136663435323662333137636635326535643735383734666562303339663134336137
|
||||
3132613237613436336663303330623334663262313337350a393963653765343262333533373763
|
||||
37623230393638616535623861333135353038646532343038313865626435623830343361633938
|
||||
3232646462346163380a616462366435343934326232366233636564626262653965333564363731
|
||||
66656532663965616561313032646231663366663636663838633535393566363631346535383866
|
||||
6335623230303333346266306637353061356665383264333266
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "navidrome" ]
|
||||
volumes:
|
||||
- /data/navidrome/data:/data
|
||||
- /data/shared/media/music:/music:ro
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.navidrome.rule: Host(`music.desu.ltd`)
|
||||
traefik.http.routers.navidrome.entrypoints: web
|
||||
tags: [ docker, navidrome ]
|
@@ -2,7 +2,17 @@

- name: docker deploy nextcloud
  docker_container:
    name: nextcloud
    image: nextcloud:30
    image: nextcloud:27
    # The entrypoint workaround is for this issue:
    #
    # https://github.com/nextcloud/docker/issues/1414
    #
    # This installs imagemagick to allow for SVG support and to clear the last
    # setup warning in the application.
    # It can be safely removed upon closure of this issue. I'm just doing it to
    # make the big bad triangle go away.
    entrypoint: /bin/sh
    command: -c "apt-get update; apt-get install -y libmagickcore-6.q16-6-extra; /entrypoint.sh apache2-foreground"
    env:
      PHP_UPLOAD_LIMIT: 1024M
    networks:

@@ -13,22 +23,11 @@

      - /data/nextcloud/config:/var/www/html/config
      - /data/nextcloud/themes:/var/www/html/themes
      - /data/nextcloud/data:/var/www/html/data
      - /data/shared:/shared
  tags: [ docker, nextcloud ]
# Vanilla Nextcloud cron
- name: assure nextcloud cron cronjob
  ansible.builtin.cron: user=root name=nextcloud minute=*/5 job="docker exec --user www-data nextcloud php -f /var/www/html/cron.php"
  tags: [ docker, nextcloud, cron ]
# Plugin crons
- name: assure nextcloud preview generator cronjob
  ansible.builtin.cron: user=root name=nextcloud-preview-generator hour=1 minute=10 job="docker exec --user www-data nextcloud php occ preview:pre-generate"
  tags: [ docker, nextcloud, cron ]
# Maintenance tasks
- name: assure nextcloud update cronjob
  ansible.builtin.cron: user=root name=nextcloud-update minute=*/30 job="docker exec --user www-data nextcloud php occ app:update --all"
  tags: [ docker, nextcloud, cron ]
- name: assure nextcloud db indices cronjob
  ansible.builtin.cron: user=root name=nextcloud-update-db-inidices hour=1 job="docker exec --user www-data nextcloud php occ db:add-missing-indices"
  tags: [ docker, nextcloud, cron ]
- name: assure nextcloud expensive migration cronjob
  ansible.builtin.cron: user=root name=nextcloud-update-expensive-migration hour=1 minute=30 job="docker exec --user www-data nextcloud php occ db:add-missing-indices"
  tags: [ docker, nextcloud, cron ]
@ -8,8 +8,4 @@
|
||||
aliases: [ "prowlarr" ]
|
||||
volumes:
|
||||
- /data/prowlarr/config:/config
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.prowlarr.rule: Host(`prowlarr.media.desu.ltd`)
|
||||
traefik.http.routers.prowlarr.entrypoints: web
|
||||
tags: [ docker, prowlarr ]
|
||||
|
@ -10,8 +10,4 @@
|
||||
- /data/radarr/config:/config
|
||||
- /data/shared/downloads:/data
|
||||
- /data/shared/media/movies:/tv
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.radarr.rule: Host(`radarr.media.desu.ltd`)
|
||||
traefik.http.routers.radarr.entrypoints: web
|
||||
tags: [ docker, radarr ]
|
||||
|
@ -10,8 +10,4 @@
|
||||
- /data/sonarr/config:/config
|
||||
- /data/shared/downloads:/data
|
||||
- /data/shared/media/shows:/tv
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.sonarr.rule: Host(`sonarr.media.desu.ltd`)
|
||||
traefik.http.routers.sonarr.entrypoints: web
|
||||
tags: [ docker, sonarr ]
|
||||
|
@ -2,7 +2,7 @@
|
||||
- name: docker deploy synapse
|
||||
docker_container:
|
||||
name: synapse
|
||||
image: matrixdotorg/synapse:latest
|
||||
image: ghcr.io/element-hq/synapse:latest
|
||||
env:
|
||||
TZ: "America/Chicago"
|
||||
SYNAPSE_SERVER_NAME: matrix.desu.ltd
|
||||
|
@ -11,8 +11,6 @@
|
||||
OPENVPN_USERNAME: "{{ secret_pia_user }}"
|
||||
OPENVPN_PASSWORD: "{{ secret_pia_pass }}"
|
||||
LOCAL_NETWORK: 192.168.0.0/16
|
||||
devices:
|
||||
- /dev/net/tun
|
||||
capabilities:
|
||||
- NET_ADMIN
|
||||
ports:
|
||||
@ -25,9 +23,4 @@
|
||||
- /data/transmission/config:/config
|
||||
- /data/shared/downloads:/data
|
||||
- /data/transmission/watch:/watch
|
||||
labels:
|
||||
traefik.enable: "true"
|
||||
traefik.http.routers.transmission.rule: Host(`transmission.media.desu.ltd`)
|
||||
traefik.http.routers.transmission.entrypoints: web
|
||||
traefik.http.services.transmission.loadbalancer.server.port: "9091"
|
||||
tags: [ docker, transmission ]
|
||||
|
@ -6,7 +6,6 @@
|
||||
append: "{{ adminuser_groups_append }}"
|
||||
groups: "{{ adminuser_groups + adminuser_groups_extra }}"
|
||||
shell: "{{ adminuser_shell }}"
|
||||
tags: [ adminuser ]
|
||||
- name: assure admin user ssh key
|
||||
ansible.builtin.user:
|
||||
name: "{{ adminuser_name }}"
|
||||
@ -14,20 +13,15 @@
|
||||
ssh_key_type: "{{ adminuser_ssh_key_type }}"
|
||||
ssh_key_file: ".ssh/id_{{ adminuser_ssh_key_type }}"
|
||||
when: adminuser_ssh_key
|
||||
tags: [ adminuser ]
|
||||
- name: assure admin user ssh authorized keys
|
||||
authorized_key: user={{ adminuser_name }} key={{ item }}
|
||||
loop: "{{ adminuser_ssh_authorized_keys }}"
|
||||
tags: [ adminuser ]
|
||||
- name: remove admin user ssh keys
|
||||
authorized_key: state=absent user={{ adminuser_name }} key={{ item }}
|
||||
loop: "{{ adminuser_ssh_unauthorized_keys }}"
|
||||
tags: [ adminuser ]
|
||||
- name: assure admin user pass
|
||||
ansible.builtin.user: name={{ adminuser_name }} password={{ adminuser_password }}
|
||||
when: adminuser_password is defined
|
||||
tags: [ adminuser ]
|
||||
- name: assure admin user sudo rule
|
||||
ansible.builtin.lineinfile: path=/etc/sudoers line={{ adminuser_sudo_rule }}
|
||||
when: adminuser_sudo
|
||||
tags: [ adminuser ]
|
||||
|
@ -1 +1 @@
|
||||
Subproject commit 56549b8ac718997c6b5c314636955e46ee5e8cc1
|
||||
Subproject commit 1a332f6788d4ae24b52948850965358790861432
|
4
roles/ansible/tasks/main.yml
Normal file
@ -0,0 +1,4 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
- name: install ansible
pip: name=ansible<5,ansible-lint state=latest
@ -1,18 +1,12 @@
|
||||
# Which backup script to use. Configuration is somewhat unique to each script
|
||||
backup_script: s3backup
|
||||
restore_script: s3restore
|
||||
# When to kick off backups using the systemd timer
|
||||
backup_time: "*-*-* 02:00:00"
|
||||
# What variation should the systemd timer have?
|
||||
# Default value of "5400" is 1h30min in seconds
|
||||
backup_time_randomization: "5400"
|
||||
|
||||
# Should this machine backup?
|
||||
# Disabling this variable templates out the scripts, but not the units
|
||||
backup_restic: yes
|
||||
|
||||
# Should this machine prune?
|
||||
# Be very careful with this -- it's an expensive operation
|
||||
backup_restic_prune: no
|
||||
# How frequently should we prune?
|
||||
backup_restic_prune_time: "*-*-01 12:00:00"
|
||||
# What format should the datestamps in the filenames of any backups be in?
|
||||
# Defaults to YYYY-MM-DD-hhmm
|
||||
# So January 5th, 2021 at 3:41PM would be 2021-01-05-1541
|
||||
backup_dateformat: "%Y-%m-%d-%H%M"
|
||||
|
||||
# S3 configuration for scripts that use it
|
||||
# Which bucket to upload the backup to
|
||||
@ -26,11 +20,13 @@ backup_s3_aws_secret_access_key: REPLACEME
|
||||
# List of files/directories to back up
|
||||
# Note that tar is NOT instructed to recurse through symlinks
|
||||
# If you want it to do that, end the path with a slash!
|
||||
backup_s3backup_list:
|
||||
- "/etc"
|
||||
- "/home/{{ adminuser_name }}"
|
||||
backup_s3backup_list: []
|
||||
backup_s3backup_list_extra: []
|
||||
# List of files/directories to --exclude
|
||||
backup_s3backup_exclude_list:
|
||||
- "/home/{{ adminuser_name }}/Vaults/*"
|
||||
backup_s3backup_exclude_list: []
|
||||
backup_s3backup_exclude_list_extra: []
|
||||
# Arguments to pass to tar
|
||||
# Note that passing f here is probably a bad idea
|
||||
backup_s3backup_tar_args: cz
|
||||
backup_s3backup_tar_args_extra: ""
|
||||
# The backup URL to use for S3 copies
|
||||
|
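For orientation, the paired default/_extra variables above are meant to be layered per host or group rather than replaced wholesale. A hypothetical host_vars override (paths invented for illustration, not part of this repo) might look like:

backup_s3backup_list_extra:
  - "/data"
backup_s3backup_exclude_list_extra:
  - "/data/**/cache"
# Appended verbatim to the tar flags, so "cz" plus "v" becomes "czv"
backup_s3backup_tar_args_extra: "v"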
@ -4,6 +4,3 @@
|
||||
- name: restart backup timer
|
||||
ansible.builtin.systemd: name=backup.timer state=restarted daemon_reload=yes
|
||||
become: yes
|
||||
- name: restart prune timer
|
||||
ansible.builtin.systemd: name=backup-prune.timer state=restarted daemon_reload=yes
|
||||
become: yes
|
||||
|
@ -1,51 +1,63 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
# Install restic if we can
|
||||
- name: install restic
|
||||
- name: template out backup script
|
||||
ansible.builtin.template: src={{ backup_script }}.sh dest=/opt/backup.sh mode=0700 owner=root group=root
|
||||
- name: template out analyze script
|
||||
ansible.builtin.template: src={{ backup_script }}-analyze.sh dest=/opt/analyze.sh mode=0700 owner=root group=root
|
||||
- name: template out restore script
|
||||
ansible.builtin.template: src={{ restore_script }}.sh dest=/opt/restore.sh mode=0700 owner=root group=root
|
||||
- name: configure systemd service
|
||||
ansible.builtin.template: src=backup.service dest=/etc/systemd/system/backup.service mode=0644
|
||||
- name: configure systemd timer
|
||||
ansible.builtin.template: src=backup.timer dest=/etc/systemd/system/backup.timer mode=0644
|
||||
notify: restart backup timer
|
||||
- name: enable timer
|
||||
ansible.builtin.systemd: name=backup.timer state=started enabled=yes daemon_reload=yes
|
||||
- name: deploy kopia
|
||||
block:
|
||||
- name: install restic through apt
|
||||
ansible.builtin.apt: name=restic state=present
|
||||
when: ansible_pkg_mgr == "apt"
|
||||
- name: install restic through rpm-ostree
|
||||
community.general.rpm_ostree_pkg: name=restic state=present
|
||||
when: ansible_os_family == "RedHat" and ansible_pkg_mgr == "atomic_container"
|
||||
tags: [ packages ]
|
||||
# The script
|
||||
- name: template out backup-related files
|
||||
ansible.builtin.template:
|
||||
src: "{{ item.src }}"
|
||||
dest: "/opt/{{ item.dest | default(item.src, true) }}"
|
||||
mode: 0700
|
||||
owner: root
|
||||
group: root
|
||||
with_items:
|
||||
- src: restic-password
|
||||
- src: restic-wrapper.sh
|
||||
dest: restic-wrapper
|
||||
# Backup service/timer definitions
|
||||
- name: set up backups
|
||||
block:
|
||||
- name: template out backup script
|
||||
ansible.builtin.template: src=backup.sh dest=/opt/backup.sh mode=0700 owner=root group=root
|
||||
- name: configure systemd service
|
||||
ansible.builtin.template: src=backup.service dest=/etc/systemd/system/backup.service mode=0644
|
||||
- name: configure systemd timer
|
||||
ansible.builtin.template: src=backup.timer dest=/etc/systemd/system/backup.timer mode=0644
|
||||
notify: restart backup timer
|
||||
- name: enable timer
|
||||
ansible.builtin.systemd: name=backup.timer state=started enabled=yes daemon_reload=yes
|
||||
when: backup_restic
|
||||
# Prune script
|
||||
- name: set up restic prune
|
||||
block:
|
||||
- name: template out prune script
|
||||
ansible.builtin.template: src=backup-prune.sh dest=/opt/backup-prune.sh mode=0700 owner=root group=root
|
||||
- name: configure prune systemd service
|
||||
ansible.builtin.template: src=backup-prune.service dest=/etc/systemd/system/backup-prune.service mode=0644
|
||||
- name: configure prune systemd timer
|
||||
ansible.builtin.template: src=backup-prune.timer dest=/etc/systemd/system/backup-prune.timer mode=0644
|
||||
notify: restart prune timer
|
||||
- name: enable prune timer
|
||||
ansible.builtin.systemd: name=backup-prune.timer state=started enabled=yes daemon_reload=yes
|
||||
when: backup_restic_prune
|
||||
- name: ensure kopia dirs
|
||||
ansible.builtin.file:
|
||||
state: directory
|
||||
owner: root
|
||||
group: root
|
||||
mode: "0750"
|
||||
path: "{{ item }}"
|
||||
with_items:
|
||||
- /data/kopia/config
|
||||
- /data/kopia/cache
|
||||
- /data/kopia/logs
|
||||
- name: template out password file
|
||||
copy:
|
||||
content: "{{ backup_kopia_password }}"
|
||||
owner: root
|
||||
group: root
|
||||
mode: "0600"
|
||||
dest: /data/kopia/config/repository.config.kopia-password
|
||||
- name: template out configuration file
|
||||
template:
|
||||
src: repository.config.j2
|
||||
owner: root
|
||||
group: root
|
||||
mode: "0600"
|
||||
dest: /data/kopia/config/repository.config
|
||||
- name: deploy kopia
|
||||
community.docker.docker_container:
|
||||
name: kopia
|
||||
image: kopia/kopia:latest
|
||||
env:
|
||||
KOPIA_PASSWORD: "{{ backup_kopia_password }}"
|
||||
command:
|
||||
- "repository"
|
||||
- "connect"
|
||||
- "from-config"
|
||||
- "--file"
|
||||
- "/app/config/repository.config"
|
||||
volumes:
|
||||
- /data/kopia/config:/app/config
|
||||
- /data/kopia/cache:/app/cache
|
||||
- /data/kopia/logs:/app/logs
|
||||
# Shared tmp so Kopia can dump restorable backups to the host
|
||||
- /tmp:/tmp:shared
|
||||
# And a RO mount for the host so it can be backed up
|
||||
- /:/host:ro,rslave
|
||||
|
@ -1,18 +0,0 @@
|
||||
# vim:ft=systemd
|
||||
[Unit]
|
||||
Description=Backup prune service
|
||||
After=network-online.target
|
||||
Wants=network-online.target
|
||||
StartLimitInterval=3600
|
||||
StartLimitBurst=2
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
#MemoryMax=512M
|
||||
Environment="GOGC=20"
|
||||
ExecStart=/opt/backup-prune.sh
|
||||
Restart=on-failure
|
||||
RestartSec=5
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
@ -1,11 +0,0 @@
|
||||
#! /bin/sh
|
||||
#
|
||||
# backup-prune.sh
|
||||
# An Ansible-managed script to prune restic backups every now and again
|
||||
#
|
||||
|
||||
set -e
|
||||
|
||||
/opt/restic-wrapper \
|
||||
--verbose \
|
||||
prune
|
@ -1,10 +0,0 @@
|
||||
# vim:ft=systemd
|
||||
[Unit]
|
||||
Description=Backup prune timer
|
||||
|
||||
[Timer]
|
||||
Persistent=true
|
||||
OnCalendar={{ backup_restic_prune_time }}
|
||||
|
||||
[Install]
|
||||
WantedBy=timers.target
|
@ -3,17 +3,11 @@
|
||||
Description=Nightly backup service
|
||||
After=network-online.target
|
||||
Wants=network-online.target
|
||||
StartLimitInterval=600
|
||||
StartLimitBurst=5
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
#MemoryMax=512M
|
||||
Environment="GOGC=20"
|
||||
MemoryMax=256M
|
||||
ExecStart=/opt/backup.sh
|
||||
Restart=on-failure
|
||||
RestartSec=5
|
||||
RestartSteps=10
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
|
@ -1,116 +0,0 @@
|
||||
#! /bin/bash
|
||||
#
|
||||
# backup.sh
|
||||
# Ansible-managed backup script that uses restic to automate machine backups to
|
||||
# an S3 bucket. Intelligently handles a few extra apps, too.
|
||||
#
|
||||
# NOTICE: DO NOT MODIFY THIS FILE
|
||||
# Any changes made will be clobbered by Ansible
|
||||
# Please make any configuration changes in the main repo
|
||||
#
|
||||
|
||||
set -e
|
||||
|
||||
# Directories to backup
|
||||
# Ansible will determine the entries here
|
||||
|
||||
# We use a bash array because it affords us some level of sanitization, enough
|
||||
# to let us back up items whose paths contain spaces
|
||||
declare -a DIRS
|
||||
{% for item in backup_s3backup_list + backup_s3backup_list_extra %}
|
||||
DIRS+=("{{ item }}")
|
||||
{% endfor %}
|
||||
# End directory manual configuration
|
||||
|
||||
# Helper functions
|
||||
backup() {
|
||||
# Takes a file or directory to backup and backs it up
|
||||
[ -z "$*" ] && return 1
|
||||
|
||||
for dir in "$@"; do
|
||||
echo "- $dir"
|
||||
done
|
||||
# First, we remove stale locks. This command will only remove locks that have not been
|
||||
# updated in the last half hour. By default, restic updates them during an ongoing
|
||||
# operation every 5 minutes, so this should be perfectly fine to do.
|
||||
# What I'm not sure of (but should be fine because we auto-restart if need be) is if two
|
||||
# processes doing this concurrently will cause issues. I'd hope not but you never know.
|
||||
# restic-unlock(1)
|
||||
/opt/restic-wrapper \
|
||||
--verbose \
|
||||
unlock
|
||||
# Back up everything in the $DIRS array (which was passed as args)
|
||||
# This results in some level of pollution with regard to what paths are backed up
|
||||
# (especially on ostree systems where we do the etc diff) but that's syntactic and
|
||||
# we can script around it.
|
||||
/opt/restic-wrapper \
|
||||
--verbose \
|
||||
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
|
||||
--exclude="{{ item }}" \
|
||||
{% endfor %}
|
||||
--exclude="/data/**/backup" \
|
||||
--exclude="/data/**/backups" \
|
||||
--exclude="*.bak" \
|
||||
--exclude="*.tmp" \
|
||||
--exclude="*.swp" \
|
||||
backup \
|
||||
"$@"
|
||||
# In addition, we should also prune our backups
|
||||
# https://restic.readthedocs.io/en/stable/060_forget.html
|
||||
# --keep-daily n Keeps daily backups for the last n days
|
||||
# --keep-weekly n Keeps weekly backups for the last n weeks
|
||||
# --keep-monthly n Keeps monthly backups for the last n months
|
||||
# --keep-tag foo Keeps all snapshots tagged with "foo"
|
||||
# --host "$HOSTNAME" Only act on *our* snapshots. We assume other machines are taking
|
||||
# care of their own houses.
|
||||
/opt/restic-wrapper \
|
||||
--verbose \
|
||||
forget \
|
||||
--keep-daily 7 \
|
||||
--keep-weekly 4 \
|
||||
--keep-monthly 6 \
|
||||
--keep-tag noremove \
|
||||
--host "$HOSTNAME"
|
||||
}
|
||||
|
||||
# Dump Postgres DBs, if possible
|
||||
if command -v psql > /dev/null 2>&1; then
|
||||
# Put down a place for us to store backups, if we don't have it already
|
||||
backupdir="/opt/postgres-backups"
|
||||
mkdir -p "$backupdir"
|
||||
# Populate a list of databases
|
||||
declare -a DATABASES
|
||||
while read line; do
|
||||
DATABASES+=("$line")
|
||||
done < <(sudo -u postgres psql -t -A -c "SELECT datname FROM pg_database where datname not in ('template0', 'template1', 'postgres');" 2>/dev/null)
|
||||
|
||||
# pgdump all DBs, compress them, and pipe straight up to S3
|
||||
echo "Commencing backup on the following databases:"
|
||||
for dir in "${DATABASES[@]}"; do
|
||||
echo "- $dir"
|
||||
done
|
||||
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
|
||||
for db in "${DATABASES[@]}"; do
|
||||
echo "Backing up $db"
|
||||
path="$backupdir/$db.pgsql.gz"
|
||||
sudo -u postgres pg_dump "$db" \
|
||||
| gzip -v9 \
|
||||
> "$path"
|
||||
DIRS+=("$path")
|
||||
done
|
||||
fi
|
||||
|
||||
# Tar up all items in the backup list, recursively, and pipe them straight
|
||||
# up to S3
|
||||
if [ -n "${DIRS[*]}" ]; then
|
||||
echo "Commencing backup on the following items:"
|
||||
for dir in "${DIRS[@]}"; do
|
||||
echo "- $dir"
|
||||
done
|
||||
echo "Will ignore the following items:"
|
||||
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
|
||||
echo "- {{ item }}"
|
||||
{% endfor %}
|
||||
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
|
||||
backup "${DIRS[@]}"
|
||||
fi
|
@ -5,7 +5,6 @@ Description=Nightly backup timer
|
||||
[Timer]
|
||||
Persistent=true
|
||||
OnCalendar={{ backup_time }}
|
||||
RandomizedDelaySec={{ backup_time_randomization }}
|
||||
|
||||
[Install]
|
||||
WantedBy=timers.target
|
||||
|
21
roles/backup/templates/repository.config.j2
Normal file
@ -0,0 +1,21 @@
|
||||
{
|
||||
"storage": {
|
||||
"type": "b2",
|
||||
"config": {
|
||||
"bucket": "desultd-kopia",
|
||||
"keyID": "{{ backup_kopia_access_key_id }}",
|
||||
"key": "{{ backup_kopia_secret_access_key }}"
|
||||
}
|
||||
},
|
||||
"caching": {
|
||||
"cacheDirectory": "/app/cache/cachedir",
|
||||
"maxCacheSize": 5242880000,
|
||||
"maxMetadataCacheSize": 5242880000,
|
||||
"maxListCacheDuration": 30
|
||||
},
|
||||
"hostname": "{{ inventory_hostname }}",
|
||||
"username": "salt",
|
||||
"description": "Desu LTD Backups",
|
||||
"enableActions": false,
|
||||
"formatBlobCacheDuration": 900000000000
|
||||
}
|
@ -1 +0,0 @@
{{ backup_restic_password }}
@ -1,11 +0,0 @@
|
||||
#! /bin/sh
|
||||
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
|
||||
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
|
||||
export RESTIC_CACHE_DIR="/var/cache/restic"
|
||||
mkdir -p "$RESTIC_CACHE_DIR"
|
||||
chown root: "$RESTIC_CACHE_DIR"
|
||||
chmod 0700 "$RESTIC_CACHE_DIR"
|
||||
exec nice -n 10 restic \
|
||||
-r "s3:{{ backup_s3_aws_endpoint_url }}/{{ backup_s3_bucket }}/restic" \
|
||||
-p /opt/restic-password \
|
||||
"$@"
|
17
roles/backup/templates/s3backup-analyze.sh
Normal file
@ -0,0 +1,17 @@
|
||||
#! /bin/bash
|
||||
#
|
||||
# s3backup-analyze.sh
|
||||
# A companion script to s3backup to analyze disk usage for backups
|
||||
|
||||
# NOTICE: DO NOT MODIFY THIS FILE
|
||||
# Any changes made will be clobbered by Ansible
|
||||
# Please make any configuration changes in the main repo
|
||||
|
||||
exec ncdu \
|
||||
{% for item in backup_s3backup_list + backup_s3backup_list_extra %}
|
||||
"{{ item }}" \
|
||||
{% endfor %}
|
||||
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
|
||||
--exclude "{{ item }}" \
|
||||
{% endfor %}
|
||||
-r
|
115
roles/backup/templates/s3backup.sh
Normal file
@ -0,0 +1,115 @@
|
||||
#! /bin/bash
|
||||
#
|
||||
# s3backup.sh
|
||||
# General-purpose, Ansible-managed backup script to push directories, DBs, and
|
||||
# more up to an S3 bucket
|
||||
#
|
||||
# NOTICE: THIS FILE CONTAINS SECRETS
|
||||
# This file may contain the following secrets depending on configuration:
|
||||
# * An AWS access key
|
||||
# * An AWS session token
|
||||
# These are NOT things you want arbitrary readers to access! Ansible will
|
||||
# attempt to ensure this file has 0700 permissions, but that won't stop you
|
||||
# from changing that yourself
|
||||
# DO NOT ALLOW THIS FILE TO BE READ BY NON-ROOT USERS
|
||||
|
||||
# NOTICE: DO NOT MODIFY THIS FILE
|
||||
# Any changes made will be clobbered by Ansible
|
||||
# Please make any configuration changes in the main repo
|
||||
|
||||
set -e
|
||||
|
||||
# AWS S3 configuration
|
||||
# NOTE: THIS IS SECRET INFORMATION
|
||||
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
|
||||
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
|
||||
|
||||
# Directories to backup
|
||||
# Ansible will determine the entries here
|
||||
|
||||
# We use a bash array because it affords us some level of sanitization, enough
|
||||
# to let us back up items whose paths contain spaces
|
||||
declare -a DIRS
|
||||
{% for item in backup_s3backup_list + backup_s3backup_list_extra %}
|
||||
DIRS+=("{{ item }}")
|
||||
{% endfor %}
|
||||
# End directory manual configuration
|
||||
|
||||
# If we have ostree, add diff'd configs to the list, too
|
||||
if command -v ostree > /dev/null 2>&1; then
|
||||
for file in $(
|
||||
ostree admin config-diff 2>/dev/null | \
|
||||
grep -oP '^[A|M]\s*\K.*'
|
||||
); do
|
||||
DIRS+=("/etc/$file")
|
||||
done
|
||||
fi
|
||||
|
||||
# Helper functions
|
||||
backup() {
|
||||
# Takes a file or directory to backup and backs it up
|
||||
[ -z "$1" ] && return 1
|
||||
|
||||
dir="$1"
|
||||
echo "- $dir"
|
||||
|
||||
nice -n 10 tar {{ backup_s3backup_tar_args }}{{ backup_s3backup_tar_args_extra }} \
|
||||
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
|
||||
--exclude "{{ item }}" \
|
||||
{% endfor %}
|
||||
"$dir" \
|
||||
| aws s3 cp --expected-size 274877906944 - \
|
||||
{% if backup_s3_aws_endpoint_url is defined %}
|
||||
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
|
||||
{% endif %}
|
||||
"s3://{{ backup_s3_bucket }}/$HOSTNAME/$dir/$(date "+{{ backup_dateformat }}").tar.gz"
|
||||
}
|
||||
|
||||
# Tar up all items in the backup list, recursively, and pipe them straight
|
||||
# up to S3
|
||||
if [ -n "${DIRS[*]}" ]; then
|
||||
echo "Commencing backup on the following items:"
|
||||
for dir in "${DIRS[@]}"; do
|
||||
echo "- $dir"
|
||||
done
|
||||
echo "Will ignore the following items:"
|
||||
{% for item in backup_s3backup_exclude_list + backup_s3backup_exclude_list_extra %}
|
||||
echo "- {{ item }}"
|
||||
{% endfor %}
|
||||
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
|
||||
for dir in "${DIRS[@]}"; do
|
||||
if [ "$dir" == "/data" ]; then
|
||||
for datadir in "$dir"/*; do
|
||||
[ -e "$datadir" ] && backup "$datadir"
|
||||
done
|
||||
else
|
||||
backup "$dir"
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
# Dump Postgres DBs, if possible
|
||||
if command -v psql > /dev/null 2>&1; then
|
||||
# Populate a list of databases
|
||||
declare -a DATABASES
|
||||
while read line; do
|
||||
DATABASES+=("$line")
|
||||
done < <(sudo -u postgres psql -t -A -c "SELECT datname FROM pg_database where datname not in ('template0', 'template1', 'postgres');" 2>/dev/null)
|
||||
|
||||
# pgdump all DBs, compress them, and pipe straight up to S3
|
||||
echo "Commencing backup on the following databases:"
|
||||
for dir in "${DATABASES[@]}"; do
|
||||
echo "- $dir"
|
||||
done
|
||||
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
|
||||
for db in "${DATABASES[@]}"; do
|
||||
echo "Backing up $db"
|
||||
sudo -u postgres pg_dump "$db" \
|
||||
| gzip -v9 \
|
||||
| aws s3 cp - \
|
||||
{% if backup_s3_aws_endpoint_url is defined %}
|
||||
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
|
||||
{% endif %}
|
||||
"s3://{{ backup_s3_bucket }}/$HOSTNAME/pgdump/$db/$(date "+{{ backup_dateformat }}").pgsql.gz"
|
||||
done
|
||||
fi
|
47
roles/backup/templates/s3pgdump.sh
Normal file
@ -0,0 +1,47 @@
|
||||
#! /bin/bash
|
||||
#
|
||||
# s3pgdump.sh
|
||||
# General-purpose, Ansible-managed backup script to dump PostgreSQL DBs to
|
||||
# an S3 bucket
|
||||
#
|
||||
|
||||
# NOTICE: THIS FILE CONTAINS SECRETS
|
||||
# This file may contain the following secrets depending on configuration:
|
||||
# * An AWS access key
|
||||
# * An AWS session token
|
||||
# These are NOT things you want arbitrary readers to access! Ansible will
|
||||
# attempt to ensure this file has 0700 permissions, but that won't stop you
|
||||
# from changing that yourself
|
||||
# DO NOT ALLOW THIS FILE TO BE READ BY NON-ROOT USERS
|
||||
|
||||
# NOTICE: DO NOT MODIFY THIS FILE
|
||||
# Any changes made will be clobbered by Ansible
|
||||
# Please make any configuration changes in the main repo
|
||||
|
||||
set -e
|
||||
|
||||
# AWS S3 configuration
|
||||
# NOTE: THIS IS SECRET INFORMATION
|
||||
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
|
||||
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
|
||||
|
||||
# Populate a list of databases
|
||||
declare -a DATABASES
|
||||
while read line; do
|
||||
DATABASES+=("$line")
|
||||
done < <(sudo -u postgres psql -t -A -c "SELECT datname FROM pg_database where datname not in ('template0', 'template1', 'postgres');" 2>/dev/null)
|
||||
|
||||
# pgdump all DBs, compress them, and pipe straight up to S3
|
||||
echo "Commencing backup on the following databases:"
|
||||
for dir in "${DATABASES[@]}"; do
|
||||
echo "- $dir"
|
||||
done
|
||||
echo "Will upload resultant backups to {{ backup_s3_bucket }}"
|
||||
for db in "${DATABASES[@]}"; do
|
||||
echo "Backing up $db"
|
||||
sudo -u postgres pg_dump "$db" \
|
||||
| gzip -v9 \
|
||||
| aws s3 cp - \
|
||||
"s3://{{ backup_s3_bucket }}/{{ inventory_hostname }}/$db-$(date "+{{ backup_dateformat }}").pgsql.gz"
|
||||
done
|
||||
|
72
roles/backup/templates/s3restore.sh
Normal file
@ -0,0 +1,72 @@
|
||||
#! /bin/bash
|
||||
#
|
||||
# s3restore.sh
|
||||
# Companion script to s3backup.sh, this script obtains a listing of recent
|
||||
# backups and offers the user a choice to restore from.
|
||||
#
|
||||
# This script offers no automation; it is intended for use by hand.
|
||||
#
|
||||
# NOTICE: THIS FILE CONTAINS SECRETS
|
||||
# This file may contain the following secrets depending on configuration:
|
||||
# * An AWS access key
|
||||
# * An AWS session token
|
||||
# These are NOT things you want arbitrary readers to access! Ansible will
|
||||
# attempt to ensure this file has 0700 permissions, but that won't stop you
|
||||
# from changing that yourself
|
||||
# DO NOT ALLOW THIS FILE TO BE READ BY NON-ROOT USERS
|
||||
|
||||
# NOTICE: DO NOT MODIFY THIS FILE
|
||||
# Any changes made will be clobbered by Ansible
|
||||
# Please make any configuration changes in the main repo
|
||||
|
||||
set -e
|
||||
url="s3://{{ backup_s3_bucket}}/$HOSTNAME/"
|
||||
|
||||
# AWS S3 configuration
|
||||
# NOTE: THIS IS SECRET INFORMATION
|
||||
export AWS_ACCESS_KEY_ID="{{ backup_s3_aws_access_key_id }}"
|
||||
export AWS_SECRET_ACCESS_KEY="{{ backup_s3_aws_secret_access_key }}"
|
||||
|
||||
# Obtain a list of possible restorable backups for this host
declare -a BACKUPS
printf "Querying S3 for restorable backups (\e[35m$url\e[0m)...\n"
|
||||
while read line; do
|
||||
filename="$(echo "$line" | awk '{print $NF}')"
|
||||
BACKUPS+=("$filename")
|
||||
done < <(aws s3 \
|
||||
{% if backup_s3_aws_endpoint_url is defined %}
|
||||
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
|
||||
{% endif %}
|
||||
ls "$url")
|
||||
|
||||
# Present the user with some options
|
||||
printf "Possible restorable backups:\n"
|
||||
printf "\e[37m\t%s\t%s\n\e[0m" "Index" "Filename"
|
||||
for index in "${!BACKUPS[@]}"; do
|
||||
printf "\t\e[32m%s\e[0m\t\e[34m%s\e[0m\n" "$index" "${BACKUPS[$index]}"
|
||||
done
|
||||
|
||||
# Ensure we can write to pwd
|
||||
if ! [ -w "$PWD" ]; then
|
||||
printf "To restore a backup, please navigate to a writeable directory\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Query for a backup to pull down
|
||||
printf "Please select a backup by \e[32mindex\e[0m to pull down\n"
|
||||
printf "It will be copied into the current directory as a tarball\n"
|
||||
read -p "?" restoreindex
|
||||
|
||||
# Sanity check user input
|
||||
if [ -z "${BACKUPS[$restoreindex]}" ]; then
|
||||
printf "Invalid selection, aborting: $restoreindex\n"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
# Copy the thing
|
||||
printf "Pulling backup...\n"
|
||||
aws s3 \
|
||||
{% if backup_s3_aws_endpoint_url is defined %}
|
||||
--endpoint-url="{{ backup_s3_aws_endpoint_url }}" \
|
||||
{% endif %}
|
||||
cp "$url${BACKUPS[$restoreindex]}" ./
|
@ -11,6 +11,7 @@
|
||||
- apt-file
|
||||
- aptitude
|
||||
- at
|
||||
- awscli
|
||||
- htop
|
||||
- jq
|
||||
- ncdu
|
||||
@ -18,6 +19,8 @@
|
||||
- nfs-common
|
||||
- openssh-server
|
||||
- pwgen
|
||||
- python-is-python3 # God damn you Nextcloud role
|
||||
- python2 # Needed for some legacy crap
|
||||
- python3-apt
|
||||
- python3-boto
|
||||
- python3-boto3
|
||||
@ -41,7 +44,10 @@
|
||||
- name: configure rpm-ostree packages
|
||||
community.general.rpm_ostree_pkg:
|
||||
name:
|
||||
- awscli
|
||||
- htop
|
||||
- ibm-plex-fonts-all
|
||||
- ncdu
|
||||
- screen
|
||||
- vim
|
||||
when: ansible_os_family == "RedHat" and ansible_pkg_mgr == "atomic_container"
|
||||
|
@ -13,14 +13,6 @@ alias ls="ls $lsarguments"
|
||||
alias ll="ls -Al --file-type $lsarguments"
|
||||
unset lsarguments
|
||||
|
||||
# Extra shell aliases for things
|
||||
resticwrapper="/opt/restic-wrapper"
|
||||
if [ -e "$resticwrapper" ]; then
|
||||
alias r="$resticwrapper"
|
||||
alias r-snapshots="$resticwrapper snapshots -g host -c"
|
||||
alias r-prune="$resticwrapper prune"
|
||||
fi
|
||||
|
||||
# Set some bash-specific stuff
|
||||
[ "${BASH-}" ] && [ "$BASH" != "/bin/sh" ] || return
|
||||
# Like a fancy prompt
|
||||
|
@ -148,60 +148,22 @@ desktop_apt_packages_remove_extra: []
|
||||
desktop_apt_debs: []
|
||||
desktop_apt_debs_extra: []
|
||||
|
||||
desktop_ostree_layered_packages:
|
||||
- akmod-v4l2loopback # Used by OBS for proper virtual webcam
|
||||
- cava # Sadly does not enable functionality in waybar :<
|
||||
- cryfs # Used for vaults
|
||||
- foot # Wayblue ships Kitty but I don't like the dev direction
|
||||
- htop # For some reason not the default
|
||||
- ibm-plex-fonts-all
|
||||
- iotop # Requires uncontainerized access to the host
|
||||
- libvirt
|
||||
- ncdu
|
||||
- NetworkManager-tui
|
||||
- obs-studio # Has to be installed native for virtual webcam
|
||||
- restic # Also called in via the backup role, but doing this here saves a deployment
|
||||
- vim # It's just way too much hassle that this isn't installed by default
|
||||
- virt-manager # VMs, baby
|
||||
- ydotool # Must be layered in and configured since it's a hw emulator thing
|
||||
- zerotier-one # Ideally layered in since it's a network daemon
|
||||
desktop_ostree_layered_packages_extra: []
|
||||
desktop_ostree_removed_packages:
|
||||
- firefox
|
||||
- firefox-langpacks
|
||||
desktop_ostree_removed_packages_extra: []
|
||||
desktop_flatpak_remotes:
|
||||
- name: flathub
|
||||
url: "https://dl.flathub.org/repo/flathub.flatpakrepo"
|
||||
- name: flathub-beta
|
||||
url: "https://flathub.org/beta-repo/flathub-beta.flatpakrepo"
|
||||
desktop_flatpak_remotes_extra: []
|
||||
|
||||
desktop_flatpak_packages:
|
||||
- remote: flathub
|
||||
packages:
|
||||
- com.bambulab.BambuStudio
|
||||
- com.github.Matoking.protontricks
|
||||
- com.github.tchx84.Flatseal
|
||||
- com.nextcloud.desktopclient.nextcloud
|
||||
- com.spotify.Client
|
||||
- com.valvesoftware.Steam
|
||||
- com.visualstudio.code
|
||||
- com.vscodium.codium
|
||||
- dev.vencord.Vesktop
|
||||
- im.riot.Riot
|
||||
- io.freetubeapp.FreeTube
|
||||
- io.github.Cockatrice.cockatrice
|
||||
- io.github.hydrusnetwork.hydrus
|
||||
- io.mpv.Mpv
|
||||
- md.obsidian.Obsidian
|
||||
- net.lutris.Lutris
|
||||
- com.discordapp.Discord
|
||||
- com.obsproject.Studio
|
||||
- net.minetest.Minetest
|
||||
- org.DolphinEmu.dolphin-emu
|
||||
- org.freecad.FreeCAD
|
||||
- org.gimp.GIMP
|
||||
- org.gnucash.GnuCash
|
||||
- org.keepassxc.KeePassXC
|
||||
- org.libreoffice.LibreOffice
|
||||
- org.mozilla.firefox
|
||||
- org.mozilla.Thunderbird
|
||||
- org.openscad.OpenSCAD
|
||||
- org.qbittorrent.qBittorrent
|
||||
# - remote: unmojang
|
||||
# packages:
|
||||
# - org.unmojang.FjordLauncher
|
||||
- remote: flathub-beta
|
||||
packages:
|
||||
- net.lutris.Lutris
|
||||
desktop_flatpak_packages_extra: []
|
||||
|
@ -1,5 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
dependencies:
|
||||
- role: flatpak
|
@ -27,16 +27,14 @@
|
||||
ansible.builtin.apt: deb="{{ item }}"
|
||||
loop: "{{ desktop_apt_debs + desktop_apt_debs_extra }}"
|
||||
when: ansible_pkg_mgr == "apt"
|
||||
- name: configure ostree
|
||||
block:
|
||||
- name: configure layered packages for ostree
|
||||
community.general.rpm_ostree_pkg: name="{{ desktop_ostree_layered_packages + desktop_ostree_layered_packages_extra }}"
|
||||
- name: configure removed base packages for ostree
|
||||
community.general.rpm_ostree_pkg: name="{{ desktop_ostree_removed_packages + desktop_ostree_removed_packages_extra }}" state=absent
|
||||
when: ansible_os_family == "RedHat" and ansible_pkg_mgr == "atomic_container"
|
||||
- name: configure pip3 packages
|
||||
ansible.builtin.pip: executable=/usr/bin/pip3 state=latest name="{{ desktop_pip3_packages + desktop_pip3_packages_extra }}"
|
||||
when: ansible_pkg_mgr == "apt"
|
||||
- name: configure installed flatpaks
|
||||
flatpak: name="{{ item.packages }}" state=present remote="{{ item.remote | default('flathub', true) }}"
|
||||
with_items: "{{ desktop_flatpak_packages + desktop_flatpak_packages_extra }}"
|
||||
when: ansible_os_family != "Gentoo"
|
||||
- name: configure flatpak
|
||||
block:
|
||||
- name: configure flatpak remotes
|
||||
flatpak_remote: name="{{ item.name }}" state=present flatpakrepo_url="{{ item.url }}"
|
||||
with_items: "{{ desktop_flatpak_remotes + desktop_flatpak_remotes_extra }}"
|
||||
- name: configure installed flatpaks
|
||||
flatpak: name="{{ item.packages }}" state=present remote="{{ item.remote | default('flathub', true) }}"
|
||||
with_items: "{{ desktop_flatpak_packages + desktop_flatpak_packages_extra }}"
|
||||
|
41
roles/docker-tmodloader14/defaults/main.yml
Normal file
@ -0,0 +1,41 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
tmodloader_name: generic
|
||||
|
||||
# Container settings
|
||||
tmodloader_uid: 1521
|
||||
tmodloader_gid: 1521
|
||||
tmodloader_state: started
|
||||
tmodloader_image: rehashedsalt/tmodloader-docker:bleeding
|
||||
tmodloader_restart_policy: unless-stopped
|
||||
tmodloader_timezone: "America/Chicago"
|
||||
# Container network settings
|
||||
tmodloader_external_port: "7777"
|
||||
tmodloader_data_prefix: "/data/terraria/{{ tmodloader_name }}"
|
||||
|
||||
# Server configuration
|
||||
# We have two variables here; things you might not want to change and things
|
||||
# that you probably will
|
||||
tmodloader_config:
|
||||
autocreate: "3"
|
||||
difficulty: "1"
|
||||
secure: "0"
|
||||
tmodloader_config_extra:
|
||||
maxplayers: "8"
|
||||
motd: "Deployed via Ansible edition"
|
||||
password: "dicks"
|
||||
# Server configuration specific to this Ansible role
|
||||
# DO NOT CHANGE
|
||||
tmodloader_config_internal:
|
||||
port: "7777"
|
||||
world: "/terraria/ModLoader/Worlds/World.wld"
|
||||
worldpath: "/terraria/ModLoader/Worlds"
|
||||
# A list of mods to acquire
|
||||
# The default server of mirror.sgkoi.dev is the official tModLoader mod browser
|
||||
# mirror
|
||||
tmodloader_mod_server: "https://mirror.sgkoi.dev"
|
||||
# tmodloader_mods:
|
||||
# - "CalamityMod"
|
||||
# - "RecipeBrowser"
|
||||
# - "BossChecklist"
|
||||
tmodloader_mods: []
|
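As a sketch of how the three dictionaries above come together: the lineinfile task in this role's tasks/main.yml writes one key=value line per entry into config.txt, with tmodloader_config_internal taking precedence on any collision. With the defaults as shipped and no overrides, the rendered file would look roughly like:

autocreate=3
difficulty=1
secure=0
maxplayers=8
motd=Deployed via Ansible edition
password=dicks
port=7777
world=/terraria/ModLoader/Worlds/World.wld
worldpath=/terraria/ModLoader/Worlds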
7
roles/docker-tmodloader14/handlers/main.yml
Normal file
@ -0,0 +1,7 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
- name: restart tmodloader {{ tmodloader_name }}
docker_container:
name: "tmodloader-{{ tmodloader_name }}"
state: started
restart: yes
76
roles/docker-tmodloader14/tasks/main.yml
Normal file
@ -0,0 +1,76 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
- name: assure tmodloader {{ tmodloader_name }} directory structure
|
||||
ansible.builtin.file:
|
||||
state: directory
|
||||
owner: "{{ tmodloader_uid }}"
|
||||
group: "{{ tmodloader_gid }}"
|
||||
mode: "0750"
|
||||
path: "{{ item }}"
|
||||
# We recurse here since these directories and all of their contents
|
||||
# should be read-write by the container without exception.
|
||||
recurse: yes
|
||||
with_items:
|
||||
- "{{ tmodloader_data_prefix }}/backups"
|
||||
- "{{ tmodloader_data_prefix }}/data"
|
||||
- "{{ tmodloader_data_prefix }}/data/ModLoader"
|
||||
- "{{ tmodloader_data_prefix }}/data/ModLoader/Mods"
|
||||
- "{{ tmodloader_data_prefix }}/data/ModLoader/Worlds"
|
||||
- "{{ tmodloader_data_prefix }}/logs"
|
||||
- name: assure mods
|
||||
ansible.builtin.shell:
|
||||
cmd: "curl -L \"{{ tmodloader_mod_server }}\" -o \"{{ item }}.tmod\" && chown \"{{ tmodloader_uid }}:{{ tmodloader_gid }}\" \"{{ item }}.tmod\""
|
||||
chdir: "{{ tmodloader_data_prefix }}/data/ModLoader/Mods"
|
||||
creates: "{{ tmodloader_data_prefix }}/data/ModLoader/Mods/{{ item }}.tmod"
|
||||
with_list: "{{ tmodloader_mods }}"
|
||||
when: tmodloader_mods
|
||||
notify: "restart tmodloader {{ tmodloader_name }}"
|
||||
- name: enable mods
|
||||
ansible.builtin.template:
|
||||
src: enabled.json
|
||||
dest: "{{ tmodloader_data_prefix }}/data/ModLoader/Mods/enabled.json"
|
||||
owner: "{{ tmodloader_uid }}"
|
||||
group: "{{ tmodloader_gid }}"
|
||||
mode: "0750"
|
||||
when: tmodloader_mods
|
||||
notify: "restart tmodloader {{ tmodloader_name }}"
|
||||
- name: assure tmodloader {{ tmodloader_name }} files
|
||||
ansible.builtin.file:
|
||||
state: touch
|
||||
owner: "{{ tmodloader_uid }}"
|
||||
group: "{{ tmodloader_gid }}"
|
||||
mode: "0750"
|
||||
path: "{{ item }}"
|
||||
with_items:
|
||||
- "{{ tmodloader_data_prefix }}/config.txt"
|
||||
- name: assure {{ tmodloader_name }} configs
|
||||
ansible.builtin.lineinfile:
|
||||
state: present
|
||||
regexp: "^{{ item.key }}"
|
||||
line: "{{ item.key }}={{ item.value }}"
|
||||
path: "{{ tmodloader_data_prefix }}/config.txt"
|
||||
with_dict: "{{ tmodloader_config | combine(tmodloader_config_extra) | combine(tmodloader_config_internal) }}"
|
||||
notify: "restart tmodloader {{ tmodloader_name }}"
|
||||
- name: assure {{ tmodloader_name }} backup cronjob
|
||||
ansible.builtin.cron:
|
||||
user: root
|
||||
name: "terraria-{{ tmodloader_name }}"
|
||||
minute: "*/30"
|
||||
job: "tar czvf \"{{ tmodloader_data_prefix }}/backups/world-$(date +%Y-%m-%d-%H%M).tgz\" \"{{ tmodloader_data_prefix }}/data/ModLoader/Worlds\" \"{{ tmodloader_data_prefix }}/data/tModLoader/Worlds\""
|
||||
- name: assure tmodloader {{ tmodloader_name }} container
|
||||
docker_container:
|
||||
name: "tmodloader-{{ tmodloader_name }}"
|
||||
state: started
|
||||
image: "{{ tmodloader_image }}"
|
||||
restart_policy: "{{ tmodloader_restart_policy }}"
|
||||
pull: yes
|
||||
user: "{{ tmodloader_uid }}:{{ tmodloader_gid }}"
|
||||
env:
|
||||
TZ: "{{ tmodloader_timezone }}"
|
||||
ports:
|
||||
- "{{ tmodloader_external_port }}:7777"
|
||||
volumes:
|
||||
- "{{ tmodloader_data_prefix }}/data:/terraria"
|
||||
- "{{ tmodloader_data_prefix }}/config.txt:/terraria/config.txt"
|
||||
- "{{ tmodloader_data_prefix }}/logs:/terraria-server/tModLoader-Logs"
|
6
roles/docker-tmodloader14/templates/enabled.json
Normal file
@ -0,0 +1,6 @@
[
{% for item in tmodloader_mods[1:] %}
"{{ item }}",
{% endfor %}
"{{ tmodloader_mods[0] }}"
]
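For example, assuming tmodloader_mods were set to the trio from the commented defaults (CalamityMod, RecipeBrowser, BossChecklist), the template above would render to a plain JSON array; the first list entry is emitted last so the output carries no trailing comma:

[
"RecipeBrowser",
"BossChecklist",
"CalamityMod"
]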
@ -1,7 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
---
|
||||
flatpak_remotes:
|
||||
- name: flathub
|
||||
state: present
|
||||
url: "https://dl.flathub.org/repo/flathub.flatpakrepo"
|
||||
flatpak_remotes_extra: []
|
@ -1,17 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
- name: install flatpak on apt distros
|
||||
when: ansible_pkg_mgr == "apt"
|
||||
block:
|
||||
- name: install flatpak packages
|
||||
ansible.builtin.apt:
|
||||
state: present
|
||||
pkg:
|
||||
- flatpak
|
||||
- name: configure flatpak remotes
|
||||
with_items: "{{ flatpak_remotes + flatpak_remotes_extra }}"
|
||||
community.general.flatpak_remote:
|
||||
name: "{{ item.name }}"
|
||||
state: "{{ item.state }}"
|
||||
flatpakrepo_url: "{{ item.url }}"
|
@ -1,28 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
# What is the name of the server? This should be unique per instance
|
||||
terraria_server_name: "generic"
|
||||
# Remove this Terraria server instead of provisioning it?
|
||||
terraria_server_remove: no
|
||||
|
||||
# What mods should be enabled?
|
||||
terraria_mods: []
|
||||
|
||||
# Basic server configuration
|
||||
terraria_shutdown_message: "Server is going down NOW!"
|
||||
terraria_motd: "Literally playing Minecraft"
|
||||
terraria_password: "dicks"
|
||||
terraria_port: "7777"
|
||||
|
||||
terraria_world_name: "World"
|
||||
# Leaving this value blank rolls one for us
|
||||
terraria_world_seed: ""
|
||||
# 1 Small
|
||||
# 2 Medium
|
||||
# 3 Large
|
||||
terraria_world_size: "3"
|
||||
# 0 Normal
|
||||
# 1 Expert
|
||||
# 2 Master
|
||||
# 3 Journey
|
||||
terraria_world_difficulty: "1"
|
@ -1,58 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
#
|
||||
# Docs available here:
|
||||
# https://github.com/JACOBSMILE/tmodloader1.4
|
||||
#
|
||||
# If you need to run a command in this container:
|
||||
# docker exec tmodloader inject "say Hello World!"
|
||||
#
|
||||
---
|
||||
- name: set backups tmodloader - {{ terraria_server_name }}
|
||||
vars:
|
||||
backup_dirs:
|
||||
- "/data/tmodloader/{{ terraria_server_name }}/data/tModLoader/Worlds"
|
||||
backup_dest: "/data/tmodloader/{{ terraria_server_name }}/backups"
|
||||
ansible.builtin.cron:
|
||||
user: root
|
||||
name: "terraria-{{ terraria_server_name }}-backup"
|
||||
state: "{{ 'absent' if terraria_server_remove else 'present' }}"
|
||||
minute: "*/15"
|
||||
job: "tar czvf \"{{ backup_dest }}/world-$(date +\\%Y-\\%m-\\%d-\\%H\\%M).tgz\" {{ backup_dirs | join(' ') }} && find {{ backup_dest }}/ -type f -iname \\*.tgz -mtime +1 -print -delete"
|
||||
tags: [ docker, tmodloader, cron, backup, tar ]
|
||||
- name: assure backups dir tmodloader - {{ terraria_server_name }}
|
||||
ansible.builtin.file:
|
||||
path: "/data/tmodloader/{{ terraria_server_name }}/backups"
|
||||
state: directory
|
||||
owner: root
|
||||
group: root
|
||||
mode: "0700"
|
||||
tags: [ docker, tmodloader, file, directory, backup ]
|
||||
- name: docker deploy tmodloader - {{ terraria_server_name }}
|
||||
community.general.docker_container:
|
||||
name: tmodloader-{{ terraria_server_name }}
|
||||
state: "{{ 'absent' if terraria_server_remove else 'started' }}"
|
||||
image: jacobsmile/tmodloader1.4:latest
|
||||
env:
|
||||
TMOD_AUTODOWNLOAD: "{{ terraria_mods | sort() | join(',') }}"
|
||||
TMOD_ENABLEDMODS: "{{ terraria_mods | sort() | join(',') }}"
|
||||
TMOD_SHUTDOWN_MESSAGE: "{{ terraria_shutdown_message }}"
|
||||
TMOD_MOTD: "{{ terraria_motd }}"
|
||||
TMOD_PASS: "{{ terraria_password }}"
|
||||
TMOD_WORLDNAME: "{{ terraria_world_name }}"
|
||||
TMOD_WORLDSEED: "{{ terraria_world_seed }}"
|
||||
TMOD_WORLDSIZE: "{{ terraria_world_size }}"
|
||||
TMOD_DIFFICULTY: "{{ terraria_world_difficulty }}"
|
||||
TMOD_PORT: "7777"
|
||||
# In theory, this allows you to change how much data the server sends
|
||||
# This is in Hz. Crank it lower to throttle it at the cost of NPC jitteriness
|
||||
#TMOD_NPCSTREAM: "60"
|
||||
ports:
|
||||
- "{{ terraria_port }}:7777/tcp"
|
||||
- "{{ terraria_port }}:7777/udp"
|
||||
volumes:
|
||||
- "/data/tmodloader/{{ terraria_server_name }}/data:/data"
|
||||
- "/data/tmodloader/{{ terraria_server_name }}/logs:/terraria-server/tModLoader-Logs"
|
||||
- "/data/tmodloader/{{ terraria_server_name }}/dotnet:/terraria-server/dotnet"
|
||||
tags: [ docker, tmodloader ]
|
||||
|
@ -1,37 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
|
||||
# Core container configuration
|
||||
ingress_container_image: traefik:latest
|
||||
ingress_container_name: ingress
|
||||
|
||||
# Core service configuration
|
||||
ingress_container_tls: no
|
||||
ingress_container_dashboard: no
|
||||
|
||||
# Secondary container configuration
|
||||
ingress_container_ports:
|
||||
- 80:80
|
||||
- 443:443
|
||||
ingress_container_ports_dashboard:
|
||||
- 8080:8080
|
||||
ingress_container_timezone: America/Chicago
|
||||
ingress_container_docker_socket_location: "/var/run/docker.sock"
|
||||
|
||||
# Command args
|
||||
ingress_command_args:
|
||||
- "--api.dashboard=true"
|
||||
- "--providers.docker"
|
||||
- "--providers.docker.exposedbydefault=false"
|
||||
- "--entrypoints.web.address=:80"
|
||||
ingress_command_args_tls:
|
||||
- "--entrypoints.web.address=:443"
|
||||
- "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
|
||||
- "--certificatesresolvers.letsencrypt.acme.email=rehashedsalt@cock.li"
|
||||
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
|
||||
ingress_command_args_extra: []
|
||||
|
||||
# Network configuration
|
||||
ingress_container_networks:
|
||||
- name: web
|
||||
aliases: [ "ingress" ]
|
@ -1,16 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
- name: assure traefik container
|
||||
docker_container:
|
||||
name: "{{ ingress_container_name }}"
|
||||
image: "{{ ingress_container_image }}"
|
||||
restart_policy: unless-stopped
|
||||
command: "{{ ingress_command_args + ingress_command_args_tls + ingress_command_args_extra if ingress_container_tls else ingress_command_args + ingress_command_args_extra }}"
|
||||
env:
|
||||
TZ: "{{ ingress_container_timezone }}"
|
||||
networks: "{{ ingress_container_networks }}"
|
||||
ports: "{{ ingress_container_ports + ingress_container_ports_dashboard if ingress_container_dashboard else ingress_container_ports }}"
|
||||
volumes:
|
||||
- "{{ ingress_container_docker_socket_location }}:/var/run/docker.sock"
|
||||
- "/data/traefik/letsencrypt:/letsencrypt"
|
||||
tags: [ docker, ingress, traefik ]
|
@ -53,12 +53,9 @@ server {
|
||||
proxy_buffers 4 256k;
|
||||
proxy_busy_buffers_size 256k;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "Upgrade";
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_pass {{ server.proxy_pass }};
|
||||
proxy_request_buffering off;
|
||||
}
|
||||
{% elif server.proxies is defined %}
|
||||
# Proxy locations
|
||||
@ -68,12 +65,9 @@ server {
|
||||
proxy_buffers 4 256k;
|
||||
proxy_busy_buffers_size 256k;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "Upgrade";
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_pass {{ proxy.pass }};
|
||||
proxy_request_buffering off;
|
||||
}
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
|
@ -1,18 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
---
|
||||
kodi_flatpak_name: "tv.kodi.Kodi"
|
||||
|
||||
kodi_autologin_user: "kodi"
|
||||
kodi_autologin_user_groups:
|
||||
- audio # Gotta be able to play audio
|
||||
- tty # Required to start Cage
|
||||
- video # Not sure if required, but could be useful for hw accel
|
||||
kodi_autologin_service: "kodi.service"
|
||||
|
||||
kodi_apt_packages:
|
||||
- alsa-utils # For testing audio
|
||||
- cage # A kiosk wayland compositor
|
||||
- pipewire # Audio routing
|
||||
- pipewire-pulse
|
||||
- wireplumber
|
||||
- xwayland # Required for Kodi since it's not Wayland-native
|
@ -1,8 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
- name: restart kodi
|
||||
ansible.builtin.systemd:
|
||||
name: "{{ kodi_autologin_service }}"
|
||||
state: restarted
|
||||
daemon_reload: yes
|
@ -1,5 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
dependencies:
|
||||
- role: flatpak
|
@ -1,43 +0,0 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
---
|
||||
- name: install kodi flatpak
|
||||
community.general.flatpak:
|
||||
name: "{{ kodi_flatpak_name }}"
|
||||
state: present
|
||||
- name: configure kodi autologon
|
||||
block:
|
||||
# Set up packages
|
||||
- name: ensure ubuntu packages
|
||||
when: ansible_pkg_mgr == "apt"
|
||||
ansible.builtin.apt:
|
||||
state: present
|
||||
pkg: "{{ kodi_apt_packages }}"
|
||||
notify: restart kodi
|
||||
# Now do the whole user configuration thing
|
||||
- name: ensure kodi dedicated user
|
||||
ansible.builtin.user:
|
||||
name: "{{ kodi_autologin_user }}"
|
||||
password_lock: yes
|
||||
shell: /bin/bash
|
||||
groups: "{{ kodi_autologin_user_groups }}"
|
||||
append: yes
|
||||
notify: restart kodi
|
||||
- name: get UID for kodi dedicated user
|
||||
ansible.builtin.getent:
|
||||
database: passwd
|
||||
key: "{{ kodi_autologin_user }}"
|
||||
- name: template out systemd unit file
|
||||
ansible.builtin.template:
|
||||
src: kodi.service
|
||||
dest: "/etc/systemd/system/{{ kodi_autologin_service }}"
|
||||
mode: 0644
|
||||
owner: root
|
||||
group: root
|
||||
notify: restart kodi
|
||||
- name: enable systemd unit
|
||||
ansible.builtin.systemd:
|
||||
name: "{{ kodi_autologin_service }}"
|
||||
state: started
|
||||
enabled: yes
|
||||
daemon_reload: yes
|
@ -1,39 +0,0 @@
|
||||
[Unit]
|
||||
Description=Kodi multimedia platform
|
||||
After=systemd-user-sessions.service plymouth-quit-wait.service
|
||||
Before=graphical.target
|
||||
ConditionPathExists=/dev/tty0
|
||||
|
||||
Wants=dbus.socket systemd-logind.service
|
||||
After=dbus.socket systemd-logind.service
|
||||
|
||||
Conflicts=getty@tty1.service
|
||||
After=getty@tty1.service
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
User=kodi
|
||||
|
||||
#ExecStart=/usr/bin/dbus-run-session -- /usr/bin/cage -d -- flatpak run tv.kodi.Kodi
|
||||
ExecStart=/usr/bin/cage -d -- flatpak run tv.kodi.Kodi
|
||||
#ExecStartPost=chvt tty1
|
||||
|
||||
Environment="XDG_RUNTIME_DIR=/run/user/{{ ansible_facts.getent_passwd[kodi_autologin_user][1] }}"
|
||||
Environment="XDG_SESSION_TYPE=wayland"
|
||||
Environment="WLR_BACKENDS=drm"
|
||||
|
||||
UtmpIdentifier=tty1
|
||||
UtmpMode=user
|
||||
TTYPath=/dev/tty1
|
||||
TTYReset=yes
|
||||
TTYVHangup=yes
|
||||
TTYVTDisallocate=yes
|
||||
StandardInput=tty-fail
|
||||
StandardOutput=journal
|
||||
PAMName=cage
|
||||
|
||||
Restart=always
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
Alias=display-manager.service
|
27
roles/nagios/defaults/main.yml
Normal file
@ -0,0 +1,27 @@
|
||||
#!/usr/bin/env ansible-playbook
|
||||
# vim:ft=ansible:
|
||||
nagios_data_dir: /data/nagios
|
||||
nagios_admin_pass: foobar
|
||||
nagios_timezone: "America/Chicago"
|
||||
# nagios_contacts:
|
||||
# - name: Bob
|
||||
# host_notification_commands: notify-host-by-email
|
||||
# service_notification_commands: notify-service-by-email
|
||||
# extra:
|
||||
# - key: email
|
||||
# value: bob@mysite.example.com
|
||||
nagios_contacts: []
|
||||
# nagios_commands:
|
||||
# - name: check_thing
|
||||
# command: "$USER1$/check_thing -H $HOSTADDRESS% $ARG1$
|
||||
nagios_commands: []
|
||||
# nagios_services:
|
||||
# - name: HTTP
|
||||
# command: check_http
|
||||
# hostgroup: tag-nagios-checkhttp
|
||||
# - name: SSH
|
||||
# command: check_ssh
|
||||
# - name: Docker
|
||||
# command: foo
|
||||
# hostgroup: "!tag-no-docker"
|
||||
nagios_services: []
|
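A minimal sketch of how these lists tie together, reusing the hostgroup from the comments above; the command name and its arguments here are hypothetical, not something this repo defines:

nagios_commands:
  - name: check_http_custom
    command: "$USER1$/check_http -H $HOSTADDRESS$ $ARG1$"
nagios_services:
  - name: HTTP
    command: check_http_custom
    hostgroup: tag-nagios-checkhttp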
4
roles/nagios/handlers/main.yml
Normal file
@ -0,0 +1,4 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
- name: restart nagios
docker_container: name=nagios state=started restart=yes
41
roles/nagios/tasks/main.yml
Normal file
@ -0,0 +1,41 @@
|
||||
# vim:ft=ansible:
|
||||
- name: assure data directory for nagios
|
||||
ansible.builtin.file: path="{{ nagios_data_dir }}" state=directory mode=0755
|
||||
tags: [ nagios ]
|
||||
- name: docker deploy nagios
|
||||
docker_container:
|
||||
name: nagios
|
||||
#image: jasonrivers/nagios
|
||||
image: manios/nagios:latest
|
||||
pull: yes
|
||||
restart_policy: unless-stopped
|
||||
state: started
|
||||
env:
|
||||
NAGIOSADMIN_USER: admin
|
||||
NAGIOSADMIN_PASS: "{{ nagios_admin_pass }}"
|
||||
NAGIOS_TIMEZONE: "{{ nagios_timezone }}"
|
||||
NAGIOS_FQDN: nagios.desu.ltd
|
||||
networks:
|
||||
- name: web
|
||||
aliases: [ "nagios" ]
|
||||
volumes:
|
||||
- "{{ nagios_data_dir }}/etc:/opt/nagios/etc"
|
||||
- "{{ nagios_data_dir }}/var:/opt/nagios/var"
|
||||
- "{{ nagios_data_dir }}/plugins:/opt/Custom-Nagios-Plugins"
|
||||
- "{{ nagios_data_dir }}/nagiosgraph/var:/opt/nagiosgraph/var"
|
||||
- "{{ nagios_data_dir }}/nagiosgraph/etc:/opt/nagiosgraph/etc"
|
||||
- /dev/null:/opt/nagios/bin/send_nsca
|
||||
tags: [ docker, nagios ]
|
||||
- name: template out scripts for nagios
|
||||
ansible.builtin.template: src="{{ item }}" dest="{{ nagios_data_dir }}/plugins/{{ item }}" owner=root group=root mode=0755
|
||||
with_items:
|
||||
- notify-by-matrix
|
||||
tags: [ nagios, template, plugins ]
|
||||
- name: template out config for nagios
|
||||
ansible.builtin.template: src=nagios-ansible-inventory.cfg.j2 dest="{{ nagios_data_dir }}/etc/objects/ansible.cfg" owner=100 group=101 mode=0644
|
||||
tags: [ nagios, template ]
|
||||
notify: restart nagios
|
||||
- name: assure config file is loaded
|
||||
ansible.builtin.lineinfile: path="{{ nagios_data_dir }}/etc/nagios.cfg" line='cfg_file=/opt/nagios/etc/objects/ansible.cfg'
|
||||
tags: [ nagios, template ]
|
||||
notify: restart nagios
|
153
roles/nagios/templates/nagios-ansible-inventory.cfg.j2
Normal file
@ -0,0 +1,153 @@
# {{ ansible_managed }}

# Templates
define host {
    name                            ansible-linux-server
    check_period                    24x7
    check_interval                  10
    retry_interval                  3
    max_check_attempts              10
    check_command                   check-host-alive
    notification_period             24x7
    notification_interval           120
    hostgroups                      ansible
    check_period                    24x7
    contacts                        salt
    register                        0
}
define service {
    use                             generic-service
    name                            ansible-generic-service
    max_check_attempts              10
    check_interval                  10
    retry_interval                  2
    register                        0
}

# Default hostgroup
define hostgroup {
    hostgroup_name                  ansible
    alias                           Ansible-managed Hosts
}

# Additional timeperiods for convenience
define timeperiod {
    timeperiod_name                 ansible-not-late-at-night
    alias                           Not Late at Night
    sunday                          07:00-22:00
    monday                          07:00-22:00
    tuesday                         07:00-22:00
    wednesday                       07:00-22:00
    thursday                        07:00-22:00
    friday                          07:00-22:00
    saturday                        07:00-22:00
}

{% if nagios_contacts is defined %}
# Contacts
# Everything here is defined in nagios_contacts
{% for contact in nagios_contacts %}
define contact {
    contact_name                    {{ contact.name }}
    alias                           {{ contact.alias | default(contact.name, true ) }}
    host_notifications_enabled      {{ contact.host_notifications_enabled | default('1', true) }}
    host_notification_period        {{ contact.host_notification_period | default('24x7', true) }}
    host_notification_options       {{ contact.host_notification_options | default('d,u,r,f', true ) }}
    host_notification_commands      {{ contact.host_notification_commands }}
    service_notifications_enabled   {{ contact.service_notifications_enabled | default('1', true) }}
    service_notification_period     {{ contact.service_notification_period | default('24x7', true) }}
    service_notification_options    {{ contact.service_notification_options | default('w,c,r,f', true ) }}
    service_notification_commands   {{ contact.service_notification_commands }}
{% if contact.extra is defined %}
{% for kvp in contact.extra %}
    {{ kvp.key }} {{ kvp.value }}
{% endfor %}
{% endif %}
}
{% endfor %}
{% endif %}

# And a contactgroup
define contactgroup {
    contactgroup_name               ansible
    alias                           Ansible notification contacts
    members                         nagiosadmin
}

{% if nagios_commands is defined %}
# Commands
# Everything here is defined in nagios_commands
{% for command in nagios_commands %}
define command {
    command_name                    {{ command.name }}
    command_line                    {{ command.command }}
{% if command.extra is defined %}
{% for kvp in command.extra %}
    {{ kvp.key }} {{ kvp.value }}
{% endfor %}
{% endif %}
}
{% endfor %}
{% endif %}

{% if nagios_services is defined %}
# Services
# Everything here is defined in nagios_services
{% for service in nagios_services %}
define service {
    use                             ansible-generic-service
    service_description             {{ service.name }}
    check_command                   {{ service.command }}
    hostgroup_name                  {{ service.hostgroup | default('ansible', true) }}
    contact_groups                  ansible
{% if service.extra is defined %}
{% for kvp in service.extra %}
    {{ kvp.key }} {{ kvp.value }}
{% endfor %}
{% endif %}
}
{% endfor %}
{% endif %}

# Hostgroups
{% for role in query('netbox.netbox.nb_lookup', 'device-roles', api_endpoint='https://netbox.desu.ltd', token=netbox_token) %}
# Device Role: {{ role.value.name }}
# Description: {{ role.value.description }}
# Created: {{ role.value.created }}
# Updated: {{ role.value.last_updated }}
define hostgroup {
    hostgroup_name                  role-{{ role.value.slug }}
    alias                           {{ role.value.display }}
}
{% endfor %}
{% for tag in query('netbox.netbox.nb_lookup', 'tags', api_endpoint='https://netbox.desu.ltd', token=netbox_token) %}
# Tag: {{ tag.value.name }}
# Description: {{ tag.value.description }}
define hostgroup {
    hostgroup_name                  tag-{{ tag.value.slug }}
    alias                           {{ tag.value.display }}
}
{% endfor %}
{% for type in query('netbox.netbox.nb_lookup', 'device-types', api_endpoint='https://netbox.desu.ltd', token=netbox_token) %}
# Type: {{ type.value.display }}
define hostgroup {
    hostgroup_name                  device-type-{{ type.value.slug }}
    alias                           {{ type.value.display }}
}
{% endfor %}

# Inventory Hosts and related services
{% for host in groups['tags_nagios'] %}
{% set vars = hostvars[host] %}
{% if vars.tags is defined %}
define host {
    use                             ansible-linux-server
    host_name                       {{ host }}
    alias                           {{ host }}
    address                         {{ vars.ansible_host }}
    hostgroups                      ansible{% for tag in vars.tags %},tag-{{ tag }}{% endfor %}{% for role in vars.device_roles %},role-{{ role }}{% endfor %}{% if vars.device_types is defined %}{% for type in vars.device_types %},device-type-{{ type }}{% endfor %}{% endif %}

    contact_groups                  ansible
}
{% endif %}
{% endfor %}
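To make the template's output concrete, here is a hedged sketch of the kind of host block it renders for one inventory host; the hostname, address, and tag/role slugs are invented for illustration:

```
# Hypothetical rendered output for a single host
define host {
    use                             ansible-linux-server
    host_name                       web1.example.desu.ltd
    alias                           web1.example.desu.ltd
    address                         203.0.113.10
    hostgroups                      ansible,tag-nagios,role-virtual-machine
    contact_groups                  ansible
}
```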
187
roles/nagios/templates/nagios-ansible.cfg.j2
Normal file
@ -0,0 +1,187 @@
# {{ ansible_managed }}

# Templates
define host {
    name                            ansible-linux-server
    check_period                    24x7
    check_interval                  10
    retry_interval                  3
    max_check_attempts              10
    check_command                   check-host-alive
    notification_period             24x7
    notification_interval           120
    hostgroups                      ansible
    check_period                    24x7
    contacts                        salt
    register                        0
}
define service {
    use                             generic-service
    name                            ansible-generic-service
    max_check_attempts              10
    check_interval                  10
    retry_interval                  2
    register                        0
}

# Default hostgroup
define hostgroup {
    hostgroup_name                  ansible
    alias                           Ansible-managed Hosts
}

# Additional timeperiods for convenience
define timeperiod {
    timeperiod_name                 ansible-not-late-at-night
    alias                           Not Late at Night
    sunday                          07:00-22:00
    monday                          07:00-22:00
    tuesday                         07:00-22:00
    wednesday                       07:00-22:00
    thursday                        07:00-22:00
    friday                          07:00-22:00
    saturday                        07:00-22:00
}

{% if nagios_contacts is defined %}
# Contacts
# Everything here is defined in nagios_contacts
{% for contact in nagios_contacts %}
define contact {
    contact_name                    {{ contact.name }}
    alias                           {{ contact.alias | default(contact.name, true ) }}
    host_notifications_enabled      {{ contact.host_notifications_enabled | default('1', true) }}
    host_notification_period        {{ contact.host_notification_period | default('24x7', true) }}
    host_notification_options       {{ contact.host_notification_options | default('d,u,r,f', true ) }}
    host_notification_commands      {{ contact.host_notification_commands }}
    service_notifications_enabled   {{ contact.service_notifications_enabled | default('1', true) }}
    service_notification_period     {{ contact.service_notification_period | default('24x7', true) }}
    service_notification_options    {{ contact.service_notification_options | default('w,c,r,f', true ) }}
    service_notification_commands   {{ contact.service_notification_commands }}
{% if contact.extra is defined %}
{% for kvp in contact.extra %}
    {{ kvp.key }} {{ kvp.value }}
{% endfor %}
{% endif %}
}
{% endfor %}
{% endif %}

# And a contactgroup
define contactgroup {
    contactgroup_name               ansible
    alias                           Ansible notification contacts
    members                         nagiosadmin
}

{% if nagios_commands is defined %}
# Commands
# Everything here is defined in nagios_commands
{% for command in nagios_commands %}
define command {
    command_name                    {{ command.name }}
    command_line                    {{ command.command }}
{% if command.extra is defined %}
{% for kvp in command.extra %}
    {{ kvp.key }} {{ kvp.value }}
{% endfor %}
{% endif %}
}
{% endfor %}
{% endif %}

{% if nagios_services is defined %}
# Services
# Everything here is defined in nagios_services
{% for service in nagios_services %}
define service {
    use                             ansible-generic-service
    service_description             {{ service.name }}
    check_command                   {{ service.command }}
    hostgroup_name                  {{ service.hostgroup | default('ansible', true) }}
    contact_groups                  ansible
{% if service.extra is defined %}
{% for kvp in service.extra %}
    {{ kvp.key }} {{ kvp.value }}
{% endfor %}
{% endif %}
}
{% endfor %}
{% endif %}

# Hostgroups
{% for role in query('netbox.netbox.nb_lookup', 'device-roles', api_endpoint='https://netbox.desu.ltd', token=netbox_token) %}
# Device Role: {{ role.value.name }}
# Description: {{ role.value.description }}
# Created: {{ role.value.created }}
# Updated: {{ role.value.last_updated }}
define hostgroup {
    hostgroup_name                  role-{{ role.value.slug }}
    alias                           {{ role.value.display }}
}
{% endfor %}
{% for tag in query('netbox.netbox.nb_lookup', 'tags', api_endpoint='https://netbox.desu.ltd', token=netbox_token) %}
# Tag: {{ tag.value.name }}
# Description: {{ tag.value.description }}
define hostgroup {
    hostgroup_name                  tag-{{ tag.value.slug }}
    alias                           {{ tag.value.display }}
}
{% endfor %}
{% for type in query('netbox.netbox.nb_lookup', 'device-types', api_endpoint='https://netbox.desu.ltd', token=netbox_token) %}
# Type: {{ type.value.display }}
define hostgroup {
    hostgroup_name                  device-type-{{ type.value.slug }}
    alias                           {{ type.value.display }}
}
{% endfor %}

# Hosts
{% for host in query('netbox.netbox.nb_lookup', 'devices', api_filter='status=active', api_endpoint='https://netbox.desu.ltd', token=netbox_token) + query('netbox.netbox.nb_lookup', 'virtual-machines', api_filter='status=active', api_endpoint='https://netbox.desu.ltd', token=netbox_token)%}
{% if host.value.primary_ip %}
{% for tag in host.value.tags %}
{% if tag.slug == "nagios" %}
# {{ host }}
define host {
    use                             ansible-linux-server
    host_name                       {{ host.value.name }}
    alias                           {{ host.value.display }}
    address                         {{ host.value.primary_ip.address.split('/',1)[0] }}
    hostgroups                      ansible{% for tag in host.value.tags %},tag-{{ tag.slug }}{% endfor %}{% if host.value.device_role is defined -%},role-{{ host.value.device_role.slug }}{% endif %}{% if host.value.role is defined %},role-{{ host.value.role.slug }}{% endif %}{% if host.value.device_type is defined %},device-type-{{ host.value.device_type.slug }}{% endif %}

    contact_groups                  ansible
}
{% if host.value.config_context.extra_checks is defined %}
{% for check in host.value.config_context.extra_checks %}
define service {
    # Config Context check
    use                             ansible-generic-service
    service_description             {{ check.description }}
    check_command                   {{ check.command }}
    host_name                       {{ host.value.name }}
    contact_groups                  ansible
}
{% endfor %}
{% endif %}
{# #}
{% endif %}
{% endfor %}
{% endif %}
{% endfor %}

# Services unique to hosts
{% for service in query('netbox.netbox.nb_lookup', 'services', api_endpoint='https://netbox.desu.ltd', token=netbox_token) %}
{% if service.value.device %}
{% set host_name = service.value.device.name %}
{% elif service.value.virtual_machine %}
{% set host_name = service.value.virtual_machine.name %}
{% endif %}
{% if host_name is defined %}
# {{ host_name }} - {{ service.value.display }}
# Description: {{ service.value.description }}
# Created: {{ service.value.created }}
# Updated: {{ service.value.last_updated }}
{% for tag in service.value.tags %}
{% endfor %}
{% endif %}
{% endfor %}
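The per-host `define service` blocks above are driven by an `extra_checks` list in the host's NetBox config context. As a rough illustration only (the description and command arguments are invented, not taken from this commit), that context might look like:

```json
{
  "extra_checks": [
    { "description": "HTTPS", "command": "check_http!-S" }
  ]
}
```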
81
roles/nagios/templates/notify-by-matrix
Normal file
@ -0,0 +1,81 @@
#! /bin/sh
#
# notify-by-matrix
# Copyright (C) 2021 Vintage Salt <rehashedsalt@cock.li>
#
# Distributed under terms of the MIT license.
#

set -e

# Set our Matrix-related vars here
MX_TOKEN="{{ nagios_matrix_token }}"
MX_SERVER="{{ nagios_matrix_server }}"
MX_ROOM="{{ nagios_matrix_room }}"

# Get a TXN to prefix this particular message with
MX_TXN="$(date "+%s")$(( RANDOM % 9999 ))"

# Read the first line from STDIN
# This is supposed to be the NOTIFICATIONTYPE
read notiftype
prefix=""
# https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/macrolist.html#notificationtype
case "$notiftype" in
    PROBLEM)
        # Large Red Circle (U+1F534)
        prefix="🔴"
        ;;
    RECOVERY)
        # Large Green Circle (U+1F7E2)
        prefix="🟢"
        ;;
    ACKNOWLEDGEMENT)
        # Symbol For Acknowledge (U+2406)
        prefix="␆"
        ;;
    FLAPPINGSTART)
        # Large Orange Circle (U+1F7E0)
        prefix="🟠"
        ;;
    FLAPPINGSTOP)
        # Large Green Circle (U+1F7E2)
        prefix="🟢"
        ;;
    FLAPPINGDISABLED)
        # Bell with Cancellation Stroke (U+1F515)
        prefix="🔕"
        ;;
    DOWNTIMESTART)
        # Bell with Cancellation Stroke (U+1F515)
        prefix="🔕"
        ;;
    DOWNTIMEEND)
        # Bell (U+1F514)
        prefix="🔔"
        ;;
    DOWNTIMECANCELLED)
        # Bell (U+1F514)
        prefix="🔔"
        ;;
    *)
        prefix="$notiftype - "
        ;;
esac

# Read a message from STDIN
# NOTE: This is dangerous and stupid and unsanitized
read message
while read line; do
    message="${message}\n${line}"
done

# Push it to the channel
curl -X PUT \
    --header 'Content-Type: application/json' \
    --header 'Accept: application/json' \
    -d "{
        \"msgtype\": \"m.text\",
        \"body\": \"$prefix $message\"
    }" \
    "$MX_SERVER/_matrix/client/unstable/rooms/$MX_ROOM/send/m.room.message/$MX_TXN?access_token=$MX_TOKEN"
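The script expects the notification type on the first line of stdin followed by the message body. A Nagios command definition wiring it up could therefore look roughly like the sketch below; the command name and the choice of macros are illustrative, not taken from this commit, and the plugin path assumes the container's `/opt/Custom-Nagios-Plugins` mount shown above.

```
# Hypothetical notification command using this script
define command {
    command_name    notify-host-by-matrix
    command_line    /usr/bin/printf "%s\n%s\n" "$NOTIFICATIONTYPE$" "$HOSTNAME$ is $HOSTSTATE$: $HOSTOUTPUT$" | /opt/Custom-Nagios-Plugins/notify-by-matrix
}
```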
@ -24,7 +24,6 @@
  community.docker.docker_container:
    name: prometheus
    image: prom/prometheus:latest
    restart_policy: unless-stopped
    user: 5476:5476
    env:
      TZ: "America/Chicago"
@ -56,7 +55,6 @@
  community.docker.docker_container:
    name: prometheus-blackbox
    image: quay.io/prometheus/blackbox-exporter:latest
    restart_policy: unless-stopped
    user: 5476:5476
    command:
      - '--config.file=/config/blackbox.yml'
@ -2,7 +2,6 @@
---
global:
  scrape_interval: 15s
  scrape_timeout: 15s
  evaluation_interval: 15s

scrape_configs:
@ -45,12 +44,6 @@ scrape_configs:
{% endfor %}
{% endif %}
{# #}
{% if tag.slug == "nagios-checkhttp" %}
{% for port in service.ports %}
        - "http://{{ service.name }}:{{ port }}"
{% endfor %}
{% endif %}
{# #}
{% if tag.slug == "nagios-checkmatrix" %}
{% for port in service.ports %}
        - "https://{{ service.name }}:{{ port }}/health"
@ -90,46 +83,6 @@ scrape_configs:
{% endfor %}
{% endfor %}

  # This job takes in information from Netbox on the generic "prom-metrics" tag
  # It's useful for all sorts of stuff
  - job_name: "generic"
    scheme: "https"
    static_configs:
      - targets:
{% for host in groups['tags_nagios'] %}
{% set vars = hostvars[host] %}
{% for service in vars.services %}
{% for tag in service.tags %}
{# #}
{% if tag.slug == "prom-metrics" %}
{% for port in service.ports %}
        - "{{ service.name }}:{{ port }}"
{% endfor %}
{% endif %}
{# #}
{% endfor %}
{% endfor %}
{% endfor %}

  # This one does the same thing but for HTTP-only clients
  - job_name: "generic-http"
    scheme: "http"
    static_configs:
      - targets:
{% for host in groups['tags_nagios'] %}
{% set vars = hostvars[host] %}
{% for service in vars.services %}
{% for tag in service.tags %}
{# #}
{% if tag.slug == "prom-metrics-http" %}
{% for port in service.ports %}
        - "{{ service.name }}:{{ port }}"
{% endfor %}
{% endif %}
{# #}
{% endfor %}
{% endfor %}
{% endfor %}
  # These two jobs are included for every node in our inventory
  - job_name: "node-exporter"
    static_configs:
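Rendered, each of those Jinja loops collapses into a plain target list. A hedged sketch of the output for a single host exposing one metrics service (the service name and port are invented for illustration):

```yaml
# hypothetical rendered fragment of the "generic" job
scrape_configs:
  - job_name: "generic"
    scheme: "https"
    static_configs:
      - targets:
        - "node-exporter.example.desu.ltd:9100"
```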
@ -1,6 +1,7 @@
# vim:ft=ansible:

zerotier_repo_deb_key: "https://raw.githubusercontent.com/zerotier/ZeroTierOne/master/doc/contact%40zerotier.com.gpg"
zerotier_repo_deb: "deb http://download.zerotier.com/debian/bionic bionic main"
#zerotier_networks_join:
#  - 38d1594bb4e73da3
zerotier_networks_join: []
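A minimal sketch of overriding that default in vars, reusing the network ID from the commented example (the file path is hypothetical):

```yaml
# host_vars/some-host.yml -- hypothetical
zerotier_networks_join:
  - 38d1594bb4e73da3
```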
@ -1,13 +1,9 @@
#!/usr/bin/env ansible-playbook
# vim:ft=ansible:
---
- name: check for zerotier in /usr/bin
- name: check for zerotier
  stat: path=/usr/bin/zerotier-cli
  register: zerotier_cli_path
- name: check for zerotier in /usr/sbin
  stat: path=/usr/sbin/zerotier-cli
  register: zerotier_cli_path
  when: not zerotier_cli_path.stat.exists
- name: install zerotier if we're joining networks
  block:
    - name: configure zerotier for apt
@ -15,7 +11,7 @@
    - name: ensure zerotier repo key
      ansible.builtin.apt_key: url="{{ zerotier_repo_deb_key }}"
    - name: ensure zerotier repo
      ansible.builtin.apt_repository: repo="deb http://download.zerotier.com/debian/{{ ansible_distribution_release }} {{ ansible_distribution_release }} main"
      ansible.builtin.apt_repository: repo="{{ zerotier_repo_deb }}"
    - name: update apt cache
      ansible.builtin.apt: update_cache=yes cache_valid_time=86400
    - name: ensure packages