A series of Ansible scripts to manage my infrastructure.

Salt's Ansible Repository

Useful for management across all of 9iron, thefuck, and desu.

TODO

  • Figure out a good monitoring solution that doesn't suck ass

  • Port over configs for Nextcloud on web1.9iron.club

Initialization

Clone the repo, cd in. Done.
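Since the repo tracks submodules via .gitmodules, a recursive clone grabs everything in one step. A minimal sketch (the clone URL and directory name are placeholders, not the real ones):

```shell
# Clone the repo and pull its submodules in one step
# (URL is a placeholder)
git clone --recurse-submodules https://example.com/salt/infrastructure.git
cd infrastructure
```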

Deployment

Adding a new server requires that the following be fulfilled:

  • The server is accessible from the Ansible host;

  • The server has a user named ansible which:

    • Accepts the public key located in contrib/desu.pub; and

    • Has passwordless sudo capabilities as root

  • The server is added to inventory/hosts.yml in an appropriate place;

  • DNS records for the machine are set; and

  • The server is running Ubuntu 20.04 or greater
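As an illustration of the inventory step above, a new host entry might look like the following. This is a hypothetical sketch: the group and host names are examples, not the repo's actual layout.

```yaml
# inventory/hosts.yml -- hypothetical entry; group and host
# names are examples only
all:
  children:
    home:
      hosts:
        pi-storage-3:
          ansible_user: ansible
```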

From there, running the playbook site.yml should get the machine up to snuff. To automate the host-local steps, use the script file contrib/bootstrap.sh.
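Put together, onboarding a host might look like this. The hostname is an example, and the exact invocation of contrib/bootstrap.sh is assumed here, not documented by the repo:

```shell
# On the new host: set up the ansible user, key, and sudo access
# (contrib/bootstrap.sh automates the host-local steps)
./contrib/bootstrap.sh

# From the Ansible host: bring the new machine up to snuff.
# --limit restricts the play to just that host (name is an example).
ansible-playbook site.yml --limit new-host.example.com
```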

Zerotier

A lot of my home-network side of things is connected together via ZeroTier; initial deployment/repairs may require specifying an ansible_host for the inventory item in question to connect to it locally. Subsequent plays will require connectivity to my home ZeroTier network.
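One way to specify that ansible_host override for an initial deployment is directly on the inventory item. A sketch, assuming a hypothetical host and LAN address:

```yaml
# inventory/hosts.yml -- temporary override for initial deployment,
# before the host has joined the ZeroTier network.
# Host name and address are examples; remove the override once
# ZeroTier connectivity is up.
pi-storage-3:
  ansible_host: 192.168.1.50
```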

Cloud-managed devices require no such workarounds.

Ad-Hoc Commands

The inventory is configured to allow for ad-hoc commands with very little fuss. For example:

ansible -m shell -a 'systemctl is-failed ansible-pull.service' all

These commands must be run from the root of the repo.
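A couple more ad-hoc examples in the same style (host patterns and the fact filter are illustrative):

```shell
# Check connectivity to every host in the inventory
ansible -m ping all

# Gather a subset of facts from a single host
ansible -m setup -a 'filter=ansible_distribution*' web1
```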

Ansible Galaxy

Several of the roles in this repository are sourced from Ansible Galaxy. They're mirrored here both for easy compatibility with ansible-pull and as insurance in case the upstream sources go down. They're still tracked in requirements.yml, which documents their sources and makes updating straightforward. Any forks or deviations from upstream should be thoroughly documented.

Should you need to reinitialize them, the following command (run from the root of the repo) will initialize all Galaxy assets:

ansible-galaxy install -r requirements.yml
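Note that ansible-galaxy skips roles that already exist locally. To overwrite the mirrored copies (for example after bumping a version in requirements.yml), add the --force flag:

```shell
# Re-install all Galaxy assets, overwriting existing local copies
ansible-galaxy install -r requirements.yml --force
```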