In the first part of this blog post, I showed how to automate the installation process of the core operating system. When this process is done, we should have a minimal operating system installation with a bit of basic configuration (mainly access related). As mentioned there, this is the point where Ansible performs the final configuration. I won't provide a manual on how to use Ansible in this post (you can find the complete documentation at https://docs.ansible.com/ansible/latest/index.html); instead, I will show some basic concepts of how the tool is used for this purpose. To get an idea of how things are organized in Ansible, let's first take a look at the folder structure of the proof of concept:
├── group_vars
│   ├── all.yml
│   ├── datacenter1.yml
│   └── location1.yml
├── host_vars
│   └── server1.domain.local
│       └── network.yml
├── playbooks
│   ├── add_ssh_fingerprints.yml
│   └── reboot_hosts.yml
├── roles
│   ├── common
│   │   ├── tasks
│   │   │   ├── environment.yml
│   │   │   ├── localtime.yml
│   │   │   ├── main.yml
│   │   │   ├── motd.yml
│   │   │   ├── networking.yml
│   │   │   ├── repositories.yml
│   │   │   ├── tools.yml
│   │   │   ├── upgrade.yml
│   │   │   └── user.yml
│   │   └── templates
│   │       ├── custom_sh.j2
│   │       ├── ifcfg-interface.j2
│   │       ├── motd.j2
│   │       ├── network.j2
│   │       └── route-interface.j2
│   └── web
│       ├── handlers
│       │   └── main.yml
│       ├── tasks
│       │   └── main.yml
│       └── templates
│           ├── index_html.j2
│           └── nginx_conf.j2
├── common.yml
├── production
├── site.yml
├── staging
└── web.yml
Ok, let's go through this tree step by step. The first two directories, group_vars and host_vars, contain the variables used in the configuration scripts. The idea is to define the scope of the variables: they can apply to all hosts, to a data center, to a region/location, or only to one specific host. Variables that apply to all managed hosts might be, for example, local administrative user IDs. Variables that apply to a region/location could define which time server the hosts in that region should use. Data-center-related variables might cover routing, and host-specific variables typically cover IP addresses, hostnames, and the like. This layering enables the administrator to set up global infrastructures without much effort.
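As an illustration, a location-scoped variable file such as group_vars/location1.yml could look like the following sketch. The variable names (ntp_server, timezone) are placeholders of my own choosing, not taken from the actual proof of concept:

```yaml
# group_vars/location1.yml
# Variables that apply to every host in this location.
# Variable names are illustrative, not from the actual repository.
ntp_server: ntp1.local.domain
timezone: Europe/Berlin
```

Any task or template in a role can then reference these values (e.g. {{ ntp_server }}), and Ansible resolves them per host according to its group membership.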
Before I move on to the playbooks, I would like to highlight two files in the root directory: staging and production. Both files contain the servers and the groups (region, location, role) the servers belong to. The distinction between staging and production reflects the role of the respective environment (here you can see the staging file):
[hamburg-hosts]
node_1.local.domain ansible_ssh_host=172.20.0.4 ansible_user=root
node_2.local.domain ansible_ssh_host=172.20.0.5 ansible_user=root
node_3.local.domain ansible_ssh_host=172.20.0.6 ansible_user=root

[hamburg-web]
node_4.local.domain ansible_ssh_host=172.20.0.7 ansible_user=root
node_5.local.domain ansible_ssh_host=172.20.0.8 ansible_user=root
node_6.local.domain ansible_ssh_host=172.20.0.9 ansible_user=root
node_7.local.domain ansible_ssh_host=172.20.0.10 ansible_user=root

[berlin-web]
node_8.local.domain ansible_ssh_host=172.20.0.11 ansible_user=root
node_9.local.domain ansible_ssh_host=172.20.0.12 ansible_user=root
node_10.local.domain ansible_ssh_host=172.20.0.13 ansible_user=root

[web:children]
hamburg-web
berlin-web

[hamburg:children]
hamburg-hosts
hamburg-web

[berlin:children]
berlin-web
The hosts in the staging file can be addressed by the group names in the brackets. The exception is the entries containing a colon: those represent a collection of groups, for example all groups located in Hamburg, as shown with the entry [hamburg:children]. These hosts can be addressed with the keyword hamburg. The group names correspond to the variable files in group_vars, which closes the circle. Now that we know how things are controlled, we can follow up with the answer to the question of how things are done.
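To see how these group names come into play, a playbook can target any of them directly. The following is a minimal sketch of my own (not a file from the proof of concept) that addresses every host under the [hamburg:children] collection; it would be run against the staging inventory with something like ansible-playbook -i staging playbook.yml:

```yaml
# Illustrative play: targets all hosts in the "hamburg" children group,
# i.e. the members of hamburg-hosts and hamburg-web combined.
- hosts: hamburg
  tasks:
    - name: Check that each Hamburg host is reachable
      ping:
```

Because site.yml, common.yml, and web.yml address groups the same way, the inventory file alone decides whether a run hits staging or production hosts.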