Recently, well, it was roughly a year ago, I posted about bootstrapping your box with Ansible:
Those posts were about the automated setup of Linux machines to get rid of the traditional checklists. This is already quite nice but unleashes just a small part of the power that Ansible is able to provide. This post is about the next step: using Ansible as a configuration instance with some kind of role concept to provision servers. In short, this means that all servers with the role web server, for example, get everything installed that is required to act as a web server. As I wrote before, Ansible doesn't require agents on the target hosts, so the only thing that is required is a controller node that has access to the target nodes (for details, look at the first part of this blog series). In this example, the controller node is my MacBook and the target nodes are some virtual machines with minimal installations.
Note 1: This example should work well in private and semi-professional use cases where you don't have to rely on specific SLAs and where you are still able to do the administration manually if required. If you plan to use this kind of automation in an enterprise scenario, you should consider evaluating Ansible Tower. The Ansible core engine stays more or less the same, but Ansible Tower enriches it with enterprise functionality like web-based dashboards, RBAC, SSO, audit trails, cloud connectors, support and so on and so forth.

Note 2: Automation eases your life, makes things predictable, harmonizes things, and lets you administer big server farms. But, set up wrongly, it can mess up your entire IT landscape (imagine setting a wrong default route pointing to nowhere on all servers). Therefore it is essential to test all automation before you deploy it to production.
Ok, back to the example. On the controller node, we first need to create a directory that holds our Ansible configuration. You can choose whatever location you want; in this example I use ~/Development/ansible as the root for my configuration. The first thing that is required is the inventory file. Until now this file was based on the INI-file syntax you often see in the Windows world, but since Ansible 2.something it is also possible to use the YAML format. Although I'm not the biggest YAML enthusiast, this makes sense from my point of view, as the rest of the Ansible configuration uses this format as well. The inventory file contains the target nodes, optional parameters for those target nodes and, if required, some custom grouping based on, for example, location, role or the like. This is the inventory I used in this example:
```yaml
---
# Production inventory
all:
  hosts:
    bb-8.fritz.box:
      ansible_ssh_host: 10.1.1.1
      ansible_user: root
    r2-d2.fritz.box:
      ansible_ssh_host: 10.1.1.2
      ansible_user: root
    c-3po.fritz.box:
      ansible_ssh_host: 10.1.2.1
      ansible_user: root
    r4-p17.fritz.box:
      ansible_ssh_host: 10.1.2.2
      ansible_user: root
  children:
    webservers:
      hosts:
        bb-8.fritz.box:
        c-3po.fritz.box:
    dbservers:
      hosts:
        r2-d2.fritz.box:
        r4-p17.fritz.box:
    hamburg:
      hosts:
        bb-8.fritz.box:
        r2-d2.fritz.box:
    berlin:
      hosts:
        c-3po.fritz.box:
        r4-p17.fritz.box:
    prod:
      children:
        hamburg:
    test:
      children:
        berlin:
```
You can use more than one inventory to, for example, separate the production from the test environment. The next step is to create the required top-level directories. In this example, we will use four directories: group_vars, host_vars, playbooks, and roles. group_vars and host_vars contain the variables that belong either to a group or to a host. An example would be a network configuration that belongs to a specific host, like this:
host_vars ├── bb-8.fritz.box │ └── network.yml ├── c-3po.fritz.box │ └── network.yml ├── r2-d2.fritz.box │ └── network.yml └── r4-p17.fritz.box └── network.yml
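One of these per-host files could look like the following sketch. The variable names are purely illustrative (Ansible doesn't mandate them); they would be consumed by your own templates or tasks:

```yaml
# host_vars/bb-8.fritz.box/network.yml
# Illustrative host-specific network variables; the names are freely
# chosen and only have meaning in your own roles and templates.
network_interface: eth0
network_ip: 10.1.1.1
network_netmask: 255.255.255.0
network_gateway: 10.1.1.254
network_dns_servers:
  - 10.1.1.53
```

Inside a task or template these are then available as ordinary variables, e.g. {{ network_ip }}.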
The group_vars directory contains a special file called all.yml with variables that apply to the whole landscape, for example, the user configuration for the administrators or packages that should be installed on all nodes.
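A minimal group_vars/all.yml might look like this. Again, the variable names (admin_users, common_packages) are my own illustrative choices, not built-in Ansible keywords:

```yaml
# group_vars/all.yml
# Landscape-wide variables, applied to every host in the inventory.
admin_users:
  - name: luke
    groups: wheel
  - name: leia
    groups: wheel
common_packages:
  - vim
  - htop
  - chrony
```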
With the configuration files in place, we can now explore the two remaining directories. The first one is playbooks. I have already introduced this directory in my previous posts about Ansible (see the beginning of this post). Long story short, here you find the one-off checklists as well as other one-off tasks, like rebooting a specific set of nodes or all nodes.
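As a sketch of such a one-off task, a reboot playbook could look like this (the file name playbooks/reboot.yml is my own choice; the reboot module is available in Ansible 2.7 and later):

```yaml
# playbooks/reboot.yml -- minimal sketch of a one-off reboot task.
---
- name: Reboot nodes
  hosts: all
  become: true
  tasks:
    - name: Reboot and wait until the node is reachable again
      reboot:
        reboot_timeout: 300
```

Limiting it to a node set works on the command line, e.g. ansible-playbook -i inventory.yml playbooks/reboot.yml --limit webservers.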
The second directory is the roles directory. Here we group tasks, handlers, variables and so on and so forth around roles that can be assigned to single hosts or groups of hosts. Taking the common role as an example, I will explain this in the next part of this post.
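A role is essentially a directory tree with a conventional layout in which Ansible looks for tasks/main.yml, handlers/main.yml and so on. As a small preview, a skeleton for the common role can be created like this (assuming your current directory is the configuration root, e.g. ~/Development/ansible):

```shell
# Create a minimal skeleton for the "common" role.
# Ansible picks up tasks/main.yml, handlers/main.yml and
# defaults/main.yml in these directories by convention.
mkdir -p roles/common/tasks roles/common/handlers roles/common/defaults
```

A typical first task in roles/common/tasks/main.yml would then, for example, install the packages listed in a landscape-wide variable.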