OK, before we dig deeper into the logical units defined in the previous post, we need to talk about infrastructure as code. Infrastructure as code is a concept that introduces workflows known from software development into system administration. Instead of logging into a server and modifying configuration files, or clicking around to bring services up, a description language is used to describe what configuration changes need to be made to bring up the service. An agent on the server then uses this description to make the changes.
Using this concept will change the way administrators work. Many administrators complain that it makes their work more complicated, that they aren't programmers, and that it is much faster to simply log in and make the change. This is true: doing a configuration once, directly on the system, is faster than writing code for an agent to do it. But under normal circumstances you won't do a configuration only once; you will do it many times. So instead of repeating the work on different servers, you can recycle what you've already done and roll it out on as many servers as you like or need.
So what are the benefits of using infrastructure as code?

- Reusability: a configuration is written once and can be rolled out to any number of servers.
- Consistency: every server gets exactly the same configuration, instead of hand-crafted variations.
- Versioning: configuration code can live in a version control system, just like software.
- Documentation: the code itself documents how a service is set up.
There are even more benefits, but this list should already show the potential that infrastructure as code provides.
So, what does infrastructure as code look like in the wild? I'll show an example based on Puppet http://puppetlabs.com/, the tool I'll use as the configuration tool in this series of blog articles. Let's assume we would like to create a file with specific content:
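A minimal sketch of such a Puppet class might look like this (the class name is illustrative; the file path, content, owner, and permissions follow the description below):

```puppet
# Illustrative class: manages a single file with fixed content and ownership
class hello_world {
  file { '/opt/hello_world.txt':
    ensure  => file,
    content => "Hello world!\n",
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
  }
}
```

A class like this could be tried out locally with `puppet apply` before assigning it to nodes via a Puppet master.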
The example creates a file called "/opt/hello_world.txt" with the content "Hello world!". It ensures that the file has the defined permissions and is owned by root. Puppet enforces this on all servers associated with that class, so whether I want this file on a single host or on three thousand hosts, the effort is the same. This example is quite trivial, but much more complex configurations are possible with Puppet. For those who cannot wait to see what is possible and want an introduction to Puppet right now, feel free to check out http://www.puppetcookbook.com/.
So why is this important for this blog series? Quite simply: all components in the open source data center will be deployed and bootstrapped using Puppet modules freely available on the net, in this case on the Puppet Forge http://forge.puppetlabs.com/.
In the next post in this blog series I will start setting up the controller node, so stay tuned.