DevOps Automation Cookbook - Sample Chapter
Starting off with the fundamental command-line tools, you will learn about the Ansible tool. You will explore how to build hosts automatically and interactively using pre-seeding, and delve into manipulating guests with ESXi. Following this, you will venture into the application of Docker, learn how to build containers in Jenkins, and deploy apps using a combination of Ansible, Docker, and Jenkins. You will also discover how to graph data with Grafana and InfluxDB, along with approaches to log management. Finally, you will employ the Heroku and Amazon AWS platforms.
This book takes a collection of some of the coolest software available today and shows you how to use it to
create impressive changes to the way you deliver applications and software.
DevOps Automation Cookbook
Over 120 recipes covering key automation techniques through code management and virtualization offered by modern Infrastructure-as-a-Service solutions
Michael Duffy
Preface
DevOps has created a lot of excitement in recent years and looks certain to make the same impact on the software industry as Agile software development. This is not entirely surprising; DevOps has largely been born from the frustration of Agile developers trying to work within the traditional confines of infrastructure support and delivery. Their attempts to find more efficient ways to deliver reliable, performant, and secure software to the end user have led us to DevOps.
DevOps initially came to people's attention in 2008, when the first DevOps day conference was held. Organized by Patrick Debois, it brought together like-minded people for the first time to discuss how the delivery of infrastructure could be made more agile. Originally, the preferred term for what eventually became DevOps was Agile Infrastructure, but the portmanteau of Development and Operations made for a friendlier Twitter tag and the term stuck. From there, the attention and interest in DevOps grew, and today there are DevOps day conferences worldwide.
DevOps breaks down the barriers between the operations and development teams and allows tight collaboration between these traditionally firewalled areas. The resulting cross-functional team is able to react faster to changes in software requirements and deliver best-of-breed solutions. This has led to a renaissance in areas such as monitoring and deployment, where the development team may once have lobbed a tarball over the corporate firewall for the operations department to install; instead, developers now create a robust set of automated provisioning scripts to manage installations themselves. Likewise, monitoring has ceased to be merely an exercise in testing whether a port is available or whether the server has run out of disk space (although this is still essential) and has become a holistic approach that takes into account the health of the infrastructure, the load on the application, the number of errors generated, and so on. This is only possible if you have a team that is truly cross-functional, with a deep understanding of the software it manages.
Defining what can be considered a DevOps tool is incredibly difficult, but the rapid increase in companies utilizing DevOps techniques has led to an explosion of new tools with a particular focus on automation, monitoring, and testing. Tools such as Puppet, Chef, CFEngine, and Ansible have grown massively in popularity, allowing developers to truly define the underlying infrastructure using code. Likewise, new monitoring tools, such as Sensu, have appeared that take up the challenge of monitoring ephemeral infrastructures, such as cloud-based services.
This book is different from most other technical cookbooks. Rather than keeping a laser-like focus on a single technology, this cookbook serves as an introduction to many different tools. Each chapter offers recipes that show you how to install and utilize tools that tackle some of the key areas that a team using DevOps techniques will encounter. Using it, you can quickly get up to speed with diverse areas such as automation with Ansible, monitoring with Sensu, and log analysis with Logstash. The further reading outlined with each recipe gives you pointers to gain a deeper insight into these fantastic tools.
Chapter 9, Log Management, demonstrates how to use powerful tools to centralize, collect,
and analyze valuable log data.
Chapter 10, Monitoring with Sensu, demonstrates how to install, configure, and manage Sensu, a powerful, scalable, and customizable monitoring system.
Chapter 11, IAAS with Amazon AWS, covers recipes that demonstrate how to set up infrastructure using the powerful AWS Infrastructure-as-a-Service platform. It also covers topics such as EC2 servers, DNS management, and security.
Chapter 12, Application Performance Monitoring with New Relic, introduces the New Relic application performance monitoring tool and demonstrates how to use it to monitor servers, applications, and more.
Sections
In this book, you will find several headings that appear frequently (Getting ready, How to do it,
How it works, There's more, and See also).
To give clear instructions on how to complete a recipe, we use these sections as follows:
Getting ready
This section tells you what to expect in the recipe and describes how to set up any software or
any preliminary settings required for the recipe.
Ad Hoc Tasks
with Ansible
In this chapter, we are going to cover recipes for installing an Ansible control node on Ubuntu and CentOS, creating an Ansible inventory, and running ad hoc tasks to install packages, manage services, execute freeform commands, and manage users and SSH keys.
Introduction
There is a growing number of automation tools available to DevOps Engineers, each with
its individual strengths and weaknesses. Puppet, Chef, SaltStack, Ansible; the list seems to
grow on a daily basis, as do the capabilities that they offer. Configuration management has
become one of the core techniques that help define DevOps engineering, and is one of the
key benefits of adding DevOps techniques to a team.
Configuration management is not a new concept, and there have been various tools to
support automatic configuration management, with the granddaddy of them all being
CFEngine. First developed by Mark Burgess in 1993 to solve the problems of managing his
own infrastructure, CFEngine has since grown to be a fully featured commercial product used
by a great number of companies.
Getting ready
For this recipe, you need an instance/install of Ubuntu 14.04.
How to do it
There is a Personal Package Archive (PPA) available for installing Ansible on Ubuntu; you can use the following steps to install the latest stable release (1.9.4 at the time of writing):
1. First, you need to install the PPA repository on the Ansible node:
$ sudo apt-add-repository ppa:ansible/ansible
You may be prompted to confirm adding the repository; simply press Enter if you are.
2. Now that you have the PPA repository installed, you need to update the apt repositories with the following command:
$ sudo apt-get update
3. Next, install the Ansible package itself with the following command:
$ sudo apt-get install ansible
4. You can test if Ansible is installed correctly using the version switch, as shown in the following example:
$ ansible --version
This should return the version of Ansible that you have installed.
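On a fresh install from the PPA the output will look something like the following (the exact version and module path reported depend on the release the PPA currently ships):
$ ansible --version
ansible 1.9.4
  configured module search path = None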
See also
You can find out more about how to set up the Ansible control node using the Ansible
documentation at http://docs.ansible.com/intro_installation.html.
Getting ready
For this recipe, you need an instance of CentOS 7.
How to do it
Let's install an Ansible control node on CentOS:
1. We need to install the Extra Packages for Enterprise Linux (EPEL) repository before we install Ansible. You can install it with the following command:
$ sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
2. Next, install the Ansible package itself, which is provided by the EPEL repository:
$ sudo yum -y install ansible
3. You can test if Ansible is installed correctly using the version switch, as shown in the following example:
$ ansible --version
This should return the version of Ansible that you have installed.
See also
You can find out more about how to set up the Ansible control node using the Ansible
documentation at http://docs.ansible.com/intro_installation.html.
Getting ready
For this recipe, you need to have Ansible installed on the machine you intend to use as a control node, plus one or more target nodes to run your actions against. The examples use six different target hosts, but this is not mandatory; simply adjust the inventory to match your requirements.
How to do it
The inventory file is formatted as an INI file and is essentially a simple text file that stores your catalog of hosts. Let's assume that we have a small infrastructure that resembles the following:
Function    Name
haproxy     haproxy01
httpd       web01 through to web04
mysql       mysql01
Let's create our first Ansible inventory. Using your favorite editor, edit the file called hosts located in /etc/ansible:
1. Let's start by creating a basic inventory. Insert the following code:
haproxy01
web01
web02
web03
web04
mysql01
Ensure that the names you enter into your inventory can be resolved, either using DNS or a hosts file entry.
2. Next, let's organize these hosts into groups based on their function. Group names are written in square brackets, with the member hosts listed underneath them. Edit the inventory so that it looks like the following:
[loadbalancer]
haproxy01
[web]
web01
web02
web03
web04
[database]
mysql01
3. We now have an inventory file that can be used to control our hosts using Ansible;
however, we have lost the ability to send commands to all hosts at once due to
grouping. For that, we can add a final group that is, in fact, a group of groups. This will
take our groups and form a new group that includes all of the groups in one place,
allowing us to easily manipulate all our hosts at once, whilst still retaining the ability
to distinguish between individual groups of nodes. To accomplish this, open your
Ansible inventory and add the following to the bottom of the file:
[all:children]
loadbalancer
web
database
4. The children keyword signifies that the entries that belong to this group are, in
fact, groups themselves. You can use the children keyword to make sub-collections
and not just collect all groups. For instance, if you have two different data centers,
you can use groups called [dca:children] and [dcb:children] to list the
appropriate servers under each.
5. We now have everything that we need to address our servers, but there is one last
trick left to make it more compact and readable. Ansible inventory files understand
the concept of ranges, and since our servers have a predictable pattern, we can use
this to remove some of the entries and apply a little Don't Repeat Yourself (DRY) to the file. Again, open /etc/ansible/hosts and change the code to reflect the following:
[loadbalancer]
haproxy01
[web]
web[01:04]
[database]
mysql01
[all:children]
loadbalancer
web
database
As you can see, we have replaced the four manual entries with a range; very useful
when you have to manage a large infrastructure.
Although it's recommended, you don't need to install the inventory into /etc/ansible - you can have it anywhere and then use the -i option on the Ansible command to point to its actual location. This makes it easier to package the inventories along with Playbooks.
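If you want to check which hosts a pattern will match before sending it any commands, the ansible command offers a --list-hosts switch that simply prints the matching entries from the inventory. As a quick sketch, assuming the inventory built in this recipe:
$ ansible web --list-hosts
$ ansible all -i ./hosts --list-hosts
The first command lists the four web servers from the default inventory; the second does the same for every host, but reads the inventory from a local file using the -i option described in the preceding tip.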
See also
You can find out more about the Ansible inventory at the Ansible documentation site; the following link in particular contains some interesting details: http://docs.ansible.com/intro_inventory.html.
Getting ready
All you need to use in this recipe is a configured Ansible control node and an Ansible inventory
describing your target nodes.
How to do it
Let's use the raw module to install python-simplejson:
1. Use the following command to install the python-simplejson package:
ansible all --sudo --ask-sudo-pass -m raw -a 'sudo apt-get -y install python-simplejson'
In the preceding command, we have used several options. The first two, --sudo and
--ask-sudo-pass, tell Ansible that we are employing a user that needs to invoke
sudo to issue some of the commands and using --ask-sudo-pass prompts us for
the password to pass onto sudo. The -m switch tells Ansible which module we wish
to use; in this case, the raw module. Finally, the -a switch is the argument we wish to
send to the module; in this case, the command to install the python-simplejson
package.
You can find further information about the switches that
Ansible supports using the command ansible --help.
2. Alternatively, if you manage a CentOS server, you can use the raw module to install the python-simplejson package on these servers using the following command:
ansible all --sudo --ask-sudo-pass -m raw -a 'sudo yum -y install python-simplejson'
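Once python-simplejson is installed, the ordinary Ansible modules should work against these nodes. A quick way to confirm this is the ping module, which simply checks that Ansible can log in and execute a module on each host; every reachable node should answer with pong:
ansible all -m ping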
See also
You can find the details of the raw module at http://docs.ansible.com/raw_module.html.
Getting ready
For this recipe, you will need to have a configured Ansible inventory. If you haven't already
configured one, use the Creating an Ansible inventory recipe in this chapter as a guide. You will also need either a CentOS or an Ubuntu server as a target.
How to do it...
Let's install packages with Ansible:
1. To install a package on an Ubuntu server, we can make use of the apt module. When you specify a module as part of an ad hoc command, you will have access to all the features within that particular module. The following example installs the Apache web server (the apache2 package) on the [web] group within your Ansible inventory:
ansible web -m apt -a "name=apache2 state=present"
You can find more details of Ansible modules using the ansible-doc command. For instance, ansible-doc apt will give you the full details of the apt module.
2. Alternatively, you might want to use this technique to install a certain version of a
package. The next example commands every node to install a certain version of
Bash:
$ ansible all -m apt -a "name=bash=4.3 state=present"
3. You can even use the apt module to ask the target nodes to update all installed
software using the following command:
$ ansible all -m apt -a "upgrade=dist"
4. You can use the yum module to install software on RHEL-based machines using the
following command:
$ ansible all -m yum -a "name=httpd state=present"
5. Just like the example for Ubuntu servers, you can use Ansible to update all the
packages on your RHEL-based servers:
$ ansible all -m yum -a "name=* state=latest"
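The apt and yum modules can remove software in the same way: setting state=absent asks the package manager to uninstall the named package. The package names below are only examples; substitute whatever you need to remove:
$ ansible web -m apt -a "name=apache2 state=absent"
$ ansible all -m yum -a "name=httpd state=absent"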
See also
You can find more details of the Ansible apt module, including the available options, at http://docs.ansible.com/apt_module.html
You can find more details of the yum module at http://docs.ansible.com/ansible/yum_module.html
Getting ready
You'll need an inventory file before you try this, so if you haven't got one already, go ahead and set one up. The following examples are based on the inventory set out in the Creating an Ansible inventory recipe, so you'll need to change the examples to match your environment.
How to do it
To restart a service, we can use the Ansible service module. This supports various activities such as starting, stopping, and restarting services. For example, the following command restarts MySQL on the members of the database group:
ansible database -m service -a "name=mysql state=restarted"
Alternatively, you can also use the service module to start a service:
ansible database -m service -a "name=mysql state=started"
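The other states that the service module understands work in exactly the same way; for instance, the following commands (using the same example service) stop MySQL and ensure that it is started at boot:
ansible database -m service -a "name=mysql state=stopped"
ansible database -m service -a "name=mysql enabled=yes"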
See also
You can find more details about the service module from the Ansible documentation at
http://docs.ansible.com/service_module.html.
Getting ready
You'll need an inventory file before you try this, so if you don't have one already, go ahead and set one up. You can use the Creating an Ansible inventory recipe of this chapter as a guide.
How to do it
The command is simple and takes the following form:
ansible <ansible group> -a "<shell command>"
For example, you can issue the following command to reboot all the members of the database group:
ansible database -a "reboot"
It's important to keep an eye on parallelism when you have many hosts. By
default, Ansible will send the command to five servers. By adding a -f flag to
any command in this chapter, you can increase or decrease this number.
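For example, to run a quick uptime check across the web group, contacting ten hosts at a time instead of the default five, you could use something like the following:
ansible web -f 10 -a "uptime"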
Getting ready
All you need to use for this recipe is a configured Ansible control node and an Ansible
inventory describing your target nodes.
How to do it
Let's use the Ansible user module to manage some users:
1. You can use the following command to add a user named gduffy to a group called users on every node within your Ansible inventory:
$ ansible all -m user -a 'name=gduffy comment="Griff Duffy" group=users password="amadeuppassword"'
3. We can also easily amend users. Issue the following command from your control
node to change the user Beth to use the Korn shell and to change her home directory
to /mnt/externalhome on all nodes:
ansible all -m user -a "name=beth shell=/bin/ksh home=/mnt/externalhome"
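The user module can also remove accounts. Setting state=absent deletes the user, and adding remove=yes removes the home directory as well; the username below is just an example:
ansible all -m user -a "name=gduffy state=absent remove=yes"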
See also
The preceding examples make use of the Ansible User module. You can find the
documentation for this module at http://docs.ansible.com/user_module.html.
Getting ready
All you need to use for this recipe is a configured Ansible control node and an Ansible
inventory describing your target nodes. You should also have an SSH key pair, both the public and private keys, that you wish to manage.
How to do it...
Let's use Ansible to manage SSH keys:
1. The first thing we might want to do is create a user and simultaneously create a key
for them. This is especially useful if you use a network jump box, as it means that you
have no dependency on the user supplying a key; it's an integral part of the process.
Run the following command to create a user called Meg with an associated key:
ansible all -m user -a "name=meg generate_ssh_key=yes"
2. Often, a user either has an existing key they wish to use, or needs to change an
installed key. The following command will allow you to attach a key to a specified
account. This assumes that your key is located in a directory called keys within the
working directory from which you run the following command:
ansible all -m copy -a "src=keys/id_rsa dest=/home/beth/.ssh/id_rsa mode=0600"
3. Once a user has their private key set up, you need to push their public key out to each and every server that they wish to access. The following command adds my public key to all the web servers defined within the Ansible inventory. This uses the lookup Ansible function to read the local key and send it to the remote nodes:
ansible web -m authorized_key -a "user=michael key='{{ lookup(\"file\", \"/home/michael/.ssh/id_rsa.pub\") }}'"
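The authorized_key module can revoke access just as easily; passing state=absent removes the matching key from the user's authorized_keys file. A sketch using the same lookup as above:
ansible web -m authorized_key -a "user=michael state=absent key='{{ lookup(\"file\", \"/home/michael/.ssh/id_rsa.pub\") }}'"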
See also
The preceding examples use a mix of Ansible modules (user, copy, and authorized_key) to achieve the end result. You can find the documentation for these modules on the Ansible documentation site at http://docs.ansible.com/.