Cacti Howto
Prerequisites
This chapter will guide you through some of the prerequisites for successfully setting up your Cacti site.
Setting Up SNMP
This HowTo will explain how to install and configure the Net-SNMP agent. At the time of writing, the latest version available is 5.4
(published on 12/06/2006).
Linux
Virtually every Linux distribution comes with Net-SNMP packages:
RedHat / Fedora: install the net-snmp, net-snmp-libs and net-snmp-utils packages
Debian / Ubuntu: install the libsnmp-base, libsnmp5, snmp and snmpd packages
SuSE: install the net-snmp package
Gentoo: simply emerge the net-snmp ebuild
Mandriva: install the libnet-snmp5, net-snmp and net-snmp-utils packages.
AIX
Packages are available in the University of California repository:
release 5.0.6 for AIX 4.1
release 5.0.6 for AIX 4.2
Solaris
For older Solaris releases, packages are available in the Sunfreeware repository:
For these packages to work, the OpenSSL and GCC libraries also need to be installed.
These tarballs have to be extracted from /, as they contain absolute paths.
Files are copied to /usr/local/share/snmp, /usr/local/libs, /usr/local/include/net-snmp, /usr/local/man, /usr/local/bin and
/usr/local/sbin
HP-UX
Tarballs are available from the Net-SNMP main site :
release 5.4 for HP-UX 11.11 PA-RISC
release 5.4 for HP-UX 11.00 PA-RISC
release 5.4 for HP-UX 10.20 PA-RISC
These tarballs have to be extracted from /, as they contain absolute paths. Beware that the binaries in these tarballs are not
stripped, which wastes a lot of space.
Files are copied to /usr/local/share/snmp, /usr/local/libs, /usr/local/include/net-snmp, /usr/local/man, /usr/local/bin and
/usr/local/sbin
FreeBSD
Net-SNMP is available through the FreeBSD ports collection.
Here's how to get the configure options of an already running Net-SNMP agent:
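One hedged way to do this is to query the agent's own versionConfigureOptions object (this sketch assumes the Net-SNMP command-line tools are installed and the agent answers to the community string public):

```shell
snmpget -v 1 -c public localhost UCD-SNMP-MIB::versionConfigureOptions.0
```

The returned string can then be reused to rebuild an identical agent, as in the build transcript below.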
Code:
$ ./configure --with-your-options
$ make
# mkdir /usr/local/dist
# make install prefix=/usr/local/dist/usr/local exec_prefix=/usr/local/dist/usr/local
# cd /usr/local/dist
# tar cvf /tmp/net-snmp-5.3.1-dist.tar usr
# gzip /tmp/net-snmp-5.3.1-dist.tar
# rm -rf /usr/local/dist
You can then copy the /tmp/net-snmp-5.3.1-dist.tar.gz file to other servers, and uncompress it from the root directory (everything
will get extracted to /usr/local).
Please note that you need to restart the snmpd daemon (or send it the HUP signal) whenever you modify snmpd.conf.
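For example (a sketch; the pid file path varies by distribution):

```shell
kill -HUP `cat /var/run/snmpd.pid`
```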
Code:
rocommunity public
This will enable SNMP version 1/2 read-only requests from any host, with the community name public.
With this minimal configuration, you'll be able to graph CPU usage, load average, network interfaces, memory / swap usage, logged
in users and number of processes.
You can restrict which hosts are allowed to send SNMP queries:
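The rocommunity directive takes an optional source restriction; a minimal sketch (the subnet below is a placeholder):

```
rocommunity public 10.20.30.0/24
```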
You can also change the address and port the agent listens on with the agentaddress directive:
Code:
agentaddress 10.20.30.40:10000
Code:
agentaddress tcp:161
For those who want some more security, you can use the SNMP version 3 protocol, with MD5 or SHA hashing :
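A minimal SNMPv3 setup might look like this (username and password are placeholders; createUser usually goes into the persistent Net-SNMP configuration, e.g. /var/net-snmp/snmpd.conf):

```
createUser cactiuser MD5 "myv3password"
rouser cactiuser
```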
In Cacti, add your device, choose SNMP version 3, and fill in the username and password fields:
Now that you're done with access control, add these two lines to snmpd.conf to indicate the location and contact name of your device:
Code:
syslocation Bat. C2
syscontact [email protected]
SNMP tools may append unit suffixes to the values they print, which can confuse parsing; this can be disabled with:
Code:
dontPrintUnits true
The next step is to graph filesystems in Cacti; the easiest way is to add this line to snmpd.conf:
Code:
includeAllDisks
When you run the "ucd/net - Get Monitored Partitions" Data Query, all the mounted filesystems will show up:
If you want a filesystem not to be listed here, add this line to snmpd.conf :
Code:
ignoredisk /dev/rdsk/c0t2d0
Unfortunately, some older versions of Net-SNMP do not fully work with the includeAllDisks keyword.
You'll then have to explicitly list all the filesystems you want to graph:
Code:
disk /
disk /usr
disk /var
disk /oracle
Please note that the Net-SNMP agent can only report filesystems which were mounted before it started.
If you mount filesystems manually later, you'll have to reload the Net-SNMP agent (send it the HUP signal).
You can also count running processes by name; for example, to monitor Apache, add:
Code:
proc httpd
In our example, the number of Apache processes will be available under the .1.3.6.1.4.1.2021.2.1.5 OID
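A failed walk typically looks like this (a sketch; host and community are placeholders):

```shell
$ snmpwalk -v 1 -c public 10.0.0.1 system
Timeout: No Response from 10.0.0.1
```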
A timeout error indicates that either the agent is not started, the community string is incorrect, or the device is unreachable.
Check your community string, add firewall rules if necessary, etc.
If using SNMP version 3, specifying an unknown user will result in this error message :
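With the Net-SNMP tools the message reads roughly like this (a sketch; the exact wording varies by version, and the username is a placeholder):

```shell
$ snmpwalk -v 3 -u nosuchuser -l authNoPriv -a MD5 -A secret localhost system
snmpwalk: Unknown user name
```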
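To check which filesystems the agent reports, you can walk the UCD dskTable; a hedged sketch (localhost and public are placeholders; .1.3.6.1.4.1.2021.9.1.2 is the dskPath column):

```shell
snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.9.1.2
```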
If the answer is empty, usually it means the includeAllDisks is not supported by your Net-SNMP agent (you'll have to list each
filesystem you want to graph as explained in previous chapter).
$ /tmp/foo.sh -arg1
123
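The transcript above assumes a trivial script; a hypothetical /tmp/foo.sh could be as simple as:

```shell
#!/bin/sh
# hypothetical example script: ignores its arguments and
# prints a single numeric value for the snmpd "exec" directive
echo 123
```

It is hooked into the agent with an exec line in snmpd.conf, e.g. exec foo /tmp/foo.sh -arg1 (the name foo is arbitrary).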
The result of your script will be accessible under the ucdavis.extTable.extEntry tree :
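A hedged way to see it (placeholders as before; .1.3.6.1.4.1.2021.8 is ucdavis.extTable):

```shell
snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.8
```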
Now let's run this second script, which returns more than one result :
Code:
$ /tmp/bar.sh
456
789
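A hypothetical /tmp/bar.sh matching this transcript:

```shell
#!/bin/sh
# hypothetical example script: prints two values, one per line
echo 456
echo 789
```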
Another way to call scripts from snmpd.conf is by specifying an OID, like this :
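The directive would look something like this (the OID branch .1.3.6.1.4.1.2021.555 matches the text below; the name bar and the script path are examples):

```
exec .1.3.6.1.4.1.2021.555 bar /tmp/bar.sh
```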
First line returned by the script will be available at .1.3.6.1.4.1.2021.555.2, and so on.
You can then use the "SNMP - Generic OID Template" in Cacti (one Data Source per OID).
Let's say you want to count the number of entries in a log file.
Add this to snmpd.conf :
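The logmatch directive takes a name, a file, a check interval in seconds, and a regular expression; a sketch (all values are placeholders):

```
logmatch apache-errors /var/log/apache/error_log 60 error
```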
the global count of matches will be available under the .1.3.6.1.4.1.2021.16.2.1.5.1 OID
the "Regex match counter" (which is reset with each file rotation) will be available under the .1.3.6.1.4.1.2021.16.2.1.7.1 OID
You can also proxy requests to a subagent, such as Squid's built-in SNMP agent.
The Squid SNMP tree will be available under the .1.3.6.1.4.1.3495.1 branch.
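On the snmpd side this is done with a proxy directive; a hedged sketch (3401 is Squid's customary SNMP port, and the community string public is a placeholder):

```
proxy -v 1 -c public localhost:3401 .1.3.6.1.4.1.3495.1
```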
And here's the Squid part (this specific OID returns the Squid version) :
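A hedged example query (assumes Squid's SNMP port is enabled; .1.3.6.1.4.1.3495.1.2.3 is cacheVersionId):

```shell
snmpget -v 1 -c public localhost:3401 .1.3.6.1.4.1.3495.1.2.3
```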
Requirements
One Linux machine - (it can be any machine, but try to keep at least 256 MB of RAM. I have tested Cacti on a 1.4 GHz
processor with 256 MB of RAM, with CentOS as the operating system and around 350 monitored devices, and it has run without a
hitch for more than a month. Currently Cacti resides on a 1.8 GHz machine with 512 MB of RAM, which is also running a Squid
proxy server plus an intranet web/FTP server.)
RRDtool - This is the de facto graphing package used by the vast majority of NMS tools on the net; details can be found at
http://oss.oetiker.ch/rrdtool/
Xampp - The reason I am going in for XAMPP is that it makes a lot of things very easy to maintain (the Apache webserver,
MySQL database, the PHP programming language and all needed dependencies). Of course we can do it without XAMPP, but you can
search for those docs on the net.
Ubuntu installation:
1: The first step is to install Linux on our machine. For this example we will download the Ubuntu ISO from the following
site: http://cdimage.ubuntu.com/releases/gutsy/tribe-5/gutsy-desktop-i386.iso - this is the current latest version of Ubuntu.
Note: there is also a server version of Ubuntu available, but we will not be going in for that due to its lack of a GUI.
2: Once downloaded, burn it onto a CD and then boot the machine you have decided to make your server from that CD.
3: Once the machine boots up, you will notice that it is running in live CD mode, i.e. the hard disk is not being used. You will find an
Install icon on the top left of your screen; double click on it and go ahead with the install. The only problem you might have is during
partitioning; as we are going in for a separate machine, it's best that we go in for auto partitioning. (A detailed Ubuntu install guide is
not really possible right now, but one can be found here: https://help.ubuntu.com/6.10/ubuntu/installation-guide/i386/index.html.)
Also remember the username you type in; I'm taking deadwait just for this example.
4: Once Ubuntu is installed, all further steps will occur from within Ubuntu itself. The next thing to do is to see to it that it is
updated. For that we need access to the internet, hence your network card will have to be configured. I hope you remember the
password you had supplied during the installation.
4.1: Click on the network configuration tool,
and in the Wired Connection tab supply your needed IP address, subnet mask and gateway.
4.3: Open up a terminal window (Applications -> Accessories -> Terminal) and type in the following commands, one after
the other.
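On a stock Ubuntu, the update and build-tool commands would be something like this (a hedged sketch):

```shell
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential
```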
4.4: Once the build-essential install is over, you are set to install XAMPP, RRDtool and Cacti.
5: We will begin with XAMPP - to know in detail what XAMPP is, check out their website:
http://www.apachefriends.org/en/xampp.html. Now we need to download XAMPP for Linux by directly clicking on this link:
http://jaist.dl.sourceforge.net/sourceforge/xampp/xampp-linux-1.6.3b.tar.gz. This is the current latest version of XAMPP. Remember
to download it to /opt. (The reason I'm going in for /opt is that the same is mentioned on the website, but remember you can have
it downloaded and installed anywhere.)
5.1: Now, assuming you have downloaded the file to /opt, you need to do the following next. (I'm going to be guiding you in command line
mode, but it can be done in the GUI; the reason I haven't mentioned the GUI method is that I get confused in GUI mode. As these
docs will be open for editing later, anyone who wishes to add the GUI method can do so.)
As usual, open a terminal
and type in
cd /opt
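The unpack command for the tarball named above would be something like (a hedged sketch):

```shell
sudo tar xvfz xampp-linux-1.6.3b.tar.gz
```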
What this does is unpack the file into its own directory. If you type in the command dir you will now see that a new directory
(lampp) has been created. Whether you want to delete the xampp-linux-1.6.3b.tar.gz file or not is up to you; if you want to delete it, the
command is
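A sketch of the delete command:

```shell
sudo rm xampp-linux-1.6.3b.tar.gz
```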
What I usually do is move these files into my home folder. Assuming my login name is tulip, my home folder is /home/tulip; the
command to move the file is as follows
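With tulip as the example login, a sketch:

```shell
sudo mv xampp-linux-1.6.3b.tar.gz /home/tulip/
```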
Now for the cool part: in the same terminal type in
cd lampp
then type in
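The XAMPP start command (assuming the /opt/lampp layout used here):

```shell
sudo ./lampp start
```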
and your webserver along with MySQL and FTP will have started. To check, open up Firefox and type in http://localhost; you
should get the XAMPP screen.
5.2: Now let's clear up some basic stuff we need to do. You see, Cacti needs a database server, which we have already installed using XAMPP;
what we need to do now is create Cacti's own database. Since you have opened http://localhost in Firefox, XAMPP will ask you for
its language preference; click on English. Then on the left pane you will see a link for phpMyAdmin; click on it. What you see now is
a web based administration tool for MySQL. On the first page itself you will see an option named Create Database; in the field
below, type in cacti, since this is the name we will use for our database (of course you could name it whatever you want). Then go on
to the next step.
6: So then, one part of our work is done. The next thing to do is install RRDtool. You're going to love this: in a terminal type in the
magic command
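A sketch of the install, using aptitude:

```shell
sudo aptitude install rrdtool
```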
and that's it, RRDtool is installed. (Now for a bit of background: we could install the entire Cacti stack along with webserver, PHP and MySQL by doing
sudo aptitude install cacti, but we haven't done that because if you are not comfortable with Linux it could lead to a lot of confusion as
to where the files are installed; also, the package could break if an upgrade takes place.)
At the same time let's install one more tool we need, which is SNMP, with the same kind of command
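A hedged sketch of the two installs (the client tools and the agent):

```shell
sudo aptitude install snmp
sudo aptitude install snmpd
```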
What is important to remember is that rrdtool gets installed as /usr/bin/rrdtool; we will need this path later.
7: First we need to download the Cacti package, which we can do from this link: http://www.cacti.net/downloads/cacti-0.8.6j.tar.gz
Save it to, e.g., your Desktop, then open up a terminal and navigate to your Desktop. The commands are (assuming your user login
is deadwait)
cd /home/deadwait/Desktop
Remember Linux is case-sensitive, so desktop won't work; it has to be Desktop. Once we are in Desktop, type in the following command,
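The extract command would be something like this (hedged; the filename is taken from the download link above):

```shell
tar xvfz cacti-0.8.6j.tar.gz
```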
which will extract everything into a directory named cacti-0.8.6j. For ease, let's rename it to just cacti with the following command
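The rename, as a sketch:

```shell
mv cacti-0.8.6j cacti
```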
Now that the directory is renamed, let's move it into our lampp directory so that we can access it via our webserver. To do so, run the following
command
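A hedged sketch of the move into XAMPP's webroot:

```shell
sudo mv cacti /opt/lampp/htdocs/
```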
Now our cacti directory sits in lampp's webroot directory. Remember we created a database in MySQL named cacti; now
we need to populate this database. Don't worry if you don't understand this part, just follow these steps:
on the left pane select the database which we have created, in our case its cacti.
then on the right pane select import -> then click on browse -> navigate to the directory /opt/lampp/htdocs/cacti in which you have to
select the file cacti.sql and then click on go.
7.3: In a terminal, type in
cd /opt/lampp/htdocs/cacti/include
then type in
sudo nano config.php
nano is an editor; it will open up the file config.php. At the beginning you will see these options
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cactiuser";
$database_password = "cactiuser";
$database_port = "3306";
you need to change the username and password so that it looks like this (root with an empty password is XAMPP's MySQL default)
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "root";
$database_password = "";
$database_port = "3306";
7.4: Now open up Firefox and type in the address bar the following: http://localhost/cacti
You will be greeted with a screen which is the beginning of the installation; just click on Next.
On the next screen you will be asked if it's a new install, which of course it is. Confirm that the database user and the database name
mentioned are correct (go back to step 7.3 and check), then click Next.
7.5: When we click Next, it shows us the base paths of all needed binaries. We will notice that the path for PHP is marked in red, because the default path
/usr/bin/php
does not exist on our system; change it to XAMPP's PHP binary:
/opt/lampp/bin/php
It will open up the Cacti login page and ask you for a username and password. Type in
admin
and the password
admin
It will then force you to change the password; type in the new password that you decide on and log in using the new password.
We need to do a bit more stuff. You see, Cacti works by polling the devices which we set it up for, so let's set the poller to run every 5
minutes. Open up a terminal and type in the following command,
which will open up the crontab file; then, at the end, add the following line.
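A hedged sketch of this step (paths assume the XAMPP layout used above; tulip is the example user):

```shell
sudo nano /etc/crontab
```

and the line to append at the end, which runs Cacti's poller every 5 minutes:

```
*/5 * * * * tulip /opt/lampp/bin/php /opt/lampp/htdocs/cacti/poller.php > /dev/null 2>&1
```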
Then press Ctrl-X to save and exit. All along we have assumed that the username used to log in to your machine is tulip, hence tulip is
added above. Now we need to do one last thing; type in the following command in a terminal
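Since the poller runs as tulip, that user needs write access to Cacti's rra and log directories; a hedged sketch of such a permissions fix:

```shell
sudo chown -R tulip /opt/lampp/htdocs/cacti/rra /opt/lampp/htdocs/cacti/log
```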
Basic Usage
This chapter may help you understand Cacti's basic usage principles. Let me first say a word about the general way Cacti works; the
"theory" is quickly followed by some examples that may help in setting up the first graphs.
Have fun!
Data Retrieval
First task is to retrieve data. Cacti will do so using its Poller. The Poller will be executed from the operating system's scheduler, e.g.
crontab for Unix flavored OSes.
In current IT installations, you're dealing with lots of devices of different kind, e.g. servers, network equipment, appliances and the
like. To retrieve data from remote targets/hosts, cacti will mainly use the Simple Network Management Protocol SNMP. Thus, all
devices capable of using SNMP will be eligible to be monitored by cacti.
Later on, we demonstrate how to extend cacti's capabilities of retrieving data to scripts, script queries and more.
Data Storage
There are lots of different approaches for this task. Some may use an (SQL) database, others flat files. Cacti uses rrdtool to store
data.
RRD is the Acronym for Round Robin Database. RRD is a system to store and display time-series data (i.e. network bandwidth,
machine-room temperature, server load average). It stores the data in a very compact way that will not expand over time, and it can
create beautiful graphs. This keeps storage requirements at bay.
Data Presentation
One of the most appreciated features of rrdtool is the built-in graphing function. This comes in useful when combined with
some commonly used webserver. Thus, it is possible to access the graphs from nearly any browser on any platform.
Graphing can be done in very different ways. It is possible to graph one or many items in one graph. Autoscaling is supported, and
a logarithmic y-axis as well. You may stack items onto one another and print pretty legends denoting characteristics such as minimum,
average, maximum and lots more.
Cacti
Cacti glues all this together. It is mainly written in php, a widely-used general-purpose scripting language that is especially suited for
Web development and can be easily embedded into HTML.
Cacti provides the Poller and uses RRDTool for storage and graphing. All administrative information is stored in a MySQL
database.
Data Source Item (created as part of a Data Template)
An RRD file may hold data for more than one single variable; each one is named "data source" (ds) in RRDTool speech.
Graph (created when applying a Graph Template to a Device)
A real RRDTool graph statement: the whole statement, including all options and graph elements.
You may be put off by all that template stuff. If you'd like a more practical approach, just skip to Why Templates?.
Now let's create the very first graph. I won't stick to the host cacti is running on, because this is a very special one. So I'm assuming
you're running at least one other device. As cacti's roots are network monitoring with SNMP, I will use some SNMP capable device.
In this case, I choose the router of my home network. But you may of course choose any device that is SNMP enabled.
But let's start from the very beginning. Assuming you've just logged in, you'll see a page like this:
Choose either of those marked links to access the Devices page. Add a new Device like:
Description
Give this host a meaningful description.
Hostname
Fill in the fully qualified hostname for this device. Personally, I love to use DNS names instead of IP addresses, but you may
choose either.
Host Template
Choose what type of host, host template this is. The host template will govern what kinds of data should be gathered from this
type of host.
The magic of templates is explained later
SNMP Community
Fill in the SNMP read community for this device. If you don't know, use the string "public" as a start.
Please notice the information already retrieved from this device. Of course, this output pertains to my special device. The text may
vary for your equipment. In case you see:
there is an error with the SNMP Community String that must be fixed prior to graph generation. When scrolling down, you should
see some more information, that was provided by assigning this device to the given Host Template. I'm aiming at SNMP -
Interface Statistics:
Now, back to the top of the page, select Create Graphs for this Host and find the following:
Check the box next to an interface you want to get data for. A good choice is a row, where a Hardware Address (aka: MAC
Address) or the like is shown. From the dropdown, select a graph template of your liking. But remember, that 64 bit graphs are only
supported with SNMP V2 (and some more conditions). Finally, Create to get:
You want to see your work immediately? So, here is the answer: You have to be patient. Assuming you did not forget to configure
your cacti host's scheduler to run the poller every 5 minutes, you'll have to wait at least 10 minutes to see anything. Then, please
move to Graph Management:
and select the newly generated graph. Please notice that I've filtered for the device. This was for demonstration purposes only and to
suppress all the other devices I've already created from the list.
The last steps are not the recommended way to handle this. Later on, I'll show how to use the Graph tab and all the magic within.
Please perform this procedure a second time, choosing Unicast Packets this time:
and Create:
Now, again, have a cup of coffee. It takes two polling cycles before these new graphs get filled. As there are three graphs now, the
question arises how to handle the graphs' display in a more convenient manner. Please follow me to the next chapter to see the Graphs
Tab in action!
If you click the Graphs Tab right after generating some graphs, you won't see anything yet. So let's fill it first. This can be done from
the Devices page, when using cacti 0.8.6h. Select your device by entering a Search pattern. Then, please select the checkbox to the
right. From the Choose an Action dropdown, select Place on a Tree (Default Tree) to see:
Accept this by selecting Yes and you're done. Now let's look at the results by selecting the blue Graphs Tab. You'll have to select
your Device, my own routing device in this case.
Notice the four new tabs to the right; one of them, the Graphs Tree Tab, is displayed all in red. One other thing to pay attention to
is the little magnifying glass next to each graph. We'll explain this in a minute.
You will have noticed, that this view displays all currently defined graphs for this host. In fact, as soon as you add more graphs to
this host, they will automagically show up in this view. In this case, we've added the whole Host to the Graph Tree, but there are
other options as well.
But first, please select the Graph itself by clicking anywhere on it. Now you'll see (by default) four new graphs, each of them
showing a different timespan, from Daily to Yearly. The next image shows the two topmost of them:
Now to the magnifying glass. You've seen it in the previous graph, and now it again appears next to each of the four graphs. Let's
click it to see:
The little red square was drawn by placing the cursor at one corner and dragging it to the diagonally placed corner. Thus you define
the area to be magnified. In this case, only the x-axis takes effect. You'll see:
This is the second to last tab on the right side. Find the Filter by Host, accompanied by an additional text field that allows for freetext
filtering. I've selected the well-known router to find all three recently defined Graphs. From the headings, you may learn how many
Graphs are in the result set after filtering. There may be more than one page.
Now, I've selected the first and the third row. Selecting View yields the following result:
The display now shows both graphs side by side. Notice that the Legends are suppressed. The layout is defined by the user-specific
values found under the Settings Tab. You may play with those values to adjust the layout to your liking.
Please also notice that the Tab changed from List View to Preview Mode automatically. To get more details of a specific view, you
may again click on one of the graphs to see:
Let's have a look at all those filtering capabilities. Most of these will hold for other lists as well. Let's start with the explicit selection
of a host via Filter by Host:
Notice the text field to the right of the Filter by Host. Text entered here will be searched for in all existing Graph Titles:
Be aware that this text ends up in an SQL SELECT clause. If you remember your SQL, the percent (%) sign is
used as a wildcard to make up partly qualified SELECT clauses. So look at the next image
Why Templates?
You've surely seen all that Template stuff and may have asked yourself, "Why Templates?". You may compare them to macros or
subroutines of commonly known programming languages.
Imagine, you would have to define all rrdtool create parameters to define the logical layout of each and every rrd file. And you
would have to define all rrdtool graph parameters to create those nice graphs, for every new graph. Well, this would yield
maximum flexibility. But maximum effort, too.
But in most installations there are lots of devices of the same kind, and lots of data of the same kind; e.g. traffic
information is needed for almost every device. Therefore, the parameters needed to create a traffic rrd file are defined by a Data
Template, in this case known as Interface - Traffic. These definitions are used by all traffic-related rrd files.
The same approach is used for defining Graph Templates. This work is done only once, and all parameters defined within such a
Graph Template are copied to all Graphs that are created using this Template.
The last type of Templates are the Host Templates. They are not related to some rrdtool stuff. The purpose of Host Templates is to
group all Graph Templates and Data Queries (these are explained later) for a given Device type. So you will make up a Host
Template e.g. for a specific type of router, switch, host and the like. By assigning the correct Host Template to each new Device,
you'll never forget to create all needed Graphs.
Well, nice stuff, isn't it? But here comes the bad news. Unlike a Subroutine, Templates are not invoked at runtime:
Graph Templates
Good news! Almost every setting of a Graph Template is propagated to all related Graphs when saving the changes. But you
may encounter problems when checking the Use Per-Graph Value (Ignore this Value) checkbox. When creating new
Graphs, the latest definitions are taken into account.
Data Templates
No change of a Data Template is propagated to already existing rrd files. But most of them may be changed by using rrdtool
tune from the command line. Be careful not to append new Data Source Items to already existing rrd files; there's no rrdtool
command to achieve this!
Host Templates
No change of a Host Template is propagated to already existing Devices, but when creating a new one, the latest definitions are
taken into account. There's an easy (if a bit tedious) way to apply changes to already existing Devices: first change
the Host Template to None, then change it back to the desired one. All new items are now associated with this Device.
Attention! No items are deleted by this procedure.
As cacti does not use the MIBs but pure ASN.1 OIDs, let's search the OID used as udpInDatagrams:
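One hedged way to look it up, assuming the Net-SNMP tools and MIB files are installed (snmptranslate -On prints the numeric form of a symbolic name):

```shell
snmptranslate -On UDP-MIB::udpInDatagrams.0
```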
The needed OID is .1.3.6.1.2.1.7.1.0. Now learn how to enter this into a new Cacti Data Template:
Please proceed to Data Templates and filter for SNMP. Check the SNMP - Generic OID Template
After clicking Go, you're prompted with a new page to enter the name for the new Data Template:
Due to the filter defined above, you won't see the new Template at once, so please enter udp as a new filter to find:
Now select this entry to change some definitions according to the following images:
for the lower one. Please be sure to change the MAXIMUM value to 0 to prevent data suppression for values exceeding 100.
And you saw the OID .1.3.6.1.2.1.7.1.0 from above, didn't you? Please create another copy for OID .1.3.6.1.2.1.7.4.0, using the
description udpOutDatagrams.
Name
The Title of the Data Source will be derived from this. If Use Per-Data Source Value (Ignore this Value) is unchecked, the
string entered here is taken literally. Checking this box allows for target-specific values by substituting cacti's built-in variables
(|host_description| will be substituted by the description of the host this Data Template is associated with.)
where
Name
The Name for this Graph Template. Find this in the Graph Templates List
Title
The Title to be displayed on Graphs generated from this Template. There are some cacti-specific variables allowed. One of
this is |host_description|, which takes the hosts description from the Devices definition to generate the Title
Vertical Label
You may specify a string as a label for the y-axis of the graph
Now let's add some Graph Template Items. They specify which Data Sources, defined by some Data Template, should be
displayed on the Graph. Please click Add as shown on the last image:
Data Source
Select the needed Data Source from the Dropdown List: udpInDatagrams
Color
Find a nice color from the Dropdown for this item
Graph Item Type
Graph Items may be of type AREA or of LINEx, where x is the thickness of the line
Text Format
This string is printed as part of the Legend
I always appreciate some nice legends to see the numbers for e.g. maximum, average and last value. There's a shortcut for this:
Now let's turn to the second data source. This works very much the same way. So see all four images in sequence:
Please scroll down to the bottom of the page and Save your whole work.
Now, you may add this new Graph Template to any host that responds to those udp OIDs. But in this case, please wait a moment.
Let's first proceed to the Host Templates and use this new Graph Template for our first own Host Template.
Now you'll find two sections added. First, let's deal with Associated Graph Templates. The Add Graph template select box holds
all defined Graph Templates. Select the one we've just created
Using Templates
and Save. Then scroll down to see the Associated Graph Template and the Associated Data Query:
Now select Create Graphs for this Host from the top of the page. You'll be presented with a new page to select the wanted Graphs:
Select our new UDP thingy and some Traffic Graph Template for an interesting interface and Create. The result is displayed with
the next page:
You'll have to wait for two polling cycles for data to be filled.
and Add:
and Create:
Please notice that not only does a creation message appear; the Graph Template just selected is grayed out and its checkbox has
disappeared. This is to make clear which Graph Templates were already chosen, to prevent unwanted duplication.
Please select Edit this Host again, to see what changed in the Associated Graph Templates section:
The Status of this Graph Template has changed to Is Being Graphed. You may Edit to jump to Graph Management and see your
graph:
Advanced Magic
This chapter shows how to extend cacti's built-in capabilities with scripts and queries. Some of them are of course part of the
standard cacti distribution files.
Scripts and Queries extend cacti's capabilities beyond SNMP. They allow for data retrieval using custom-made code. This is not
even restricted to certain programming languages; you'll find php, perl, shell/batch and more.
These scripts and queries are executed locally by cacti's poller. But they may retrieve data from remote hosts by different protocols,
e.g.
Data Input Methods for querying single or multiple, but non-indexed readings
temperature, humidity, wind, ...
cpu, memory usage
number of users logged in
IP readings like ipInReceives (number of ip packets received per host)
TCP readings like tcpActiveOpens (number of tcp open sockets)
UDP readings like udpInDatagrams (number of UDP packets received)
...
Data Queries for indexed readings
network interfaces with e.g. traffic, errors, discards
other SNMP Tables, e.g. hrStorageTable for disk usage
you may even create Data Queries as scripts e.g. for querying a name server (index = domain) for requests per domain
By using the Exporting and Importing Facilities, it is possible to share your results with others.
Common Tasks
In principle, it is possible to divide the following tasks into three different parts:
Code:
#!/usr/bin/perl
# ping the host given as the first argument once,
# keeping the line that contains the round-trip time
$ping = `ping -c 1 $ARGV[0] | grep icmp_seq`;
# extract the numeric value between "time=" and the unit
$ping =~ m/(.*time=)(.*) (ms|usec)/;
print $2;
To define this script as a Data Input Method to cacti, please go to Data Input Methods and click Add. You should see:
Please fill in a Name, select Script/Command as the Input Type and provide the command that should be used to retrieve the data. You
may use <path_cacti> as a symbolic name for the path to your cacti installation. Those commands will be executed from crontab, so pay attention to
providing the full path to binaries if required (e.g. /usr/bin/perl instead of perl). Enter all Input Parameters in <> brackets. Click create
to see:
Now let's define the Input Fields. Click Add as given above to see:
The DropDown Field [Input] contains one single value only. This is taken from the Input String above. Fill Friendly Name to
serve your needs. The Special Type Code allows you to provide parameters from the current Device to be queried. In this case, the
hostname will be taken from the current device.
Click create to see:
At last, define the Output Fields. Again, click Add as described above:
Provide a short Field [Output] name and a more meaningful Friendly Name. As you will want to save this data, select Update
RRD File. Create to see:
Fill in the Data Templates Name with a reasonable text. This name will be used to find this Template among others. Then, please fill
in the Data Source Name. This is the name given to the host-specific Data Source. The variable |host_description| is taken from the
actual Device. This is to distinguish data sources for different devices. The Data Input Method is a DropDown containing all
known scripts and the like. Select the Data Input Method you just created. The Associated RRA's is filled by default. At the moment
there's no need to change this. The lower part of the screen looks like:
The Internal Data Source Name may be defined as you wish. There's no need to use the same name as the Output Field of the Data
Input Method, but it may look nicer.
Click create to see:
Notice the new DropDown Output Field. As there is only one Output Field defined by our Data Input Method, you'll see only this.
Here's how to connect the Data Source Name (used in the rrd file) to the Output Field of the Script. Click Save and you're done.
Fill in Name and Title. The variable |host_description| will again be filled from the Device's definition when generating the Graph.
Keep the rest as is and Create. See:
Now click Add to select the first item to be shown on the Graphs:
Select the correct Data Source from the DropDown, fill in a color of your liking and select AREA as a Graph Item Type. You
want to fill in a Text Format that will be shown underneath the Graph as a legend. Again, Create:
Notice that not only was an entry made under Graph Template Items, but under Graph Item Inputs as well. Don't bother with
that now. Let's fill in some more nice legends, see:
Notice that the Data Source is filled in automagically. Select LEGEND as Graph Item Type (it is not really a Graph Item Type in
rrdtool-speak, but a nice time-saver), and click Create to see:
Wow! Three items filled with one action! You may want to define a Vertical Label at the very bottom of the screen and Save.
Select your newly created Graph template from the Add Graph Template DropDown. Click Add to see:
The Template is added and shown as Not Being Graphed. On the top of the page you'll find the Create Graphs for this Host link.
Click this to see:
Check the box that belongs to the new template and Create. See the results:
create the needed Graph Description from the Graph Template. As you may notice from the success message, this Graph takes
the host's name: router - Test ping (router is the hostname in this example).
create the needed Data Sources Description from the Data Template. Again, you will find the host's name substituted for
|host_description|
create the needed rrd file with definitions from the Data Template. The name of this file is derived from the Host and the Data
Template in conjunction with an auto-incrementing number.
create an entry in the poller_table to instruct cacti to gather data on each polling cycle
You'll have to wait at least for two polling cycles to find data in the Graph. Find your Graph by going to Graph Management,
filtering for your host and selecting the appropriate Graph (there are other methods as well). This may look like:
More Scripts
Scripts are not limited to a single input and output parameter; they may take many of each. As an example, let's create a script
version of the UDP Packets In/Out setup. The solution using the SNMP - Generic OID Template was already shown in Why
Templates?
Code:
#!/usr/bin/perl -w
# --------------------------------------------------
# ARGV[0] = <hostname>  required
# ARGV[1] = <snmp port> required
# ARGV[2] = <community> required
# ARGV[3] = <version>   required
# --------------------------------------------------
use Net::SNMP;

# read cli arguments
$in_hostname  = $ARGV[0];
$in_port      = $ARGV[1];
$in_community = $ARGV[2];
$in_version   = $ARGV[3];

# usage notes
if (
( ! defined $in_hostname ) ||
( ! defined $in_port ) ||
( ! defined $in_community ) ||
( ! defined $in_version )
) {
print "usage:\n\n
$0 <host> <port> <community> <version>\n\n";
exit;
}

# OIDs to be queried (scalar instances, hence the trailing .0)
$udpInDatagrams  = ".1.3.6.1.2.1.7.1.0";
$udpOutDatagrams = ".1.3.6.1.2.1.7.4.0";

# open an SNMP session to the target
($session, $error) = Net::SNMP->session(
-hostname  => $in_hostname,
-port      => $in_port,
-community => $in_community,
-version   => $in_version,
);

# on error: exit
if (!defined($session)) {
printf("ERROR: %s.\n", $error);
exit 1;
}

# fetch both readings with a single request
$result = $session->get_request(
-varbindlist => [$udpInDatagrams, $udpOutDatagrams],
);

# on error: exit
if (!defined($result)) {
printf("ERROR: %s.\n", $session->error);
$session->close;
exit 1;
}

# print results
printf("udpInDatagrams:%s udpOutDatagrams:%s", # <<< cacti requires this format!
$result->{$udpInDatagrams},
$result->{$udpOutDatagrams},
);
$session->close;
$session->close;
Output:
Where "public" may be replaced by your community string. Of course, the numbers will vary.
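cacti expects multi-value script output in exactly this name:value name:value form. A rough python sketch of how such a line can be split (an illustration of the format, not cacti's actual parser):

```python
def parse_cacti_output(line):
    # split on whitespace, then on the first colon of each "name:value" pair
    result = {}
    for pair in line.strip().split():
        name, _, value = pair.partition(":")
        result[name] = value
    return result

print(parse_cacti_output("udpInDatagrams:1234 udpOutDatagrams:5678"))
```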
Enter the name of the new Data Input Method, select Script/Command and type in the command to call the script. Please use the
full path to the command interpreter. Instead of entering the specific parameters, type <symbolic variable name> for each
parameter the script needs. Save:
Now Add each of the input parameters in the Input Fields section, one after the other. All of them are listed in sequence, starting
with <host>:
<port>
<community>
<version>
We've used some of cacti's builtin parameters. When applied to a host, those variables will be replaced by the host's actual settings.
Then, this command will be stored in the poller_command table. Now Save your work to see
After having entered all Input Fields, let's now turn to the Output Fields. Add the first one, udpInDatagrams:
Now udpOutDatagrams:
Be careful to avoid typos. The strings entered here must exactly match those printed by the script. Double check the Output Fields!
Now, results should be like
Data queries are not a replacement for data input methods in Cacti. Instead they provide an easy way to query, or list data based
upon an index, making the data easier to graph. The most common use of a data query within Cacti is to retrieve a list of network
interfaces via SNMP. While listing network interfaces is a common use for data queries, they also have other uses such as listing
partitions, processors, or even cards in a router.
One requirement for any data query in Cacti, is that it has some unique value that defines each row in the list. This concept follows
that of a 'primary key' in SQL, and makes sure that each row in the list can be uniquely referenced. Examples of these index values
are 'ifIndex' for SNMP network interfaces or the device name for partitions.
There are two types of data queries that you will see referred to throughout Cacti. They are script queries and SNMP queries. Script
and SNMP queries are virtually identical in their functionality and only differ in how they obtain their information. A script query
will call an external command or script and an SNMP query will make an SNMP call to retrieve a list of data.
All data queries have two parts, the XML file and the definition within Cacti. An XML file must be created for each query, that
defines where each piece of information is and how to retrieve it. This could be thought of as the actual query. The second part is a
definition within Cacti, which tells Cacti where to find the XML file and associates the data query with one or more graph templates.
Code:
<direction>input</direction>
At last, you will have to define those fields that will be queried for the readings, e.g. ifInOctets, ifOutOctets, ifInErrors, ... The XML
file knows them as
Code:
<direction>output</direction>
Let's have an example: the standard Interface MIB and the corresponding part of the
/resources/snmp_queries/interfaces.xml file are displayed in the following table:
and see the corresponding table structure when defining New Graphs for that device (my laptop):
Index: IF-MIB::ifIndex
Status: IF-MIB::ifOperStatus
Description: IF-MIB::ifDescr
Type: IF-MIB::ifType
Speed: IF-MIB::ifSpeed
All of those are input parameters. They serve as descriptive information for each row, to help you identify the proper interface to use.
The output parameters can be compared to the output parameters of a script (see the ping.pl script above). These are the readings from
the device. By selecting the appropriate row (the greyed-out one had been selected by me), you tell cacti to retrieve data from the
interface defined by this index. But how does cacti know which output parameters it shall retrieve? See the Select a Graph type
DropDown. It specifies a Graph Template defined for this Data Query. The Graph Template in turn references a Data Template
which incorporates the needed output parameters as Data Sources. This works quite the same way as described for a Data Input
Method.
To sum up: the SNMP XML file serves as a replacement for the Data Input Method described above, to be used on indexed
values. It tells cacti what data it should retrieve (direction: output). To help you identify the relevant indexes, the XML defines
descriptive parameters (direction: input) to be displayed in the selection table.
A walkthrough for this is given now. It is based on the already supplied interfaces.xml XML file.
Here, we are using the already existing interface.xml file. Select Get SNMP Data (Indexed) as Data Input Method. Create to see:
See that cacti found the XML file. Don't bother with the Associated Graph Templates at the moment. The success message does not
include a check of the XML file's content. Now let's proceed to the next definitions.
This is an exact copy of the definitions made above, so I will not repeat everything here. Data Input Method must be selected as
Get SNMP Data (Indexed). As this data source is a COUNTER type, select this as the Data Source Type. But after saving the new
Data Source definition, you may want to define a second Data Source to the same Data Template. To do so, select New from the
Data Source Item heading to see:
The name of the Data Source (ifOutOctets) is not shown in the Tab until you save your work. By default, Maximum Value is set
to 100. This is way too low for an interface. All readings above this value will be stored as NaN by rrdtool. To avoid this, set it to 0 (no
clipping) or to a reasonable value (e.g. the interface speed). Don't forget to specify COUNTER! You will have noticed that the name of
the data source does not match the Name in the interface.xml. Don't worry, the solution to this is given later on.
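The clipping behaviour described above can be sketched as follows (a simplified model of the maximum handling, with 0 treated as "no clipping" as in the text):

```python
def store(value, maximum):
    # readings above the data source maximum are stored as NaN ("unknown");
    # a maximum of 0 means no clipping at all
    if maximum == 0 or value <= maximum:
        return float(value)
    return float("nan")

print(store(99, 100))   # within the maximum: stored as-is
print(store(101, 100))  # above the maximum: stored as NaN
```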
This is specific to indexed SNMP Queries. You will have to check the last three items to make indexing work. All other items should
be left alone; their values will be taken from the appropriate device definitions. Now Save and you're done with this step.
Now you want to tell cacti how to present the data retrieved from the SNMP Query. Again, this is done by merely copying the
procedure described above. When selecting the Data Source, pay attention to select from the just-defined data sources.
The next step is new and applies only to Data Queries:
Now it's time to re-visit our Data Query. Remember the Associated Graph Template we've left alone in the very first step? Now it
will get a meaning. Go to Data Queries and select our new one. Then Add a new Associated Graph Template:
Select the correct Data Source, and be sure to check the checkboxes in each row. Apply a name to the Data Template and a
title to the Graph Template. Use cacti variables as defined in Chapter 15. Variables - Data Query Fields. You may use all XML
fields defined as input; in this example the fields and of the interface.xml were used. Add those Suggested Values. They will be
used to distinguish Data Sources and Graphs for the same device; without them they would all carry the same name. At last: Save:
Click Add and then Create Graphs for this Host to see:
Now select the wanted interface and Create to generate the Traffic Graph. As long as there's only one Associated Graph Template
for that Data Query, there will be no Select a Graph Type DropDown.
Code:
Given this, the first step will be the definition of an XML file based on those OIDs. So change to your
<path_cacti>/resources/snmp_queries directory and create a file named hrStorageTable.xml. You may of course choose your own
name, but to me it seems appropriate to take the name of the SNMP table itself. Before doing so, it is necessary to identify the
Index of that table. Without looking at the MIB file, simply perform
Code:
<name>Index</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
</hrStorageIndex>
</fields>
</interface>
name
Short Name; choose your own if you want
description
Long Name
index_order_type
numeric instead of alphabetic sorting
oid_index
the index of the table
There are more header elements, but for sake of simplification, we'll stick to that for now.
Let's turn to the fields. They correspond to the columns of the snmptable. For debugging purposes it is recommended to start with the
Index field first. This will keep the XML as tiny as possible. The fields section contains one or more fields, each beginning with an
opening tag and ending with the matching closing tag. It is recommended, but not necessary, to take the textual representation of the OID or an abbreviation of it as the tag name.
name
Short Name
method
walk or get (representing snmpwalk or snmpget to fetch the values)
source
value = take the value of that OID as the requested value. Sounds ugly, but there are more options that we won't need for the
purpose of this Howto
direction
input (for values that may be printed as COMMENTs or the like)
output (for values that shall be graphed, e.g. COUNTERs or GAUGEs)
oid
the real OID as numeric representation
Now save this file and let's turn to cacti to implement it. First, go to Data Queries to see
snmptable-dq-01
snmptable-dq-02
Fill in Short and Long Names as you wish. Enter the file name of the XML file and don't forget to choose Get SNMP Data
(indexed). Create to see
snmptable-dq-03
It has now Successfully located XML file. But this does not mean that there are no errors, so let's go on. Turn to the
Device you want to query and add the new Data Query as shown:
snmptable-dev-01
Index Count Changed was chosen on purpose to tell cacti to re-index not only on reboot but each time the Index Count (e.g. the
number of partitions) changes. When done, see the results as
snmptable-dev-02
You'll notice that on my laptop there are 11 indices = 11 partitions. So the XML has worked so far! To make this clear, select
Verbose Query to see:
snmptable-dev-03
hrStorageType
hrStorageDescr
hrStorageAllocationUnits
I like to take the XML field names from the snmptable output, but this is not a must.
Code:
<interface>
<name>Get hrStoragedTable Information</name>
<description>Get SNMP based Partition Information out of hrStorageTable</description>
<index_order_type>numeric</index_order_type>
<oid_index>.1.3.6.1.2.1.25.2.3.1.1</oid_index>
<fields>
<hrStorageIndex>
<name>Index</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
</hrStorageIndex>
<hrStorageType>
<name>Type</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.2</oid>
</hrStorageType>
<hrStorageDescr>
<name>Description</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.3</oid>
</hrStorageDescr>
<hrStorageAllocationUnits>
<name>Allocation Units (Bytes)</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
</hrStorageAllocationUnits>
</fields>
</interface>
The <name></name> information will later show up as a column heading. Don't forget to provide the correct base OIDs. Remember
that the Index will always be appended to those OIDs; e.g. the first Description will be fetched from OID = .1.3.6.1.2.1.25.2.3.1.3.1
(that is, base OID = .1.3.6.1.2.1.25.2.3.1.3 together with the appended index .1 forms the complete OID .1.3.6.1.2.1.25.2.3.1.3.1).
Please notice that all fields that will yield descriptive columns only take <direction>input</direction>
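The OID arithmetic from the paragraph above, as a tiny sketch:

```python
def full_oid(base_oid, index):
    # the row index is appended to the column's base OID
    return "%s.%s" % (base_oid, index)

# first Description of hrStorageTable, as in the example above
print(full_oid(".1.3.6.1.2.1.25.2.3.1.3", 1))
```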
If you have completed your work, turn to the cacti web interface and select your host from the Devices list to see:
snmptable-dev-10
Select the little green circle next to our SNMP XML to update your last changes. Then you'll see something like:
snmptable-dev-11
snmptable-dev-12
snmptable-dev-13
You're not supposed to really create graphs at this moment, because the XML is not yet complete. And you'll notice that the second
column does not present very useful information, so it may be omitted in later steps.
Code:
<interface>
<name>Get hrStoragedTable Information</name>
<description>Get SNMP based Partition Information out of hrStorageTable</description>
<index_order_type>numeric</index_order_type>
<oid_index>.1.3.6.1.2.1.25.2.3.1.1</oid_index>
<fields>
<hrStorageIndex>
<name>Index</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
</hrStorageIndex>
<hrStorageDescr>
<name>Description</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.3</oid>
</hrStorageDescr>
<hrStorageAllocationUnits>
<name>Allocation Units (Bytes)</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
</hrStorageAllocationUnits>
<hrStorageSize>
<name>Total Size (Units)</name>
<method>walk</method>
<source>value</source>
<direction>output</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.5</oid>
</hrStorageSize>
<hrStorageUsed>
<name>Used Space (Units)</name>
<method>walk</method>
<source>value</source>
<direction>output</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.6</oid>
</hrStorageUsed>
</fields>
</interface>
Now we may proceed as described above: pressing the green circle runs the XML definitions against the host and updates the
rows/columns. You will notice the "missing" second column only when Create Graphs for this Host is selected.
Don't forget to set <direction>output</direction> for all variables/fields that should be stored in rrd files and be graphed! This is
the mistake that occurs most often.
operation, please see Common Tasks. Please go to Data Templates and Add:
snmptable-dt-01
Define the Name of the Data Template. When defining the Name of the Data Source, do not forget to check the Use Per-Data
Source Value (Ignore this Value) checkbox. This will come in useful later. Data Input Method will read Get SNMP Data
(Indexed). Select Associated RRAs as usual (don't bother with my settings):
snmptable-dt-02
Now to the Data Source Items. I like giving them the names of the MIB OIDs, see:
snmptable-dt-03
snmptable-dt-04
Please pay attention to setting the Maximum Value to 0 (zero). Otherwise, all values exceeding the pre-defined maximum of 100 would be
stored as NaN. Now scroll down to the bottom of the page and check Index Type, Index Value and Output Type Id
snmptable-dt-05
snmptable-gt-01
Fill in the header names and don't forget to check the Use Per-Graph Value (Ignore this Value) for the Graph Template Title:
snmptable-gt-02
and Create.
snmptable-gt-03
snmptable-gt-04
snmptable-gt-05
snmptable-gt-06
snmptable-dq-10
Now Add the Associated Graph Templates and fill in a meaningful name. Select the newly created Graph Template to see:
snmptable-dq-11
Create:
snmptable-dq-12
Select the correct Data Sources and check the boxes on the right. Save. Now fill in some useful Suggested Values, first for the
Data Template:
snmptable-dq-13
snmptable-dq-14
snmptable-dq-15
snmptable-dev-20
snmptable-dev-21
snmptable-ds-01
As you can see, the Suggested Values of the Data Query defined the Name of the Data Template. So let's go to Graph
Management:
snmptable-gm-01
to see the title defined by the Suggested Values. When turning to the Graphs, you may see something like
snmptable-graph-01
This might be the end of the show. While it should be enough to define some "easy" SNMP XML based Data Queries, there are
some tricks and hints left to explain.
As you may have noticed, the quantities defined by this example are counted in Units, not Bytes. This is somewhat inconvenient but
may be changed. Let's wait for the next Chapter ...
Code:
<hrStorageAllocationUnits>
<name>Allocation Units (Bytes)</name>
<method>walk</method>
<source>value</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
</hrStorageAllocationUnits>
by
Code:
<hrStorageAllocationUnits>
<name>Allocation Units (Bytes)</name>
<method>walk</method>
<source>VALUE/REGEXP:([0-9]*) Bytes</source>
<direction>input</direction>
<oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
</hrStorageAllocationUnits>
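The VALUE/REGEXP source keeps the part matched by the capture group of the pattern. A python sketch of that extraction (a rough model for illustration, not cacti's actual implementation):

```python
import re

def value_regexp(pattern, raw):
    # keep the first capture group of the pattern; fall back to
    # the raw value when the pattern does not match
    m = re.search(pattern, raw)
    return m.group(1) if m else raw

# strips the trailing "Bytes" string, as in the XML snippet above
print(value_regexp(r"([0-9]*) Bytes", "4096 Bytes"))
```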
To prove this, go to your device and again Verbose Query our Data Query to see:
snmptable-dev-30
Now select Create Graphs for this Host and notice the change in the column Allocation Units (Bytes). The string "Bytes" is
gone:
snmptable-dev-31
snmptable-cdef-01
Notice that with recent releases of cacti, it is possible to use |query_*| values within CDEFs. Finally, go to Graph Templates and
use this CDEF with all Graph Items:
snmptable-gt-10
Change the Base Value to 1024 for Bytes -> kBytes and the y-axis description to Bytes:
snmptable-gt-11
snmptable-graph-10
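The CDEF's job is plain arithmetic: the raw readings count allocation units, and multiplying them by the |query_hrStorageAllocationUnits| value turns them into bytes. As a sketch:

```python
def bytes_from_units(units, allocation_units):
    # hrStorageUsed / hrStorageSize count allocation units, not bytes;
    # multiplying by hrStorageAllocationUnits yields bytes
    return units * allocation_units

# e.g. 100 used units on a partition with 4096-byte allocation units
print(bytes_from_units(100, 4096))
```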
The example uses php. Why php? First, it's easier to copy stuff from already existing php scripts. Second, it is possible to use
cacti functions. It should be possible to imagine how this works with other programming languages. Strictly speaking, I'm not that
much of a php expert, so be patient with me.
Please pay attention: this HowTo will not explain how to write a Script Server Data Query (yes, there is such a thing!). It would
not introduce that many changes, but it will be left to some other HowTo.
Personally, my primary goal was to use an example that all users should be able to copy, to execute each and every step on their own.
Unfortunately, there seems to be no example that is common enough and interesting at the same time. So I'm sorry to announce that
this HowTo will show "Interface Traffic Data Gathering". Yes, I know, this is nothing new. And surely, it will not be as fast as pure
SNMP. So, to my shame, I suppose that this will never make it into any production environment. But, again, this is not the primary
goal.
Before starting the work, I feel encouraged to point out a drawback of this approach. Cacti will start a php instance each time it has
to fetch a value from the target device. This is not that fast, obviously. And it will not profit from the performance boost when
switching over from cmd.php to cactid, as even cactid will need to start php. And that's exactly where the thingy called
Script Server Data Query comes in. But let's leave this for the next main chapter.
It will show interface indices only for the given target host. The script takes two parameters as input, the hostname of the target and
the string index. You have to implement the index method, as OO programmers would say. In this case, there's an "if" clause to
process index requests.
Code:
<?php
# -------------------------------------------------------------------------
# main code starts here
#
# snmp walk will not be provided with snmp_user and snmp_password
# so this will not work for SNMP V3 hosts
#
# NOTE: cacti's snmp helpers (cacti_snmp_walk, ...) must be made available
# here, e.g. via an include of cacti's snmp library; the exact path depends
# on your installation
# -------------------------------------------------------------------------
# the following assignments make the fragment self-contained; the builtin
# "magic strings" will be replaced by cli parameters in a later step
$hostname       = $_SERVER["argv"][1];
$cmd            = $_SERVER["argv"][2];
$snmp_community = "public";
$snmp_version   = "1";
$snmp_user      = "";
$snmp_pw        = "";
$snmp_port      = 161;
$snmp_timeout   = 500;
$snmp_retries   = 3;
# OIDs this script knows about; "index" points at IF-MIB::ifIndex
$oids = array("index" => ".1.3.6.1.2.1.2.2.1.1");
# -------------------------------------------------------------------------
# script MUST respond to index queries
# the command for this is defined within the XML file as
# <arg_index>index</arg_index>
# you may replace the string "index" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> index
# will list all indices of the target values
# e.g. in case of interfaces
# it has to respond with the list of interface indices
# -------------------------------------------------------------------------
if ($cmd == "index") {
# retrieve all indices from target
$return_arr = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids["index"],
$snmp_version, $snmp_user, $snmp_pw, $snmp_port,
$snmp_timeout, $snmp_retries));
# print one index per line, as cacti expects
for ($i=0;($i<sizeof($return_arr));$i++) {
print $return_arr[$i] . "\n";
}
# -------------------------------------------------------------------------
# -------------------------------------------------------------------------
} else {
print "Invalid use of script query, required parameters:\n\n";
print " <hostname> <cmd>\n";
}
function reindex($arr) {
$return_arr = array();
for ($i=0;($i<sizeof($arr));$i++) {
$return_arr[$i] = $arr[$i]["value"];
}
return $return_arr;
}
?>
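The index handling of this script can be condensed into a language-neutral sketch (walk_indices is a hypothetical stand-in for the cacti_snmp_walk call; the returned indices are made up for illustration):

```python
def walk_indices(hostname):
    # hypothetical stand-in for an SNMP walk on the index OID;
    # a real script would query the device here
    return ["1", "2", "4"]

def handle(hostname, cmd):
    if cmd == "index":
        # cacti expects one index per line on stdout
        return "\n".join(walk_indices(hostname))
    return "Invalid use of script query, required parameters:\n\n <hostname> <cmd>"

print(handle("router", "index"))
```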
Discussion: function reindex
You may wonder why this function drops in. Well, let's have a look at cacti_snmp_walk. This function is part of cacti itself and
eases the use of SNMP. That's why I call it here. But unfortunately, its output looks like
Code:
Array
(
    [0] => Array
        (
            [oid] => 1.3.6.1.2.1.2.2.1.1.1
            [value] => 1
        )
    [1] => Array
        (
            [oid] => 1.3.6.1.2.1.2.2.1.1.4
            [value] => 4
        )
)
The values of interest are picked out by $return_arr[$i] = $arr[$i]["value"];. The reindex function collects them all.
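A python sketch of what reindex does with that structure (sample data taken from the output above):

```python
def reindex(arr):
    # keep only the "value" member of each walk result row
    return [row["value"] for row in arr]

walk_result = [
    {"oid": "1.3.6.1.2.1.2.2.1.1.1", "value": "1"},
    {"oid": "1.3.6.1.2.1.2.2.1.1.4", "value": "4"},
]
print(reindex(walk_result))
```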
Code:
<interface>
<name>Get Interface Traffic Information</name>
<script_path>|path_php_binary| -q |path_cacti|/scripts/query_interface_traffic.php</script_path>
<arg_prepend>|host_hostname|</arg_prepend>
<arg_index>index</arg_index>
<fields>
<ifIndex>
<name>Index</name>
<direction>input</direction>
<query_name>index</query_name>
</ifIndex>
</fields>
</interface>
name:
Short Name; choose your own if you want
script_path:
The whole command used to execute the script from the cli. |path_php_binary| is a cacti builtin variable for
/the/full/path/to/php. |path_cacti| in turn gives the path of the current cacti installation directory.
arg_prepend:
All arguments passed to the script go here. There are some builtin variables, again. |host_hostname| represents the hostname
of the device this query will be associated to.
arg_index:
The string given here will be passed just after all <arg_prepend> to the script for indexing requests. Up to now, this
is the only method our script will answer to.
fields:
All fields will be defined in this section. Up to now, only the index field is defined
name:
The name of this very field
direction:
input defines all fields that serve as descriptive information for a specific table index. These values will not be
graphed, but may be printed in e.g. graph titles by means of |query_<name>|
output defines all fields that will yield a number that should be stored in some rrd file
query_name:
Name of this field when performing a query or a get request (will be shown later, don't worry now).
Now save this file and let's turn to cacti to implement it. First, go to Data Queries to see
Fill in Short and Long Names as you wish. Enter the file name of the XML file and don't forget to choose Get Script Data
(indexed). Create to see
It has now Successfully located XML file. But this does not mean that there are no errors, so let's go on. Turn to the
Device you want to query and add the new Data Query as shown:
Index Count Changed was chosen on purpose to tell cacti to re-index not only on reboot but each time the Index Count (e.g. the
number of interfaces) changes. When done, see the results as
Code:
<?php
The next step removes all the builtin "magic strings" and replaces them with parameters. We'll have to change the XML template for
that (see: later on). cacti supports "snmp_retries" since version 0.8.6i. This is a global config option; access to those is available
using "read_config_option".
Code:
$snmp_timeout = $_SERVER["argv"][5];
$snmp_user = $_SERVER["argv"][6];
$snmp_pw = $_SERVER["argv"][7];
$cmd = $_SERVER["argv"][8];
if (isset($_SERVER["argv"][9])) { $query_field = $_SERVER["argv"][9]; };
if (isset($_SERVER["argv"][10])) { $query_index = $_SERVER["argv"][10]; };
Code:
# -------------------------------------------------------------------------
# script MUST respond to index queries
# the command for this is defined within the XML file as
# <arg_index>index</arg_index>
# you may replace the string "index" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> index
# will list all indices of the target values
# e.g. in case of interfaces
# it has to respond with the list of interface indices
# -------------------------------------------------------------------------
if ($cmd == "index") {
# retrieve all indices from target
$return_arr = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids["index"],
$snmp_version, $snmp_user, $snmp_pw, $snmp_port,
$snmp_timeout, $snmp_retries));
Code:
# -------------------------------------------------------------------------
# script MUST respond to query requests
# the command for this is defined within the XML file as
# <arg_query>query</arg_query>
# you may replace the string "query" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> query <function>
# where <function> is a parameter that tells this script,
# which target value should be retrieved
# e.g. in case of interfaces, <function> = ifdescription
# it has to respond with the list of
# interface indices along with the description of the interface
# -------------------------------------------------------------------------
}elseif ($cmd == "query") {
	# walk both the index OID and the requested field,
	# then print "index<delimiter>value" pairs
	$arr_index = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids["index"],
	             $snmp_version, $snmp_user, $snmp_pw, $snmp_port,
	             $snmp_timeout, $snmp_retries));
	$arr = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids[$query_field],
	       $snmp_version, $snmp_user, $snmp_pw, $snmp_port,
	       $snmp_timeout, $snmp_retries));
	for ($i=0;($i<sizeof($arr_index));$i++) {
		print $arr_index[$i] . $xml_delimiter
		. $arr[$i] . "\n";
	}
Last option is the get function
Code:
# -------------------------------------------------------------------------
# script MUST respond to get requests
# the command for this is defined within the XML file as
# <arg_get>get</arg_get>
# you may replace the string "get" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> get <function> <index>
# where <function> is a parameter that tells this script,
# which target value should be retrieved
# and <index> is the index that should be queried
# e.g. in case of interfaces, <function> = ifdescription
# <index> = 1
# it has to respond with
# the description of the interface for interface #1
# -------------------------------------------------------------------------
}elseif ($cmd == "get") {
	# fetch a single value: the OID of the requested field with the index appended
	print (cacti_snmp_get($hostname, $snmp_community,
	      $oids[$query_field] . ".$query_index", $snmp_version, $snmp_user, $snmp_pw,
	      $snmp_port, $snmp_timeout) . "\n");
Code:
# -------------------------------------------------------------------------
# -------------------------------------------------------------------------
} else {
print "Invalid use of script query, required parameters:\n\n";
print " <hostname> <community> <version> <snmp_port> <timeout> <user> <pw> <cmd>\n";
}
function reindex($arr) {
$return_arr = array();
for ($i=0;($i<sizeof($arr));$i++) {
$return_arr[$i] = $arr[$i]["value"];
}
return $return_arr;
}
?>
You may want to copy all those fragments together and replace the basic script. Now, let's have a try using the command line. The
"index" option was already shown, but is repeated here
Output:
[me@gandalf scripts]$ php -q query_interface_traffic.php <target> <community> 1 161 500 "" "" index
1
2
3
4
Now, let's test the "query" option. The keyword "query" must be given along with the variable that should be queried. The script
will now scan all indices and report the contents of the given variable as follows:
Output:
[me@gandalf scripts]$ php -q query_interface_traffic.php <target> <community> 1 161 500 "" "" query iftype
1!ethernetCsmacd(6)
2!0
3!0
4!ethernetCsmacd(6)
The output reports the index, followed by the chosen delimiter. Then, the content of the requested variable is printed.
Last, the "get" option is shown. The keyword "get" is required, followed again by the variable (see above). The last needed option is the
index for which the "get" should be performed. Contrary to the "query" option, only one index is scanned. So the index number is
not required and will not be printed.
Output:
[me@gandalf scripts]$ php -q query_interface_traffic.php <target> <community> 1 161 500 "" "" get iftype 1
ethernetCsmacd(6)
Of course, we now will have to complete the XML file given in Chapter II. Find it at
<path_cacti>/resources/script_queries/ifTraffic.xml.
Code:
<interface>
<name>Get Interface Traffic Information</name>
<script_path>|path_php_binary| -q |path_cacti|/scripts/query_interface_traffic.php</script_path>
<arg_prepend>|host_hostname| |host_snmp_community| |host_snmp_version| |host_snmp_port| |host_snmp_timeout| "|host_snmp_username|" "|host_snmp_password|"</arg_prepend>
<arg_index>index</arg_index>
<arg_query>query</arg_query>
<arg_get>get</arg_get>
<output_delimeter>!</output_delimeter>
<index_order>ifIndex</index_order>
<index_order_type>numeric</index_order_type>
<index_title_format>|chosen_order_field|</index_title_format>
arg_prepend
some more parameters were added to provide all necessary values for the script. They are position-dependent. You may notice
the quotes I've added for host_snmp_username and host_snmp_password. If you're not using those SNMP V3 parameters,
they must be quoted, else the script would fail because two parameters would be missing. Unfortunately, I don't have any
SNMP V3 capable system, so I was not able to test this variant.
arg_query
The string passed to the query to perform query requests is given here. So you may modify it to your liking (in this case, the
script has to be modified accordingly).
arg_get
Same as above, but for get requests
output_delimiter
The delimiter used for query requests to separate index and value
index_order (optional)
Cacti will attempt to find the best field to index off of based on whether each row in the query is unique and non-null. If
specified, Cacti will perform this check on the fields listed here in the order specified. Only input fields can be specified and
multiple fields should be delimited with a comma.
index_order_type (optional)
For sorting purposes, specify whether the index is numeric or alphanumeric.
numeric: The indexes in this script query are to be sorted numerically (i.e. 1,2,3,10,20,31)
alphabetic: The indexes in this script query are to be sorted alphabetically (1,10,2,20,3,31).
index_title_format (optional)
Specify the title format to use when representing an index to the user. Any input field name can be used as a variable if
enclosed in pipes (|). The variable |chosen_order_field| will be substituted with the field chosen by Cacti to index off of (see
index_order above). Text constants are allowed as well
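For instance, a title format combining a text constant with input field names could look like the following sketch (verify the field names against your own <fields> section):

```xml
<index_title_format>Interface |ifIndex|: |ifdescription|</index_title_format>
```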
Code:
<fields>
<ifIndex>
<name>Index</name>
<direction>input</direction>
<query_name>index</query_name>
</ifIndex>
<ifstatus>
<name>Status</name>
<direction>input</direction>
<query_name>ifstatus</query_name>
</ifstatus>
<ifdescription>
<name>Description</name>
<direction>input</direction>
<query_name>ifdescription</query_name>
</ifdescription>
<ifname>
<name>Name</name>
<direction>input</direction>
<query_name>ifname</query_name>
</ifname>
<ifalias>
<name>Alias</name>
<direction>input</direction>
<query_name>ifalias</query_name>
</ifalias>
<iftype>
<name>Type</name>
<direction>input</direction>
<query_name>iftype</query_name>
</iftype>
<ifspeed>
<name>Speed</name>
<direction>input</direction>
<query_name>ifspeed</query_name>
</ifspeed>
<ifHWaddress>
<name>HWaddress</name>
<direction>input</direction>
<query_name>ifHWaddress</query_name>
</ifHWaddress>
<ifInOctets>
<name>InOctets</name>
<direction>output</direction>
<query_name>ifInOctets</query_name>
</ifInOctets>
<ifOutOctets>
<name>OutOctets</name>
<direction>output</direction>
<query_name>ifOutOctets</query_name>
</ifOutOctets>
</fields>
</interface>
Attention: The query_name strings must match the OID names exactly!
Please notice that all but the last two fields use direction input. All variables representing numeric values to be graphed must be
defined as direction output instead.
Output:
Read it carefully, and you'll notice that all XML fields were scanned and the output shown. All? No, not all. The direction output
fields are missing! But this is on purpose, as those won't make sense as header fields but will be written to rrd files.
script_query-data_template-add-01
and find:
script_query-data_template-add-02
fill in Data Template Name, Data Source Name, and, most important, select Data Input Method to read Get Script Data
(Indexed). Leave Associated RRAs as is.
When creating the data template and graph template, you SHOULD check the "Use Per Data Source Value" checkbox for name &
title.
When you first create graphs using the data query, it will use the "Suggested Values" to name the templates. But then if you ever edit
the templates and leave the "Use Per Data Source Value" unchecked, then saving will overwrite all the data source and graph
names. (comment: thanks to user goldburt)
script_query-data_template-add-03
enter the Internal Data Source Name. You may select this name freely. There's no need to match it to any of the XML field names.
As the OID is a COUNTER, the Data Source Type must be selected appropriately. Save.
script_query-data_template-add-04
script_query-data_template-add-05
Again, fill in the Data Source Name. Pay attention to set the maximum value to 0 to avoid clipping it off during updating of the rrd
file. COUNTER has to be set as done above.
Important! You have to select the marked Index fields! Now, save again and you're done.
Select the Data Source from our Data Template, take the color and select AREA, enter some text
Save and add the next graph item. Now, we're going to use the "LEGEND" timesaver again:
For the next step, it's necessary to remove the newline added with the last action. Please select the 4th item as follows
Now lets add the same data source again, but as a LINE1, MAXimum with a slightly changed color. Newline is checked this time
Phew. Now let's apply the same procedure for the Outgoing Traffic. Personally, I love outgoing traffic to be presented on the
negative y-axis. So we'll have to apply some CDEF magic to some items. Let's see
Please pay attention when adding the "LEGEND" stuff. No CDEF to be applied in this case (else, legends will show negative values)
and add a new LINE1, MAXimum, "Make Stack Negative" CDEF with some text and a newline
Hopefully you've got all those steps right; finally Save your work. Take a cup of coffee to get your brain free again, kiss your
wife, hug your children and/or pet your dog; sequence is arbitrary.
So, let's revisit the Data Query. Remember the lower part on Associated Graph Templates. Click Add
fill in a name for your choice and select the Graph Template that we have created in the last step.
Create to see
First, let's have a look at the upper half of the screen. The red box to the left shows the Internal Data Source Names taken
from the Data Template that is associated with the Graph Template we've just added.
The red box to the middle has a dropdown for each data source item. The dropdown list contains all output fields taken from the
XML file. In our case, there are only two of them.
The red box to the right must be checked on each line to make the association valid. Now, let's turn to the lower half of the screen,
denoted Suggested Values
The example shows |host_description| - Traffic - |query_ifdescription| entered both for name of the Data Template and title of the
Graph Template. Click Add, one by one
Notice the second title I've added here. If more than one entry is present, they are applied from top to bottom until a match is found.
Match means that all variables present are filled. Of course, you may add more than one variable taken from the XML file. But pay
attention: not all devices will fill all those variables. My router does, sigh.
You may use all input variables listed in the XML file. A <variable> may be listed as |query_<variable>|, e.g. for ifalias write
|query_ifalias| and so forth.
Click Save, and find the new Graph Template added to the list of Associated Graph Templates.
You may continue to add more Graph Templates; each of them may be related to another output field of the XML file. Find, as an
example, lots of graph templates associated to the standard Interface Statistics Data Query to get an idea what I'm talking about.
Don't worry about the first two entries; they are home-made.
to see
I've left the standard Interface Statistics in the screenshot. So you may compare both Queries. Our PHP Interface Traffic stuff has
two more header items, Name and Alias. But all data seen equals the standard SNMP Data Query; not that bad, eh?
and Create
You'll have to wait a bit, at least two polling cycles. Then, you may notice some data in your new graph. The next image shows both
our new graph (the first one) and a standard interface traffic graph. The latter one holds more data in this example, don't worry about
that.
Having a closer view, you may notice a difference in magnitude (y-axis). But please compare the units used. The first graph uses
Bytes, the latter one uses Bits. For comparison, it would be necessary to multiply the first one by 8. This may be done using a
CDEF Turn Bytes into Bits, applied to all items of the Graph Template. This task is left to you.
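Such a CDEF is a single RPN expression; without the sign flip used later for the negative y-axis trick, it boils down to:

```
cdef=CURRENT_DATA_SOURCE,8,*
```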
Summing Up
In chapter Common Tasks, I've shown some basic principles of operation. The graph shown should demonstrate the underlying
structure, but it was a bit incomplete. To be more precise, cacti's tasks sum up as follows:
You'll notice the association of Graph Templates to the Data Query as a last step. And a new theme has popped up, the Host
Template. This one is for grouping Graph Templates and Data Queries with Associated Graph Templates together as a single
Host Template. You may associate each Host to one of those Host Templates. This will ease the burden of associating endless lists of
Graph Templates to dozens of hosts.
Maintenance
Solution:
You have a connectivity problem with php and mysql. If you're running MySQL 4.1 or 5, then you will need to apply the old
password trick for user authentication to work with Cacti. Add the following to the [mysqld] sub-section:
Code:
#Use old password encryption method (needed for 4.0 and older clients).
old-passwords
(Courtesy "BSOD2600")
Run
Code:
to find the mysql sock file (MYSQL_SOCKET), e.g. at /var/lib/mysql/mysql.sock rather than /tmp/mysql.sock (which is the
default location for mysqld).
In this case, create a symlink from /var/lib/mysql/mysql.sock to /tmp/mysql.sock or edit /etc/my.cnf to solve this issue
(Courtesy "doctor_octagon")
Please have a look at your cacti log file. Usually, you'll find it at <path_cacti>/log/cacti.log. Else see Settings, Paths. Check
for this kind of error:
Code:
CACTID: Host[...] DS[....] WARNING: SNMP timeout detected [500 ms], ignoring host '........'
For "reasonable" timeouts, this may be related to a snmpbulkwalk issue. To change this, see Settings, Poller and lower the
value for The Maximum SNMP OID's Per SNMP Get Request. Start at a value of 1 and increase it again, if the poller starts
working. Some agents don't have the horsepower to deliver that many OID's at a time. Therefore, we can reduce the number
for those older/underpowered devices.
For scripts, run them as cactiuser from cli to check basic functionality. E.g. for a perl script named your-perl-script.pl with
parameters "p1 p2" under *nix this would look like:
Code:
su - cactiuser
/full/path/to/perl your-perl-script.pl p1 p2
... (check output)
For snmp, snmpget the _exact_ OID you're asking for, using same community string and snmp version as defined within cacti.
For an OID of .1.3.6.1.4.something, community string of "very-secret" and version 2 for target host "target-host" this would
look like
Code:
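A sketch of such a call, assuming the stock Net-SNMP snmpget syntax and the example values from the text above:

```shell
# hypothetical values taken from the text: community "very-secret", SNMP v2c, host "target-host"
snmpget -v 2c -c very-secret target-host .1.3.6.1.4.something
```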
First make sure that crontab always shows poller.php. This program will either call cmd.php, the PHP based poller _or_
cactid, the fast alternative written in C. Define the poller you're using at "Settings" -> "Poller". Cactid has to be installed
separately; it does not come with cacti by default.
Then, change "Settings -> Poller Logging Level" to DEBUG for _one_ polling cycle. You may rename this log as well to
avoid more stuff added to it with subsequent polling cycles.
Now, find the host/data source in question. The Host[<id>] is given numerically, the <id> being a specific number for that
host. Find this <id> from the Devices menu when editing the host: The url contains a string like &id=<id>.
Check, whether the output is as expected. If not, check your script (e.g. /full/path/to/perl). If ok, proceed to next step
This procedure may be replaced by running the poller manually for the failing host only. To do so, you need the <id>, again. If
you're using cmd.php, set the DEBUG logging level as defined above and run
Code:
If you're using cactid, you may override logging level when calling the poller:
Code:
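As a sketch (the cactid install path and the verbosity flag are assumptions; both pollers accept a first and a last host <id> to restrict the run to a single host):

```shell
# PHP poller, restricted to host <id> (DEBUG logging level set via the GUI as above)
php -q /var/www/html/cacti/cmd.php <id> <id>

# cactid, overriding the logging level on the command line
/usr/local/cactid/bin/cactid --verbosity=5 <id> <id>
```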
All output is printed to STDOUT in both cases. This procedure allows for repeated tests without waiting for the next polling
interval. And there's no need to manually search for the failing host between hundreds of lines of output.
In most cases, this step may be skipped. You may want to return to this step if the next one fails (e.g. no rrdtool update to be
found)
From debug log, please find the MySQL update statement for that host concerning table poller_output. On very rare
occasions, this will fail. So please copy that sql statement and paste it to a mysql session started from cli. This may as well be
done from some tool like phpmyadmin. Check the sql return code.
Code:
You should find exactly one update statement for each file.
RRD files should be created by the poller. If it does not create them, it will not fill them either. If it does, please check your
Poller Cache from Utilities and search for your target. Does the query show up here?
If rrd files were created e.g. with root ownership, a poller running as cactiuser will not be able to update those files
Code:
cd /var/www/html/cacti/rra
ls -l localhost*
-rw-r--r-- 1 root root 463824 May 31 12:40 localhost_load_1min_5.rrd
-rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_mem_buffers_3.rrd
-rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_mem_swap_4.rrd
-rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_proc_7.rrd
-rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_users_6.rrd
Code:
will help.
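Something along these lines will do, assuming the paths from the listing above:

```shell
# give the poller user ownership of the root-owned rrd file
chown cactiuser:cactiuser /var/www/html/cacti/rra/localhost_load_1min_5.rrd
```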
You're perhaps wondering about this step, if the former was ok. But due to a data source's MINIMUM and MAXIMUM
definitions, it is possible that valid updates for rrd files are suppressed, because MINIMUM was not reached or MAXIMUM
was exceeded.
Code:
and look at the last 10-20 lines. If you find NaN's there, perform
Code:
Code:
ds[loss].min = 0.0000000000e+00
ds[loss].max = 1.0000000000e+02
In this example, MINIMUM = 0 and MAXIMUM = 100. For a ds[...].type=GAUGE verify that e.g. the number returned by
the script does not exceed ds[...].max (same holds for MINIMUM, respectively).
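The checks above can be sketched with rrdtool's own commands (the file name is a placeholder for your own rrd file):

```shell
# dump the stored values; NaN's at the end indicate suppressed updates
rrdtool fetch /var/www/html/cacti/rra/some_ds_1.rrd AVERAGE | tail -20

# show the data source definitions, including min and max
rrdtool info /var/www/html/cacti/rra/some_ds_1.rrd | grep -E 'min|max'
```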
If you run into this, please do not only update the data source definition within the Data Template, but perform a
Code:
At this step, it is wise to check step and heartbeat of the rrd file as well. For standard 300 seconds polling intervals
(step=300), it is wise to set minimal_heartbeat to 600 seconds. If a single update is missing and the next one occurs in less
than 600 seconds from the last one, rrdtool will interpolate the missing update. Thus, gaps are "filled" automatically by
interpolation. Be aware of the fact, that this is no "real" data! Again, this must be done in the Data Template itself and by using
rrdtool tune for all existing rrd files of this type.
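A sketch of such a tune run (the file name and data source name "loss" are assumptions taken from the ds[loss] example above; U means unlimited, 600 is the heartbeat recommended above):

```shell
# lift the maximum and relax the heartbeat of an existing rrd file
rrdtool tune /var/www/html/cacti/rra/some_ds_1.rrd --maximum loss:U --heartbeat loss:600
```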
The last resort would be to check that the correct data sources are used. Goto Graph Management and select your Graph. Enable
DEBUG Mode to find the whole rrdtool graph statement. You should notice the DEF statements. They specify the rrd file
and data source to be used. You may check, that all of them are as wanted.
9. Miscellaneous
Up to current cacti 0.8.6h, table poller_output may increase beyond reasonable size.
This is commonly due to php.ini's default memory setting of 8MB. Change this to at least 64 MB.
To check this, please run the following sql from the mysql cli (or phpmyadmin or the like)
Code:
Code:
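The statements meant here are of the following kind (a sketch; the table name is from the text, use the TRUNCATE at your own risk):

```sql
-- how big did poller_output grow?
SELECT COUNT(*) FROM poller_output;

-- empty it, if it grew beyond reasonable size
TRUNCATE TABLE poller_output;
```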
As of current SVN code for upcoming cacti 0.9, I saw measures were taken on both issues (memory size, truncating
poller_output).
Most rpm installations will setup the crontab entry now. If you've followed the installation instructions to the letter (which you
should always do ;-) ), you may now have two pollers running. That's not a good thing, though. Most rpm installations will
setup cron in /etc/cron.d/cacti.
Now, please check all your crontabs, especially /etc/crontab and the crontabs of users root and cactiuser. Leave only one poller
entry among all of them. Personally, I've chosen /etc/cron.d/cacti to avoid problems when updating rpm's. Most often, you won't
remember this item when updating lots of rpm's, so I felt more secure to put it here. And I've made some slight modifications,
see
Code:
prompt> vi /etc/cron.d/cacti
*/5 * * * * cactiuser /usr/bin/php -q /var/www/html/cacti/poller.php > /var/local/log/poller.log 2>&1
This will produce a file /var/local/log/poller.log, which includes some additional information from each poller run, such as
rrdtool errors. It occupies only a few bytes and will be overwritten each time.
If you're using the crontab of user "cactiuser" instead, this will look like
Code:
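In a user crontab there is no user field, so the entry from above shrinks to (same paths assumed):

```
*/5 * * * * /usr/bin/php -q /var/www/html/cacti/poller.php > /var/local/log/poller.log 2>&1
```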
Pay attention to custom scripts. It is required that external commands called from there are in the $PATH of the cactiuser
running the poller. It is therefore recommended to provide the /full/path/to/external/command.
User "criggie" reported an issue with running smartctl. It was complaining "you are not root", so a quick chmod +s on the
script fixed that problem.
Secondly, the script was taking several seconds to run. So cacti was logging a "U" for unparseable in the debug output and
was recording NAN. So my fix there was to make the script run faster - it has to complete in less than one second, and the age
of my box made that hard.
Logrotate cacti.log
Requirements
By default, cacti uses the file <path_cacti>/log/cacti.log for logging purposes. There's no automatic cleanup of this file. So, without further
intervention, there's a good chance that this file reaches a file size limit of your filesystem. This will stop any further polling
process.
For *NIX type systems, logrotate is a widely known utility that solves exactly this problem. The following descriptions assumes
you've set up a standard logrotate environment.
The examples are based on a Fedora 6 environment. I assume that Red Hat type installations will work the same way. I hope, but am
not sure, that this howto is easily portable to Debian/Ubuntu and stuff and hopefully even to *BSD.
Code:
# logrotate cacti.log
/var/www/html/cacti/log/cacti.log {
# keep 7 versions online
rotate 7
# rotate each day
daily
# append a date extension to each rotated file
dateext
}
Descriptions are given inline. Copy those statements from above into /etc/logrotate.d/cacti. This is the recommended file for
application-specific logrotate files.
Test
logrotate configuration files are tested by running
Code:
Handling 1 logs
This is a dry run; no rotation is actually performed. Option -f forces log rotation even if the rotate criterion is not fulfilled. Option -d
issues debug output but will suppress any real log rotation. Verify by listing the log directory: nothing has changed at all!
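Assuming the configuration was stored as /etc/logrotate.d/cacti, the dry run and the forced rotation look like:

```shell
# debug/dry run: show what would be done, rotate nothing
logrotate -d /etc/logrotate.d/cacti

# force a real rotation even if the daily criterion is not met
logrotate -f /etc/logrotate.d/cacti
```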
Code:
Code:
ls -l /var/www/html/cacti/log
-rw-r--r-- 1 cactiuser cactiuser 0 4. Okt 21:35 cacti.log
-rw-r--r-- 1 cactiuser cactiuser 228735 4. Okt 21:35 cacti.log-20071004
Of course, the date extension on the file will change accordingly. Please notice that a new cacti.log file was created. If you issue the
command again, nothing will happen:
Code:
Handling 1 logs
If you want to see all those 7 rotations on one single day, remove the dateext directive temporarily from the configuration file.
Requirements
By default, cacti uses the MySQL database named cacti. You may want to consider dumping this database at regular intervals for
failsafe reasons. For a single dump, you will usually enter this dump command directly into crontab.
It is possible to mis-use logrotate to create daily dumps, append dateext-like timestamps to each dump and keep a distinct number of
generations online. For a basic setup, see Logrotate cacti.log.
The examples are based on a Fedora 6 environment. I assume that Red Hat type installations will work the same way. I hope, but am
not sure, that this howto is easily portable to Debian/Ubuntu and stuff and hopefully even to *BSD.
Code:
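A sketch of such a configuration (user name, password and paths are placeholders you must replace; the prerotate script re-creates the dump just before logrotate stamps and rotates it):

```
# logrotate-driven daily dump of the cacti database
/var/www/html/cacti/log/cacti_dump.sql {
daily
rotate 7
dateext
# hypothetical credentials, replace with your own
prerotate
/usr/bin/mysqldump --user=cactiuser --password=cactipw cacti > /var/www/html/cacti/log/cacti_dump.sql
endscript
}
```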
You may add this configuration to /etc/logrotate.d/cacti, even if the logrotate of cacti.log is already given there. Prior to testing this
configuration, don't forget to
Code:
touch /var/www/html/cacti/log/cacti_dump.sql
Code:
Handling 1 log
Code:
ls -l /var/www/html/log/cacti_dump*
-rw-r--r-- 1 cactiuser cactiuser 0 4. Okt 22:10 cacti_dump.sql
-rw-r--r-- 1 cactiuser cactiuser 318441 4. Okt 22:10 cacti_dump.sql-20071004
RRDTool Stuff
But I'm no rrdtool guru. So I apologize for errors in this document.
Example
Attached, you will find a perl script that generates two separate rrd's and will generate a single graph based on both of them. Inline,
you will find several constants to play with. The script fills both of them with data generated by a loop. The base value is 2. For each
following data point, the value will be incremented by 0.1. After 40 iterations, the value will have increased to 6.
First, lets define some constants needed for rrd file creation
Code:
#---------------------------------------------------------------------------------
# create first DB
#---------------------------------------------------------------------------------
# name of rrd file for test data
my $db1 = "/tmp/rrddemo1.rrd";
my $interval = 300; # time between two data points (pdp's)
my $heartbeat = 2*$interval; # heartbeat
my $xff = 0.5; # xfiles factor: fraction of pdp's that may be unknown within one cdp
The timespan for this file will be dynamically computed from current timestamp
Code:
By default, it contains 2 rra's for 4 consolidation functions (AVERAGE, MAX, MIN, LAST).
Code:
The first rra holds 5 data points (pdp's). The second one holds 9 data points, that are generated automatically by rrdtool by
consolidating 5 pdp's each. So you will have 2*4=8 rra's.
Code:
The rrd file will be created by means of the perl module RRDs.pm:
Code:
RRDs::create(
$db1,
"--step=$interval",
"--start=" . ($start-10),
# define datasource
"DS:load:GAUGE:$heartbeat:U:U",
# consolidation function 1
"RRA:$CF1:$xff:$rra1step:$rra1rows",
"RRA:$CF1:$xff:$rra2step:$rra2rows",
# consolidation function 2
"RRA:$CF2:$xff:$rra1step:$rra1rows",
"RRA:$CF2:$xff:$rra2step:$rra2rows",
# consolidation function 3
"RRA:$CF3:$xff:$rra1step:$rra1rows",
"RRA:$CF3:$xff:$rra2step:$rra2rows",
# consolidation function 4
"RRA:$CF4:$xff:$rra1step:$rra1rows",
"RRA:$CF4:$xff:$rra2step:$rra2rows",
) or die "Cannot create rrd ($RRDs::error)";
This rrd contains exactly one rra only. There is enough space for all (default:40) data points generated by this script. There is no
need for consolidation.
Code:
#---------------------------------------------------------------------------------
# create second DB
# it will hold all data in its first rra
# without consolidation
# (therefore it is much bigger than the first one)
#---------------------------------------------------------------------------------
# name of rrd file for test data
my $db2 = "/tmp/rrddemo2.rrd";
RRDs::create(
$db2,
"--step=$interval",
"--start=" . ($start-10),
# define datasource
"DS:load:GAUGE:$heartbeat:U:U",
# consolidation function 1
"RRA:$CF1:$xff:$rra1step:$no_iter",
) or die "Cannot create rrd ($RRDs::error)";
You may run the script without any parameter. In this case, it will create the 2 rrd files, fill them and generate one png file:
Code:
#------------------------------------------
# generate rrd graph
#------------------------------------------
my $graph = "/tmp/rrddemo1.png";
RRDs::graph("$graph",
"--title=RRDtool Test: consolidation principles",
"--start=" . $start,
"--end=" . $end,
"--width=" . $width,
"--height=" . $height,
"DEF:demo2=$db2:load:$CF1",
"DEF:demo11=$db1:load:$CF1",
"DEF:demo12=$db1:load:$CF2",
"DEF:demo13=$db1:load:$CF3",
"DEF:demo14=$db1:load:$CF4",
"COMMENT:raw data as follows, filesize=$db2size\\n",
"LINE1:demo2#CCCCCC:RAW DATA, no consolidation\\n",
"COMMENT:Consolidated data as follows, filesize=$db1size\\n",
"LINE1:demo11#FF0000:CF=AVERAGE\\n",
"LINE1:demo12#00FF00:CF=MAX equals CF=LAST in this case\\n",
"LINE1:demo13#0000FF:CF=MIN\\n",
# "LINE1:demo14#000000:CF=LAST\\n",
) or die "graph failed ($RRDs::error)";
Code:
firefox file:///tmp/rrddemo1.png
consolidation rrddemo1
One of the basic principles of rrd's is that they will not grow in space while storing additional data. Let us look at this more
carefully. Remember that the script increments each value by 0.1 for each data point. But the first rra will hold only 5 data points, e.g.
the values 2.0, 2.1, 2.2, 2.3, 2.4. But what happens if the next value, 2.5, is added? This is where the CONSOLIDATION
FUNCTIONS come in, e.g. AVERAGE. In this case, the average of all 5 values (2.2 in this case) will be stored in the second rra.
So there is a consolidation of the data: only 1 consolidated data point is stored instead of the 5 originally entered ones. As a result, you
will lose "some information". There is no chance to identify that the average 2.2 was built out of these 5 values above. It may have
been built out of 1.0, 1.5, 2.2, 2.9, 3.4 as well. This is why people often want to increase the size of the first rra to store more data
points.
But remember, there are more consolidation functions. Use of MAX yields 2.4 in the case above. MIN yields 2.0 and LAST results
in 2.4 (the last value of all 5 primary data points). Yes, even in this case it is not possible to rebuild the originally entered data. But
you will have an idea at least for MIN, MAX, AVERAGE and even LAST.
In the long run, this saves lots of disk space and is VERY fast in processing. And even if you "lose" the original data, you will see
the range between MIN and MAX and the AVERAGE.
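The arithmetic behind these four consolidation functions can be checked with a few lines of Python (a sketch of what rrdtool computes, not rrdtool itself):

```python
# 5 primary data points, as entered by the demo script
pdps = [2.0, 2.1, 2.2, 2.3, 2.4]

# one consolidated data point (cdp) per consolidation function
cdp = {
    "AVERAGE": sum(pdps) / len(pdps),  # 2.2
    "MIN": min(pdps),                  # 2.0
    "MAX": max(pdps),                  # 2.4
    "LAST": pdps[-1],                  # 2.4
}
for cf, value in cdp.items():
    print(f"{cf}: {value:.1f}")
```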
In this example, AVERAGEs were graphed using an AREA, whereas MAXimums use LINE1 in a slightly darker shade of the
corresponding color. This gives nice graphs even for the daily view, IMHO.
The example uses an additional feature, a CDEF=CURRENT_DATA_SOURCE,-1,* to mirror outbound traffic to the negative side
consolidation traffic
Please notice that MAX does not always match AVERAGE, which is not that surprising from the mathematical point of view.
AVERAGEs show volume based information whereas MAXimums show peak usage. Both pieces of information are useful.
If you would like to see, what's going on when running the script, you may call it by
Code:
Code:
update: 1160814900:2
update: 1160815200:2.1
update: 1160815500:2.2
update: 1160815800:2.3
update: 1160816100:2.4
update: 1160816400:2.5
update: 1160816700:2.6
update: 1160817000:2.7
update: 1160817300:2.8
update: 1160817600:2.9
update: 1160817900:3
update: 1160818200:3.1
...
update: 1160826300:5.8
update: 1160826600:5.9
update: 1160826900:6
Last 5 minutes CF AVERAGE:
1160825400: 5.6
1160825700: 5.7
1160826000: 5.8
1160826300: 5.9
1160826600: 6
Last 6*5 minutes CF AVERAGE:
1160817900: 3
1160819400: 3.5
1160820900: 4
1160822400: 4.5
...
Last 30 minutes CF LAST:
1160817900: 3.2
1160819400: 3.7
1160820900: 4.2
1160822400: 4.7
1160823900: 5.2
1160825400: 5.7
1160826900: N/A
Filesize of rrdfile 1 at /tmp/rrddemo1.rrd: 2336
Filesize of rrdfile 2 at /tmp/rrddemo2.rrd: 864
Attention: in this very case, the filesize of the rrd using consolidation is bigger. But for real world rrd's it is the other way round.
Now, you may study all rrd file values in detail.
Here we go!
As an attachment to the forum entry you will find a Graph Template that contains all items discussed here. It is a modified Traffic
Graph Template. Things discussed here will of course apply to other Graphs as well.
You'll notice that Outbound Traffic is displayed on the negative side. This is often done; there are lots of those graphs on the
forum. It is simply done by a CDEF named Turn bytes into bits, make negative (included in the Template below) that works
like
Code:
cdef=CURRENT_DATA_SOURCE,8,*,-1,*
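This RPN expression works per data point: multiply by 8 to turn bytes into bits, then by -1 to mirror the result below the x-axis. For a hypothetical sample of 1000 bytes/s, the plotted value would be:

```shell
v=1000                     # hypothetical data point, bytes/s
echo $(( v * 8 * -1 ))     # bits/s, mirrored below the x-axis: -8000
```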
You'll see both a deeper green and a deeper blue line that fits exactly to the AREA definitions.
You'll notice a black line that does some TRENDing for Inbound and Outbound Traffic (there's a nice forum post on that,
which I've copied from).
As usual, you'll see Current, Average and Maximum legend entries
consolidated-view-01
Well, you'll notice that my laptop wasn't online the whole day ...
Example:
After consolidation, there is still knowledge about what the MAXimum was! And this may be graphed as well, see:
consolidated-view-02
So you do not only see the Graph Overall Maximum in the legend (that is: 29.40 k for Inbound) but also when it occurred, and
additionally the whole timeseries for that MAXimum. (Whether TRENDing is helpful here, you may decide for yourself.)
In this case, CONSOLIDATION took place for 6 data points each, so each AVERAGE value displayed here stands for 6 original
data points. You will see this if zooming a little deeper:
consolidated-view-03
consolidated-view-04
The minimum resolution now is 2 hours. But the MAXimum values plotted still represent the biggest of those consolidated values.
Conclusion
When you look at your rrds, you will notice that often MAXimum is already defined. To display these values, nothing has to be
modified in those rrds. And there is no additional disk space required compared to methods that keep data without consolidation.
While graphing the MAXimum values along with the AVERAGE ones, you'll be able to discover the strength of the rrdtool
principles.
Be warned!
You won't really want to do that! Why? One of the inherent features of rrds is: they never grow in size. In other words: when
creating a new rrd, it is allocated with all the space it will ever need. See the rrd-beginners tutorial. As usual, you may use the
information given here at your own risk.
RRDTool defines different levels of consolidation only. It does not define timespans explicitly. It only defines the AMOUNT OF
DATAPOINTS for each consolidation level (known as rows in rrdtool lingo).
Assuming you are trying to keep only one level of consolidation, this is defined by step in the rra definition. And if you want to
omit consolidation, this equates to step=1.
By default, all rrd files will have 4 levels of consolidation, step=1,6,24,288, respectively. Forget about the last three (well, they
will use some amount of space; but forget about that for the time being). So let's deal with the first rra (step=1) only.
If you want to extend this rra to span a longer time, you have to deal with the number of rows: increase it until the wanted
timespan is reached. You may compute the timespan (in polling intervals) by multiplying rows * step.
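For example, with cacti's default 5-minute polling interval (an assumption here; adjust to your setup), the timespan of a step=1 rra can be computed like this:

```shell
# rows * step gives the timespan in polling intervals; multiply by the
# polling interval in seconds to get the timespan in seconds
rows=115200; step=1; interval=300
timespan=$(( rows * step * interval ))
echo "$timespan seconds = $(( timespan / 86400 )) days"   # 34560000 seconds = 400 days
```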
Here we go!
Cacti's logic to generate rrd files works as follows:
Name:
you may choose your own
Consolidation function:
AVERAGE needed
X-Files Factor:
always 0.5
Steps:
1 (that is the number of data points to use for consolidation, 1 says: no consolidation at all)
Rows:
115200 = 400 days with 24 hours and 12 data points per hour (= 5 min interval)
Timespan:
used for displaying 33,053,184 seconds = about 382 days (taken from other cacti rra)
Then scroll to the bottom of the page, select Duplicate and Go.
Of course, you may choose your own name here. Now it is time to modify this template:
Please leave the rest as is; SAVE. Of course, you may define a new data template from scratch. The only thing to keep in mind is to
select the appropriate RRA. The data template is now done.
Please pay attention to the next steps! You will have to delete both Graph Item Inputs, as they refer to the wrong data source.
Please select the red X to the right of Inbound Data Source as well as Outbound Data Source.
Then you will have to add the newly generated data sources. In order to do that, please select each item of the list of Graph Items,
one after the other. This will look like:
As Data Source you will choose the appropriate data source you generated in the previous step. Don't forget to do this for each and
every item of the Graph Item list. When you're done, scroll to the bottom of the Graph Template definition and SAVE.
If you have chosen some other Graph Template, e.g. ucd/net Load Average, you will skip this step.
The Data Query goes like this: go to Data Queries and select SNMP – Interface Statistics.
Go to Devices and select your favorite device to see the rra in action. If you have modified the SNMP Interface Statistics Data
Query, you may immediately select Create Graphs for this Host to see the following:
Select the interface as you would have done for any Traffic Graph. Then select a graph type from the dropdown list (of course our
newly defined Graph Template!) and CREATE. As usual, you will have to wait at least two polling cycles to get the graph
generated and filled with the first value. Don't be impatient! Let it run for a while.
Well, this looks as usual, doesn't it? You may wonder about the Outbound traffic being displayed negative; that is done by a little
CDEF and is of no matter here. And of course, for the first two days you will not notice anything unusual. This is because the
default cacti rra configuration keeps all data points without consolidation for 600 intervals (about 2 days).
Some advice:
Please do not click onto the graph too fast. I had to wait some time (I don't remember exactly how long) before clicking gave a
result like the next one:
This is already a zoomed image. You will notice that my personal laptop isn't online for the whole day.
But you may zoom in at any place and will reach down to the 5 min intervals. This is what had to be proved (q.e.d., as the old
Romans said).
the number of data sources needed (e.g. traffic in and traffic out form two data sources)
the number of rra's needed (e.g. one archive for storing original data points, a second one to hold averaged data points for
some weeks, a third for holding averaged data points for some months ...)
the number of data points to be stored in each rra
some header space
If you omit consolidation (that is: averaging out some data points), you won't lose data. But you will lose space!
Example:
Store data every 300 seconds for a whole year. This leads to 12 (data points each hour) * 24 (hours per day) * 365 (days per year)
data points (= 105120). Each data point occupies 8 bytes, so the whole rrd will take about 840,960 bytes (plus some header space)
for each single data source.
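The arithmetic above can be checked quickly:

```shell
points=$(( 12 * 24 * 365 ))   # 5-minute samples for one year
bytes=$(( points * 8 ))       # 8 bytes per stored value
echo "$points data points, $bytes bytes per data source"   # 105120 data points, 840960 bytes
```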
Code:
They belong to the following rrd definitions (see Data Source Debug of that data source):
Code:
/usr/bin/rrdtool create \
/var/www/html/cacti-0.8.6f/rra/gandalf_traffic_in_17.rrd \
--step 300 \
DS:traffic_in:COUNTER:600:0:100000000 \
DS:traffic_out:COUNTER:600:0:100000000 \
RRA:AVERAGE:0.5:1:600 \
RRA:AVERAGE:0.5:6:700 \
RRA:AVERAGE:0.5:24:775 \
RRA:AVERAGE:0.5:288:797 \
RRA:MIN:0.5:1:600 \
RRA:MIN:0.5:6:700 \
RRA:MIN:0.5:24:775 \
RRA:MIN:0.5:288:797 \
RRA:MAX:0.5:1:600 \
RRA:MAX:0.5:6:700 \
RRA:MAX:0.5:24:775 \
RRA:MAX:0.5:288:797 \
RRA:LAST:0.5:1:600 \
RRA:LAST:0.5:6:700 \
RRA:LAST:0.5:24:775 \
RRA:LAST:0.5:288:797
and respectively:
Code:
/usr/bin/rrdtool create \
/var/www/html/cacti-0.8.6f/rra/gandalf_traffic_in_71.rrd \
--step 300 \
DS:traffic_out:COUNTER:600:0:100000000 \
DS:traffic_in:COUNTER:600:0:100000000 \
RRA:AVERAGE:0.5:1:115200 \
As you will notice, the newly generated rrd is about 20 times the size of the original one (and that one spans two years, not only
400 days). So please pay attention before using this widely. The performance impact of updating and displaying such rrds in a
large installation may not be desirable.
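A rough estimate of the new file's payload (assuming 8 bytes per stored value and the two data sources defined above; the header adds a little more):

```shell
# payload only: rows * number of data sources * 8 bytes per value
echo $(( 115200 * 2 * 8 ))   # 1843200 bytes, roughly 1.8 MB per rrd
```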
I hope the information given below is at least helpful for understanding rrdtool operation.
Be warned!
Here we go!
At the bottom of this page, please find a perl script resize.pl. You will need to customize the /path/to/the/rrd/binary, e.g.
/usr/bin/rrdtool.
Help!
Put resize.pl wherever you want; there's no need to put it into the rrd working directory. But you will need some scratch space
there for all rrds to be resized (due to the way rrdtool resize works). The user that runs this script must have write access to those
rrds. The script does not care about the space provided. To get help, simply type
Code:
perl resize.pl -h
Code:
Dry run
You may want to have a look at your rrds before resizing them. Especially for the required parameter -r (denoting the rra to be
resized), you will want to have a look at those rras that are defined in the rrd in question. Example (linefeeds only for ease of
reading):
Code:
Code:
Of course, you may also enter a partially qualified dataset name. But it makes sense to take only those rrds that belong to the same
data source (e.g. with the same rrd file structure).
Code:
Code:
-- RRDTOOL RESIZE localhost_uptime_57.rrd RRA (0) growing 8000.. (95328).. RRA#0.. (159328).. Done.
The first parenthesis contains the file size before resizing, the second one the size after.
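As a quick sanity check: growing one rra of a single-data-source rrd by 8000 rows should add 8000 * 8 bytes, which matches the two figures above:

```shell
echo $(( 159328 - 95328 ))   # bytes added by the resize: 64000
echo $(( 8000 * 8 ))         # 8000 new rows * 8 bytes per value: 64000
```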
Code:
to result in
Code:
-- RRDTOOL RESIZE localhost_uptime_57.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.
Code:
to result in
Code:
-- RRDTOOL RESIZE router_uptime_59.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.
-- RRDTOOL RESIZE gandalf_uptime_58.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.
-- RRDTOOL RESIZE localhost_uptime_57.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.
overrides the -r parameter, because all relevant rras will be calculated from the current rrd definition. This is useful if you're
working on a list of files with different rrd structures (e.g. different Data Templates).
Code:
to result in
Code:
Please notice the last line of the output, which reports the rrdtool runtime. If -s is given such that no row size of any rra matches,
the corresponding rrd file is skipped:
Code:
Sometimes people call this feature "importing external rrds" into cacti. But what I'm going to explain is not an automated function;
it will require some manual interaction.
Of course, the webserver must have at least read access to the required rrd file(s). For the sake of simplicity, I'll assume the file to
be located in cacti's default ./rra/ directory. In my examples, this file is called external.rrd.
Code:
From the ds[.....] statements, the names of the ds' are taken. In this case the data sources are named
external_ds1,
external_ds2,
external_ds3
respectively. While these are not very meaningful names, they should show you the principle when dealing with multi-ds rrds.
Along with this, it is good to know the ds[...].type, ds[...].min and ds[...].max for a correct definition of the data sources. In this
case, all data sources are of type GAUGE. This will affect neither data gathering nor graphing, but it seems advantageous to me to
create the correct data sources.
All other parameters (step, heartbeat, xff, rra[..].rows) are assumed to be standard settings.
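For reference, here is a sketch of what `rrdtool info external.rrd` might print for such a file, based on the values cited in this section (the real output lists every ds and rra in full):

```
step = 300
ds[external_ds1].type = "GAUGE"
ds[external_ds1].minimal_heartbeat = 600
ds[external_ds1].min = 0.0000000000e+00
ds[external_ds1].max = 5.0000000000e+02
ds[external_ds2].type = "GAUGE"
...
rra[0].cf = "AVERAGE"
rra[0].rows = 600
```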
tell cacti, how to create the correct rrd file with all parameters
store the name of the data sources for later use with Graph Templates
While the first goal is not needed in this context, the second one is crucial.
So let's define a new Data Template. Go to Data Templates and Add a new one:
If you want to associate this to a certain host, you may use |host_description| as a placeholder as usual. As this external.rrd file is
updated externally, you must set the Data Input Method to None.
Select the Associated RRA's as they will define the Detailed Graph Views (usually Daily, Weekly, Monthly, Yearly). And you'll
have to uncheck the Data Source Active checkbox. This will prevent cacti from actually gathering data for this Data Template. Now
add the first Data Source:
Minimum Value
Fill in the ds[...].min value from rrdtool info above. In this case, use 0. This is not really needed, but for the sake of consistency I
recommend it.
Maximum Value
Fill in the ds[...].max value from rrdtool info above. In this case, use 500. This is not really needed, but for the sake of consistency
I recommend it.
Heartbeat
Fill in the ds[...].minimal_heartbeat value from rrdtool info above. In this case, use 600. This is not really needed, but for the sake
of consistency I recommend it.
Repeat this for all other data sources of the external.rrd (use the New function of Data Source Items):
You will have noticed that no Custom Data is given, as the Data Input Method is set to None. You'll see the result as:
to end up in
Of course you will use more meaningful input for Text Format. That's all for now.
the Host already exists (perhaps you're polling some other data from this host) and the status is up
the Host does not yet exist in cacti's tables and shall never be polled for other data by cacti's own poller
The first approach does not need any additional changes to the Devices list.
The second approach will be more common. You will need a Host entry in the Devices list even for this host, so we will create a
kind of dummy entry. Please go to the Devices list and Add a new one:
Fill in Description and Name as usual. To deactivate all checks, please check Disable Host and leave SNMP Community empty.
Create
and Create. You will be prompted to fill in the full path to your external.rrd file. If this resides in cacti's default ./rra directory,
you may use <path_rra> for this. Remember, that the web server must have at least read access to that file.:
and Create. Now select all needed Data Source [...] and Save:
Please select this Graph again and Turn on Graph Debug Mode to see
And NFS may not be the best choice for all cases. But with rrdtool 1.2.x there's a new feature, the rrd server. Pasted from the
rrdtool homepage:
Quote:
RRD Server
If you want to create a RRD-Server, you must choose a TCP/IP Service number and add them to /etc/services
like this:
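The entry in question, reconstructed here from the port number mentioned in the note that follows, would be:

```
rrdsrv		13900/tcp			# RRD server
```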
Attention: the TCP port 13900 isn't officially registered for rrdsrv. You can use any unused
port in your services file, but the server and the client system must use the same port,
of course.
With this configuration you can add RRDtool as meta-server to /etc/inetd.conf. For example:
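From the rrdtool documentation, that inetd.conf entry has roughly this shape (the binary path and database directory are site-specific assumptions):

```
rrdsrv stream tcp nowait root /usr/bin/rrdtool rrdtool - /var/rrd
```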
Don't forget to create the database directory /var/rrd and reinitialize your inetd.
If all was setup correctly, you can access the server with perl sockets, tools like netcat,
or in a quick interactive test by using 'telnet localhost rrdsrv'.
NOTE: that there is no authentication with this feature! Do not setup such a port unless
you are sure what you are doing.
For my local setup (RHEL 4.0), I had to modify this a bit. My /etc/services
Code:
Code:
# default: off
# description: RRDTool as a service
service rrdsrv
{
disable = no
socket_type = stream
protocol = tcp
wait = no
user = cactiuser
server = /usr/bin/rrdtool
server_args = - /var/www/html/cacti/rra
}
as /etc/xinetd.d/rrdsrv.
Code:
assuming, that the rrd file "external.rrd" used in this howto is located in the ./rra directory.
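A quick interactive check, as suggested in the quote above, might look like the following sketch; the OK line with resource usage is what rrdtool's remote mode replies on success (exact figures will differ):

```
$ telnet localhost rrdsrv
update external.rrd N:1:2:3
OK u:0.00 s:0.00 r:0.41
```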
Now its time for some remote script to use this new feature. As an example, see
Code:
#!/usr/bin/perl
use IO::Socket;
# connect to the rrd server; host and port here are placeholders for your setup
my $socket = IO::Socket::INET->new(PeerAddr => "cactihost",
    PeerPort => 13900, Proto => "tcp") or die "connect: $!";
my $rrd = "external.rrd";
my $_cmd = "update " . $rrd . " N:" . int(rand(10)) . ":" . int(rand(10)) . ":" . int(rand(10));
print $socket $_cmd . "\n";
close $socket;
Of course,
Code:
my $_cmd = "update " . $rrd . " N:" . int(rand(10)) . ":" . int(rand(10)) . ":" . int(rand(10));
is only an example prepared for the external.rrd of our example. To use this for updating your own rrd files, it must match the
data source definitions of your particular rrd file. In our example, I put
Code:
Disadvantages
This handling has the great disadvantage, that you must configure the rrd file name to each single updating script. This rrd file name
on the remote system must match the one on cacti's host. There's a good chance to mess things up when used for lots of rrds. But for
some few files this may be appropriate.