ADMIN Network & Security
Issue 21 | £7.99 | Windows • Linux • Unix • Solaris
www.admin-magazine.com

FREE CD INSIDE: Ubuntu Server 14.04 LTS

Managing Memory: Smoothing out bottlenecks
High Availability without Pacemaker
Malware Analysis: Look for intruders in a memory dump
Hyper-V: What does the new SMB 3 protocol mean for MS virtualization?
Small Server Suite: A full server distro in only 30MB
CloudFront: Supercharge your website
Exploring Google Compute Engine
Rex: Automated management without a client agent
KVM Security: Stop an attacker from breaking out of jail
18 Malware Analysis
We look at a known malware variant and use the Redline and Volatility tools to find subtle clues to its existence in memory dumps.

82 Rex
This configuration management tool uses SSH and Perl to run standard tasks on all clients in parallel, without a client agent program or the need to learn a specialized language.

Management

80 Collectd
Collectd 4.3 is a comprehensive monitoring tool with a removable plugin architecture.

86 HA without Pacemaker
Managing your cluster could be so simple if it weren't so complicated. The difficulty is often a single component: Pacemaker. Luckily, other open source tools offer alternatives for HA.

82 Rex: Automated Management
Rex doesn't need agents or a special language to describe the tasks it performs on remote computers.

92 Ganglia
HPC, Big Data, and even cloud systems use the Ganglia monitoring framework. Learn how to install and configure Ganglia and get it up and running on a simple two-node system.

On the CD:
• Open vSwitch 2.0 with full kernel integration
• Tomcat (v7)
• MySQL (v5.5)
• Puppet (v3.0)
• Qemu (v2.0)
Service
3 Welcome
4 Table of Contents
8 On the CD
98 Call for Papers
Stay Informed
For more timely technical articles on system
administration and advanced IT, subscribe
to the ADMIN Update newsletter:
On the CD
The 64-bit server install image on this month's CD is for computers with the AMD64 or EM64T architecture (e.g., Athlon64, Opteron, EM64T Xeon, Core 2). Ubuntu Server emphasizes scale-out computing, whether you are administering an OpenStack cloud, a Hadoop cluster, or a massive render farm.

New to 14.04 LTS is guest certification on AWS, Microsoft Azure, Joyent, IBM, and HP Cloud; updates to a number of key packages (e.g., Qemu, libvirt, LXC, MySQL, Docker); and Open vSwitch 2.0 with full kernel integration. OpenStack is certified by Microsoft to host Windows Server 2012 and Windows Server 2008 R2 as guests and is now compatible with the CloudFoundry PaaS platform.

Ubuntu Server 14.04 supports every HP Moonshot server cartridge and includes cloud tools such as MAAS for automated provisioning and Juju for service orchestration. The powerful array of deployment and management tools included on this CD will help you bring up and maintain hyperscale workloads with minimal complexity.
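As a taste of the Juju service orchestration mentioned above, a first session on the installed system might look roughly like the following sketch. The charms used here (mysql, wordpress) are illustrative examples and are not part of the CD.

# prepare the Juju environment and deploy two example charms
juju bootstrap
juju deploy mysql
juju deploy wordpress
# wire the services together and make the web front end reachable
juju add-relation wordpress mysql
juju expose wordpress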
Resources
[1] U
buntu 14.04 “Trusty Tahr”:
[http://releases.ubuntu.com/14.04/]
[2] R elease notes: [https://wiki.ubuntu.com/TrustyTahr/
ReleaseNotes#Ubuntu_Server]
[3] Ubuntu Server: [http://www.ubuntu.com/server]
[4] U
buntu hyperscale:
[http://www.ubuntu.com/server/hyperscale]
[5] U
buntu server management:
[http://www.ubuntu.com/server/hyperscale]
[6] H
P Moonshot project: DEFECtIVE DVD?
[http://www.ubuntu.com/partners/hp/moonshot] Defective discs will be replaced,
email: [email protected]
Features | Memory Management
Memory Hogs
Even Linux systems with large amounts of main memory are not protected against bottlenecks and potentially
drastic performance degradation because of memory shortage. In this article, we investigate the complex
causes and test potential solutions. By Alexander Hass and Willi Nüßer
Modern software such as large databases run on Linux machines that often provide hundreds of gigabytes of RAM. Systems that need to run the SAP database (HANA) in a production environment, for example, can have up to 4TB of main memory [1]. Given these sizes, you might expect that storage-related bottlenecks no longer play a role, but the experience of users and manufacturers with such software solutions shows that this problem is not yet completely solved and still needs attention. Even well tuned applications can lose performance because of insufficient memory being available under certain conditions. The standard procedure in such situations – more RAM – sometimes does not solve the problem. In this article, we first describe the problem in more detail, analyze the background, and then test solutions.

Memory and Disk Hogs

Many critical computer systems, such as SAP application servers or databases, primarily require CPU and main memory. Disk access is rare and optimized. Parallel copying of large files should thus have little effect on such applications because they require different resources.

Figure 1 shows, however, that this assumption is not true. The diagram demonstrates how the (synthetic) throughput of the SAP application server changes if disk-based operations also occur in parallel. As a 100 percent reference value, we also performed a test cycle without parallel disk access. In the next test runs, the dd command writes a file of the specified size on the hard disk.

On a system in a stable state, throughput initially is not affected by file operations, but after a certain value (e.g., 16,384MB), performance collapses. As Figure 1 shows, the throughput of the system decreases with increasing file size by nearly 40 percent. Although the figures are likely to be different in normal operation, a significant problem still exists.

Such behavior is often found in daily operation if a backup needs to move data at the same time, or overnight, or generally when large files are copied. A closer examination of these situations shows that paging increases at the same time (Figure 1). Thus, it seems that frequent disk access of active processes that actually need little CPU and memory can under certain circumstances affect the performance of applications that only rarely access the disks.

Memory Anatomy

The degradation of throughput with increasing file size is best understood if you consider Linux-kernel-style main memory management. Linux, like all modern systems, distinguishes between virtual memory, which the operating system and applications see, and physical memory, which is provided by the hardware of the machine, the virtualizer, or both [2] [3]. Both forms of memory are organized into units of equal size. In the case of virtual memory, these units are known as pages, whereas physical memory refers to them as frames. On modern systems, they are both still often 4KB in size. The hardware architecture also determines the maximum size of virtual memory: In a 64-bit architecture, the virtual address space is a maximum of 2^64 bytes in size – even if current implementations on Intel and AMD only support 2^48 bytes [4].

Virtual and Physical

If software – including the Linux kernel itself in this case – wants to access the memory contents, the virtual addresses must be mapped to the addresses in the physical memory. This mapping is realized by process-specific page tables and ultimately corresponds to replacing the page with the content of virtual memory by the page frame in which it is physically stored (Figure 2).

Figure 2: Memory management components on Linux.

While the virtual address space for applications is divided into sections for code, static data, dynamic data (heap), and the stack, the operating system claims larger page frame areas for its internal caches [1]. One of the key caches is the page cache, which is initially just a storage area in which special pages are temporarily stored. These pages include the pages that are involved in file access. If these pages lie in main memory, the system can avoid frequent access to the slow disks. For example, if read() access to a disk area is needed, the Linux kernel first checks to see whether the requested data exists in the page cache. If so, it reads the data there.

Faster Reads

The page cache is thus a cache for file data that has been stored in pages. Its use speeds up disk access by orders of magnitude. Therefore, Linux makes heavy use of this cache and always tries to keep it at the maximum possible size [5].

Two other areas, which were managed separately in legacy Linux versions (kernel 1.x and 2.x), are integrated into the page cache. First is the buffer cache, which manages blocks
belonging to disks and other block devices in main memory. Today, however, the Linux kernel no longer uses blocks as storage units, but pages. The buffer cache thus only stores page-to-disk block mappings, because they do not necessarily have to be the same size as pages. It is of very little importance today.

The second area is the swap cache, which is also a cache for pages that originate from a disk. However, it does not work with pages that reside in regular files but with pages that reside on the swap device – the anonymous pages. When a page is swapped out to the swap partition and later put back into storage, it is initially just a regular disk page. Thus, it should be cached.

The swap cache now primarily contains only pages that have been swapped in but not subsequently modified. Linux can detect from the swap cache whether swapping out again requires a write to disk (to the swap space). One of the swap cache's most important tasks is thus avoiding (write) access. A clear distinction to the actual page cache exists here, which primarily accelerates reads [5].

In the case of frequent disk access, the kernel now tries to adjust the page cache to meet these requirements. In the above-outlined test, it grows as the file being written becomes larger. However, because the number of available page frames is limited (to 10GB of free memory in the test), this growth comes at the expense of the pages that applications can keep for themselves in physical memory.

Old Pages

As the page cache grows, the kernel's page replacement strategies [1] try to create space. Linux uses a simplified LRU (least recently used pages) strategy for this. More specifically, it identifies outsourcing candidates by finding pages that have remained unused for a longer period of time. The kswapd kernel thread handles the actual swapping process.

Unfortunately, many application-specific buffer areas fall into this category. A typical SAP application server can manage many gigabytes in these buffers and temporary areas. The diversity of requests, however, leads to large parts of these not being used very frequently or very quickly, although they are still used from time to time. The request locality is fairly small.

Depending on how aggressively it swaps, the Linux kernel sacrifices these pages in favor of page cache growth and swaps them out. However, this approach degrades application performance, because, under certain circumstances, Linux memory access is no longer served directly from main memory, but only after swapping back in from swap space.

These considerations provide the framework for explaining the empirically observed behavior discussed previously. When backups or copying large files cause substantial disk access, the operating system can swap out application buffer pages in favor of a larger page cache. After this has happened, when the application needs to access the paged memory, this access will be slower. Increasing the physical memory size by installing additional memory modules does not fundamentally solve the problem; at best, it just postpones its occurrence.

Finding Solutions

The background from the previous sections shows that the observed performance degradation is due to the lack of physical memory. Possible solution approaches can be classified in terms of how they ensure sufficient space for an application in physical memory:
• Focusing on your own application: Preventing your own pages from being swapped out.
• Focusing on other applications: Reducing or limiting the memory usage of other applications.
• Focusing on the kernel: Changing the way the kernel manages physical memory.
These different approaches are summarized graphically in Figure 3.

Figure 3: Approaches for protecting an application's memory pages.

Your Own Application

One very obvious measure for reducing storage problems is to keep your application small and always release memory. However, this practice usually requires complex and singular planning. From a developer's point of view, it may seem tempting to prevent your application pages from being swapped out of physical memory. Certain methods require intervention with the code, and others are transparent from the application's perspective and require just a little administrative work.

The well-known mlock() [6] system call belongs to the first group. This command lets you explicitly pin down an area from the virtual address space in physical memory. Swapping is then no longer possible – even in potentially justified, exceptional cases. Additionally, this
approach requires special attention, especially for cross-platform programming, because mlock() makes special demands on the alignment of data areas. This, and the overhead of explicit intervention with the program code, make mlock() unsuitable in many environments.

Another approach, which also requires intervention with the code and system administration, relies on huge pages, which the Linux kernel has in principle supported since 2003 [7]. The basic idea behind huge pages is simple: Instead of many small pages (typically 4KB), the operating system uses fewer but larger pages – for example, 2MB on x86_64 platforms. This approach reduces the number of page to page frame mappings, thus potentially accelerating memory access noticeably. Because of the way they are implemented, Linux cannot swap out huge pages, so they also help to protect your own pages. Unfortunately, however, the use of huge pages is cumbersome. For example, to provide shared memory, you need to set up a separate hugetlbfs filesystem. Shared memory areas are stored here as files that are implemented as huge pages in main memory.

The greater problem is that you cannot guarantee retroactive allocation of huge pages on the fly, because they need to be mapped in contiguous memory areas. In fact, you often have to reserve huge pages directly after boot, which means that the memory associated with them is not available to other applications without huge page support. This comes at a price of less flexibility, especially for high-availability solutions such as failover scenarios but also in general operation. The complexities of huge page management prompted the development of a simpler and largely automated method for the use of larger pages.

This method made its way into the 2.6 kernel in 2011, in the form of Transparent Huge Pages (THP). As the name suggests, the use of THP is invisible to users and developers alike. The implementations generally show slightly slower performance than huge pages. However, THPs are stored in the same way as normal pages.

Efficient: mmap()

Another approach relating to your in-house application is not actually a safeguard in the strictest sense but is a strategy for efficient memory usage, and thus indirectly for reducing the need to swap. The mechanism is based on the well-known mmap() call [8] [9]. This system call maps areas from files in the virtual address space of the caller, so that it can manipulate file content directly in the virtual address space, for example.
In some ways, mmap() replaces normal file operations such as read(). As Figure 3 shows, its implementation in Linux differs in one important detail: For read() and the like, the operating system stores the data it reads in an application page and again as a data file in the page cache. The mmap() call does not use this duplication. Linux only stores the page in the page cache and modifies the page to page frame mappings accordingly. In other words, the use of mmap() saves space in physical memory, thereby reducing the likelihood that your own pages are swapped out – at least in principle and to a certain, typically small, extent.

None of the approaches presented here for making your application swap-resistant is completely satisfactory. Another approach could therefore be to change the behavior of other applications to better protect your own app. Linux has for a long time offered the ability to restrict an application's resource consumption using setrlimit() and similar system calls. However, these calls presumably require intervention in third-party code, which is obviously not a viable option.

Everything Under Control

A better alternative at first glance could be control groups (cgroups) [10], which have been around since Linux 2.6. With their help, along with sufficient privileges, you can also allocate a third-party process to a group, which then controls the process's use of resources. Unfortunately, this gives system administrators difficult questions to answer, as is the case with restricting resources in the shell via ulimit:
• Can you really restrict an application at the expense of others? Assuming a reasonable distribution of applications across machines, this prioritization is often difficult to get right.
• What are reasonable values for the restrictions?
• Finally, you also can critically question the fundamental orientation: Are fixed restrictions for third-party applications really what you are trying to achieve? Or should the Linux kernel only handle your own application with more care in special cases?
• Is a termination of the application tolerable on reaching the limits?
These questions show that cgroups ultimately do not solve the underlying problem.

The application buffer is swapped out because the page cache takes up too much space for itself. One possible approach is therefore to avoid letting the page cache grow to the extent that it competes with application memory. With the support of the applications, you can do this to a certain extent.

Developers can tell the operating system in their application code not to use the page cache, contrary to its usual habits. This is commonly referred to in Linux as direct I/O [11]. It works around operating system caching, for example, to give a database full control over the behavior of its own caches. Direct I/O can be initiated via options for file operations, using the O_DIRECT option for open() or POSIX_FADV_DONTNEED with posix_fadvise(), for example.
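Returning to the cgroups approach from the "Everything Under Control" section: the following is a minimal sketch of what a memory cap for a third-party process could look like with the cgroup v1 memory controller. The group name, the 512MB limit, the example PID, and the mount point /sys/fs/cgroup/memory are assumptions for illustration, not values taken from the article.

# create a control group for the backup job and cap its memory use
mkdir /sys/fs/cgroup/memory/backupjob
echo 512M > /sys/fs/cgroup/memory/backupjob/memory.limit_in_bytes
# move an already running process (PID 4711 is just a placeholder) into the group
echo 4711 > /sys/fs/cgroup/memory/backupjob/tasks

In cgroup v1, the limit counts the page cache pages charged to the process as well as its anonymous memory, which is one reason the article's question about choosing sensible values is hard to answer.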
Table 1: Advantages and disadvantages of the approaches

Approach | Mechanism | Advantages | Disadvantages
Control groups | cgroups | Flexible; also without code changes | Settings unclear; does not prevent the page cache growing
Keeping the page cache small | Direct I/O | Performance boost possible for application because of proprietary caching mechanism | Requires code change in third-party application; requires own cache management; does not prevent the page cache growing; a single application without Direct I/O can negate all benefits
Focusing on the kernel:
Restricting the page cache | Kernel patch | Works (demonstrated by other Unix systems); no intervention with applications required | No general support; slow behavior in the case of massive I/O
Small swap space | Admin tools | Little swapout | Not a solution for normal systems; risk of OOM scenarios
Configuring swapping | Swappiness | No intervention with applications required; works for a value of 0 | Not functional on all distributions; no gradual adjustment
Modifying kswapd | Kernel patch | Does the job; no intervention with applications required; very few side effects | Officially available as of kernel 3.11; possibly works with explicitly parallel I/O ("hot memory")
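The effect of direct I/O can also be tried from the command line without touching any application code: GNU dd opens its output file with O_DIRECT when given the direct flag. This is only a sketch of an experiment along the lines of the test described earlier; the path and size are placeholders.

# conventional write: the data passes through the page cache and can displace application pages
dd if=/dev/zero of=/mnt/data/testfile bs=1M count=16384
# the same write with direct I/O: the page cache is bypassed
dd if=/dev/zero of=/mnt/data/testfile bs=1M count=16384 oflag=direct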
Unfortunately, very few applications use direct I/O, and because a single misconfigured application can unbalance a stable system, such strategies can at best delay swapping out of the application buffer.

Focusing on the Kernel

From an administrative perspective, a much easier approach would be to adapt the kernel itself so that the application memory bottlenecks no longer occur. A setting in the kernel would then affect all programs and, for example, not require any application-specific coding. You can try to achieve this in several ways.

First, you could set the page cache limit to a fixed size. Such a limit effectively prevents swapping – as measurements will show. Unfortunately, the patches required are not included in the standard kernel and probably will not be in the near future. Only a few enterprise versions by distributors such as SUSE offer these adjustments, and only for special cases. Thus, this is not a solution for many environments.

The next solution is again only suitable for special cases. If the swap space is designed to be very small compared with the amount of main memory available, or not present at all, the Linux kernel can hardly swap out even in an emergency. Although this approach protects application pages against swapping, it comes at the price of causing out-of-memory situations – for example, if the system needs to swap because of a temporary memory shortage. In this case, continued smooth operation requires very careful planning and precise control of the system. For computers with an inaccurately defined workload, such an approach can easily result in process crashes and thus in an uncontrollable environment.

Swappiness

Much optimism is thus focused on a setting that the Linux kernel has supported since version 2.6.6 [12]. Below /proc/sys/vm/swappiness is a run-time configuration option that lets you influence the way the kernel swaps out pages. The swappiness value defines the extent to which the kernel should swap out pages that do not belong to the page cache or other operating system caches.

A high swappiness value (maximum 100) causes the kernel to swap out anonymous pages intensely, which is very gentle on the page cache. In contrast, a value of 0 tells the kernel, if possible, never to swap out application pages. The default value today is usually 60. For many desktop PCs, this has proven to be a good compromise.

You can quite easily change the swappiness value to 0 at run time using this command:

echo 0 > /proc/sys/vm/swappiness

This setting, however, does not survive a restart of the system. To achieve a permanent change, you need an entry in the /etc/sysctl.conf configuration file. If you set vm.swappiness = 0, for example, the kernel specifically tries to keep application pages in main memory. This should normally solve the problem.

Unfortunately, the results discussed later show major differences in the way some distributions implement this. Additionally, swappiness changes the behavior for all applications in the same way. In an enterprise environment in particular, just a few applications will be particularly important. The swappiness approach cannot prevent cut-throat competition between applications if you have a system running multiple large applications. Only careful distribution of applications can help here.

Figure 4: Measurements for evaluating kernel adjustments (statistical mean values for several test cycles are shown).

Gorman Patch

The final approach, which currently promises at least a partial solution to the displacement problem, only became part of the official kernel in Linux 3.11. The patch by developer Mel
Gorman optimizes paging behavior with parallel I/O [13]. Initial results show that the heuristics incorporated into this patch actually improve the performance of applications if you have parallel I/O. However, it is still unclear whether this patch fixes the problem if the applications were inactive at the time of disk I/O – for example, at night. Will the users see significantly deteriorated performance in the morning because the required pages were swapped out?

The practical utility value of these approaches can be evaluated from different perspectives. The above sections have already mentioned some criteria. Kernel patching could be ruled out for organizational and (support) reasons in a production environment. The first important question therefore is whether a change in the application code is necessary. All approaches that require such a change are impractical from the administrator's view.

On the other hand, you cannot typically see the benefits of changes to the kernel directly but only determine them in operation. Thus, the following section of this article shows the effects of these changes for the same environment as described above. Based on these experimental results and the basic characteristics of the approach, you can ultimately arrive at an overall picture that supports an assessment.

Measuring Results

Measurements are shown in Figure 4. Restricting the size of the page cache shows the expected behavior: The applications are not much slower, even with massively parallel I/O. The kernel patch by Mel Gorman behaves similarly: It is already available in SLES 11 SP 3 and is going to be included in RHEL 7. It supports good, consistent performance of the applications.

Also, setting swappiness = 0 on SLES 11 SP 2 and SP 3 seems to protect the applications adequately. Amazingly, Red Hat Enterprise Linux 6.4 behaves differently: The implementation seems to be substantially different and does not protect applications against aggressive swapping – on the contrary.

The different values for swappiness do not show any clear trend. Although they lead to a clearly poorer performance of the application with increasing I/O, it is difficult to distinguish systematically between smaller values like 10 or 30 and the default value of 60. It seems that the crucial question is whether or not swappiness is set to 0. Intermediate values have hardly any effect.

Table 1 summarizes all the approaches and gives an overview of their advantages and disadvantages, as well as the measured results. Currently no solution eliminates all aspects of the displacement problem. The next best thing still seems to be a correct implementation of swappiness – and possibly, in future, the approach by Mel Gorman. Even with these two methods, however, administrators of systems with large amounts of main memory will not be able to avoid keeping a watchful eye on the memory usage of their applications.

Conclusions

Displacement of applications from RAM still proves to be a problem, even with very well equipped systems. Although memory shortage should no longer be an issue with such systems, the basically sensible, intensive utilization of memory by the Linux kernel can lead to significant performance problems in applications.

Linux is in quite good shape compared with other operating systems, but it can be useful – given the variety of approaches that is typical of Linux – to investigate the behavior of the kernel and the applications you use, so you can operate large systems with consistently high performance. For more details on experience with these systems and the tests used in this article, check out the Test Drive provided by SAP LinuxLab on the SAP Community Network (SCN) [14].

Info
[1] SAP HANA Enterprise Platform 1.0 Product Availability Matrix: [http://www.saphana.com/docs/DOC-4611]
[2] Silberschatz, A., G. Gagne, and P.B. Galvin. Operating System Concepts. Wiley, 2005.
[3] Magenheimer, D., C. Mason, D. McCracken, and K. Hackel. "Transcendent memory and Linux" in Proceedings of the Linux Symposium 2009, pp. 191-200: [http://oss.oracle.com/projects/tmem/dist/documentation/papers/tmemLS09.pdf]
[4] AMD Inc. "AMD64 Architecture Programmer's Manual Volume 2: System Programming," Section 5.1: [http://support.amd.com/TechDocs/24593.pdf]
[5] Love, R. Linux Kernel Development. Addison-Wesley, 2010.
[6] Man page for mlock: [http://www.lehman.cuny.edu/cgi-bin/man-cgi?mlock+3]
[7] "Huge Pages" by M. Gorman, Linux Weekly News: [http://lwn.net/Articles/374424/]
[8] Man page for mmap: [http://unixhelp.ed.ac.uk/CGI/man-cgi?mmap]
[9] Stevens, W.R., and S.A. Rago. Advanced Programming in the Unix Environment. Addison-Wesley, 2008.
[10] Cgroups documentation: [https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt]
[11] Man page for open: [http://unixhelp.ed.ac.uk/CGI/man-cgi?open+2]
[12] "2.6 swapping behavior" by J. Corbet: [http://lwn.net/Articles/83588/]
[13] "Reduce system disruption due to kswapd" by M. Gorman: [http://lwn.net/Articles/551643/]; patchset: [https://lkml.org/lkml/2013/3/17/50]
[14] SAP LinuxLab, miniSAP: [http://www.sap.com/minisap]

The Authors
Alexander Hass, who has been with SAP LinuxLab since 2002, collaborates with Linux distributors and provides support for customers' systems to, among other things, reduce the effect of the Linux page cache on production operations.
Willi Nüßer is the Heinz-Nixdorf Foundation Professor for Computer Science at the Fachhochschule der Wirtschaft (FHDW), University of Applied Science, in Paderborn, Germany, where he develops and directs large and small R&D projects. He previously worked for SAP AG for six years, where as a developer at SAP LinuxLab, he was responsible for, among other things, porting SAP memory management to Linux and supporting various hardware platforms.
window, ranges from 0 (less risky) to 100 (more risky). Because these two processes started after the malware installed, they are likely bad. When I compared the Start Time of svchost.exe (PID 1560) with other svchost processes (Figure 3), I saw that it appeared to have started about 30 minutes after the other svchost processes.

The parent process of svchost.exe (PID 3028) is WScript.exe (PID 1736); this is discovered by looking at Hierarchical Processes. After clicking on WScript.exe (1736), I discovered the user account that was logged on when the process was spawned and the full path of the process binary (Figure 4). Next, I clicked on the Handles tab located below and viewed the Handle Names; notice the Untrusted status in Figure 5. Using Redline to check for signed code may reveal suspicious executables.

To look for evidence of code injection, I chose Processes | Memory Sections located in the Analysis Data pane. This choice brings up memory pages for every process, although the particular malware I was investigating contained no injected memory sections that Redline could find.

Finally, I used Process | Strings to look for additional evidence by entering http://, https://, .exe, and the like. In Figure 6, I'm searching for any cmd.exe run by WScript.exe. Other interesting strings to enter in the search box would be common Windows system and network commands, such as finger, net use, netstat, and so on. Understanding normal activity is key when you start looking for badness.

Figure 2: Hierarchical Processes.
Figure 3: Comparing different running svchost processes.
Figure 4: Finding the user account and the full path of the process binary.
Figure 5: Viewing a process and its Handles.

Malware Image
The malware image I am using in this article is a variant found by the Palo Alto PA-5000 series firewall [5] on a Windows box in our network, which was sent for further investigation to a sandbox that Palo Alto uses for such cases. Moments later, I received email telling me that malware was discovered by Palo Alto WildFire analysis [6].
WildFire identifies unknown malware, zero-day exploits, and advanced persistent threats by executing them directly in a scalable, cloud-based, virtual sandbox environment. The report, which goes into detail about what the malware has done, gave me a link to VirusTotal [7], used to score the executable for maliciousness, along with a PCAP file of network traffic generated by the malware. I was able to download the known malware variant and execute it on my closed test VM network for observation; then, I used both Volatility and Mandiant Redline to research the culprit.
If you don't know what normal traffic activity looks like, you will be lost when trying to find said malware. One way is to better understand the operating system you are analyzing – in this case, Windows.

Understanding Windows process structure helps; for example, csrss.exe is created by an instance of smss.exe and will have two or more running instances. The start time is within seconds of boot time for the first two instances (Sessions 0 and 1). Start times for additional instances occur as new sessions are created, although often only Sessions 0 and 1 are created. (This information is available on the SANS DFIR poster [8].) Looking at Hierarchical Processes in Redline will reveal an instance of smss.exe that spawns csrss.exe. Another interesting item to research is svchost.exe, which is used to run service DLLs and whose parent process is services.exe. Windows will

Volatility

To begin, I go to the workstation that took the image off the Windows machine with the F-Response tool and open a terminal. After changing to the case files directory, I enter

$ python vol.py -f remote-system-memory11.img imageinfo

to get information about the image with imageinfo (Figure 7). The Suggested Profile(s) line in the output suggests parameters I can pass in to Volatility (with --profile=PROFILE); you may see more than one profile suggestion if profiles are closely related. I can figure out which one is most appropriate by checking the Image Type field, which is blank for Service Pack 0 and filled in for other Service Packs. To find the processes and DLLs, use

$ vol.py --profile=PROFILE -f FILE.img PROC

where PROFILE is one of the suggested profiles from the imageinfo output (here I'll use WinXPSP2x86), FILE is the memory image of interest (in this case, remote-system-memory11.img), and PROC takes on the values in Table 1; that is:

$ vol.py --profile=WinXPSP2x86 -f remote-system-memory11.img PROC

Table 1: Process Information Options
PROC | Output
pslist | Lists the processes of a system. It does not detect hidden or unlinked processes.
pstree | Views the process listing in tree form. Child processes are indicated using indentation and periods (Figure 8).
psscan | Enumerates processes using pool tag scanning. This can find a process that previously terminated (inactive) and processes that have been hidden or unlinked by a rootkit (Figure 9).
dlllist | Displays process-loaded DLLs and shows the command line used to start the process (including services) along with the DLL libraries for the process.
dlllist -p 1764 | Displays process-loaded DLLs for PID 1764 WScript.exe (Figure 10).

Figure 8: pstree example. Notice the child processes indicated by indentation and periods.

Volatility Network Resources

To view active connections, use the connections command, or to find connection structures using pool tag scanning, use the connscan command:

$ vol.py --profile=WinXPSP2x86 -f remote-system-memory005.img connscan
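The connections plugin mentioned above is invoked the same way; the following call, with the same profile and image file as the connscan example, is shown only as an illustration of the syntax, not as output from this investigation:

# list active TCP connections recorded in the memory image
$ vol.py --profile=WinXPSP2x86 -f remote-system-memory005.img connections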
so I need to investigate PIDs 1792 and 132 further. Running psscan to enumerate processes does not show PID 1792, so that is a dead process; however, PID 132 shows up as svchost.exe. When I run pstree for more information, the indented list in Figure 12 shows parent and child processes.

Figure 12: The parent process of svchost.exe is wscript.exe, which is a child of explorer.exe.

Another plugin available in Volatility called malfind extracts injected DLLs, injected code, unpacker stubs, and API hook trampolines and scans for any ANSI string, Unicode string, regular expression, or byte sequence in processes or in kernel driver memory. The syntax for this plugin is

$ vol.py --profile=WinXPSP2x86 -f remote-system-memory005.img malfind -D memory/

which dumps all the processes with injected code in the directory called memory (Figure 13). The next step is to upload these .dmp files to VirusTotal to see if it can identify any known issues. The only suspect DMP file flagged by VirusTotal (Figure 14) was 0x370000.dmp, which is the F-Response tool used to extract the memory image. This is a false positive.

Figure 13: Using malfind results in a number of DMP files for analysis.
Figure 14: VirusTotal analysis of 0x370000.dmp (F-Response file) generated by running malfind.

Conclusion

After taking a class for GCFA certification [9] and learning how to use the tools described in this article, I started analyzing the malware found by the firewall on our network, along with a known malware variant from Palo Alto Networks. This example was a challenge to analyze because the investigation threw no obvious red flags, demonstrating how you need to dig deep to find threats to your systems. What I learned:
1. The memory image taken before infection showed communication with the Windows box and the forensic workstation, but no other connections.
2. The memory image taken after the infection showed communication with 10.10.3.180 (two instances), which is an internal IP address that does not exist on my test network.
3. The PIDs related to IP address 10.10.3.180 were 1792, a dead process, and 132, svchost.exe, which was a child of wscript.exe and had a parent process of PID 1648 (explorer.exe).
4. svchost.exe (PID 132) is a generic host process for Windows services and is used to run service DLLs; it should always be a child of services.exe. Because it showed up as a child process of wscript.exe, it was a clear indication of wrongdoing (Figure 12).

Info
[1] "Acquiring a Memory Image" by David J. Dodd, ADMIN, Issue 20, pg. 8: [http://www.admin-magazine.com/Archive/2014/20/Acquiring-a-Memory-Image/(language)/eng-US]
[2] F-Response: [https://www.f-response.com/software/tac]
[3] Volatility: [http://code.google.com/p/volatility/wiki/VolatilityIntroduction?tm=6]
[4] Redline: [https://www.mandiant.com/resources/download/redline]
[5] PA-5000 series: [https://www.paloaltonetworks.com/products/platforms/firewalls/pa-5000/features.html]
[6] WildFire: [https://www.paloaltonetworks.com/products/technologies/wildfire.html]
[7] VirusTotal: [https://www.virustotal.com/]
[8] SANS DFIR poster: [https://digital-forensics3.sans.org/media/dfir_poster_2014.pdf]
[9] GCFA certification: [http://www.giac.org/certification/certified-forensic-analyst-gcfa]
Features | TheSSS
Lightweight
Want to set up a full-fledged web, file, or proxy server in 10 seconds? No problem
with TheSSS, the smallest server suite in the world. The new 8.0 version of this
useful Linux distribution weighs in at a mere 30MB. By Tim Schürmann
The temporary workgroup you have set up wants to save its files on a dedicated FTP server; you want to try out a proxy to see if you can reduce the network load and set up a web server on the intranet to advertise the daily menu at the cafeteria. In other words, you need a small server – enter the smallest server suite in the world, TheSSS [1].

TheSSS runs as a Live system by default, but it can also be installed on the hard disk. You need to boot this lean Linux distribution on a server and then enable the services you require with a short command line. The fact that it runs in main memory is especially handy if you want, or need, to set up a service temporarily – for example, because the main FTP server is down.

Old Friends

Polipo proxy, which can use the Tor anonymizer service, if so desired. The 4MLinux firewall based on iptables adds security.

As a bonus, administrators also have the Clam AntiVirus scanner and a rudimentary backup program. For the sake of completeness, TheSSS also throws in a couple of minor league monitoring tools that diligently gather information about the system and the network. Among other things, they can tell you which of the enabled services is causing a high load.

Of course, you'll encounter a couple of minor drawbacks: The new 8.0 version of the Linux distribution is still only available as a 32-bit distro and therefore cannot use more than 4GB of RAM. Additionally, it still does not support UEFI firmware, thus forcing admins to enable the BIOS emulator on newer systems.

MySQL-compatible MariaDB database, and the Adminer database management tool (aka phpMinAdmin). Despite the fairly extensive additional components, the alternative ISO image takes up only 45MB. An 80MB multiboot CD (TheSSS-8.0-ToolBox.iso) is also available. Besides TheSSS with PHP, it also contains Antivirus Live CD [4], BakAndImgCD [5], and FreeDOS [6]. The boot menu gives you the choice between these Live systems. FreeDOS is primarily used to start the TestDisk tool and Ranish Partition Manager.

You can either burn the selected image to a CD or create a bootable USB stick. The TheSSS makers recommend the UNetbootin [7] tool for the latter approach. If you are already using an older version of TheSSS, you can use the ZK package manager to upgrade to the current 8.0 version.
parameters, press the Tab key. In any case, you boot into a console on a Linux 3.10.23 SMP kernel; TheSSS does not come with a GUI.

After starting the system, the all-powerful root user first needs to enter a new password. TheSSS initially rejects weak passwords, but it leaves them alone if you make a second attempt. Then, you log in as root with the previously assigned password (Figure 1). In our lab, TheSSS exhibited a strangely large vulnerability here: From time to time, say every seventh attempt, the system did not prompt for a password and simply logged root in.

Figure 1: The TheSSS login screen tells you the time and date.

The shell that welcomes the user here is BusyBox, which is tailored to mini-distributions. It includes many Unix commands such as tar and gzip as built-ins. They do not always fully implement the functionality of their GNU role models, but where the parameters do exist, they use the same syntax [8].

TheSSS automatically retrieves a network address via DHCP. If you prefer a static address or need special settings or a wireless connection, you can call on the netconfig tool to help you (Figure 2). In a small question-and-answer session, it guides you through the network setup. Furthermore, TheSSS automatically mounts all the filesystems it can see at startup. The contents of the partitions it finds are then accessible below /mnt.

Figure 2: The help command takes you to this help page, which briefly covers all the important commands.

Staff Entrance

The distribution comes with scripts that start the individual services and applications. For example, httpd start launches the web server, whereas httpd stop shuts it down. The default web server here is Tiny Server (thttpd); its configuration file is located below /etc/httpd/thttpd.conf, and the files it serves are located below /srv/http.

Alternatively, administrators can switch to the BusyBox web server. For this purpose, you delete, or simply move, the configuration file for thttpd and call httpd restart. Settings for the BusyBox HTTP daemon, which is then enabled, are available in /etc/httpd/httpd.conf. Following the same principle, you can launch all the other supported services. Table 1 gives an overview of the main services, their start commands, and the locations of the configuration files. Using serverd start starts all the supported services at once.

The Polipo proxy server listens by default on port 8123 and stores any cached pages in /var/cache/polipo. In addition to this in-memory operation mode, it can also swap out its data to the hard disk, as well as pass queries through to the outside via the Tor network. To do this, you simply set the mode in the configuration file to disk or tor.

The FTP server by default only supports file downloads. Uploading is only possible if you modify the configuration with the upload command. By default, the server allows anonymous access without a password and provides the files below /var/ftp/.

The PHP version of TheSSS also includes the MariaDB database, which is automatically launched with the web server. You can then access the Adminer web front end on http://localhost/adminer.php. Incidentally, the same startup scripts are available for all variants of TheSSS. If a service is missing in your flavor, the script just notifies you.

Encore!

The antivir script automatically downloads the ClamAV antivirus program from an archive hosted on Dropbox and updates the signature data. After the install, you decide how ClamAV should deal with suspicious files; it then automatically starts a scan of all files. The starting point is the root directory /, and the virus scanner checks all connected hard drives.

TheSSS also offers a few well-known monitoring tools. Nmon provides general system information (Figure 3), and Netwatch observes network activity. The nmonitor script serves up this and all the other monitoring tools for you to choose from.

Midnight Commander simplifies the process of working with files; you can launch it by typing mc (Figure 4). Links is a rudimentary browser that runs in text mode, similar to Lynx.

Typing backup starts a minimalist backup program, which first asks whether you want to store the files to be backed up on a USB flash drive or
an FTP server. Then, it simply starts Midnight Commander, in which the user then painstakingly copies the files to be backed up by hand. Using fsbackup simply backs up the content of one partition on another.

If you take a liking to TheSSS, you can install it on your hard disk with install2hd. You simply pass in an existing target partition to the script, in which it then creates a new ext4 filesystem – without any options for user intervention – and installs TheSSS. If you want to repartition your disk first, TheSSS provides the fdisk and cfdisk tools for this purpose. The target partition must be at least 1GB.

The bootloader TheSSS uses is the outdated LILO. If any other operating systems exist on your computer, the task of setting up a boot manager like GRUB is left to the user. The instal-

Figure 3: TheSSS comes with monitoring tools such as Nmon, which provides an overview of network traffic.
Figure 4: Midnight Commander helps with file management on the console.

Conclusions

TheSSS is the right choice if you need one of the services it provides in a hurry. Even if the thttpd web server cannot compete with Apache or Nginx, the range of functions is fine for most purposes on an intranet. In return, TheSSS – thanks to its small size – also runs on underpowered systems or systems working close to full load. Many administrators, however, are likely to miss their preferred tools.

The backup programs can also hardly be described as such; if you seriously want to back up your data, you would do better to choose a distribution that provides better support.

Info
[1] TheSSS: [http://thesss.4mlinux.com/]
[2] 4MLinux: [http://4mlinux.com]
[3] TheSSS on SourceForge: [http://sourceforge.net/projects/thesss/files/]
[4] Antivirus Live CD: [http://sourceforge.net/projects/antiviruslivecd/files/?source=navbar]
[5] BakAndImgCD: [http://sourceforge.net/projects/bakandimgcd/]
[6] FreeDOS: [http://www.freedos.org/]
[7] UNetbootin: [http://unetbootin.sourceforge.net/]
[8] BusyBox: [http://www.busybox.net/about.html]
Big Business
Modern enterprises require powerful ERP and CRM solutions to manage processes, but the high cost of proprietary solutions can be prohibitive. We look at some open source options. By Filipe Pereira Martins, Anna Kobylinska, and Jens-Christoph Brendel

The complex business processes in a modern enterprise, from procurement through receipt of incoming goods, the flow of goods through production, and the delivery of goods to customers, are almost unmanageable today without the support of powerful enterprise resource planning (ERP) and customer relationship management (CRM) solutions. A robust and flexible ERP/CRM system is vital for a modern company.

Proprietary or Open Source

According to a survey by Panorama Consulting Solutions, the three leading providers of proprietary ERP software – SAP, Oracle, and Microsoft (the so-called Tier I suppliers) – currently have a combined market share of 54 percent. All Tier II suppliers (which include less complex solutions with a comprehensive range of functions, such as Infor and Epicor) had to settle for a total market share of 14 percent last year. The rest of the market contenders – the Tier III providers – still account for a significant 31 percent cut of the cake. Tier III providers include industry-specific solutions that are tailored vertically and therefore can provide powerful functionality with minimal adjustments.

Proprietary ERP solutions admittedly have some benefits. ERP systems by the Tier I providers (SAP, Oracle, and Microsoft) score points with their sophisticated group-specific functionality, such as the ability to manage geographically dispersed sites in different countries in different languages and to model complex business processes in various industries. However, these group-specific features are mainly useful for large international companies and are not always beneficial to small and medium-sized businesses. The low added value of these features outside of large corporations is disproportionate to the considerable costs of implementing a Tier I solution. According to Panorama Consulting, these costs range in the single- to double-digit millions.

In addition to the high cost for a proprietary solution, users must contend with the problem of being locked in to a single vendor. The inflexibility and high cost of proprietary ERP/CRM tools has caused many companies to look for open source alternatives.

Open source ERP solutions offer two key advantages. For one thing, the company can take an open source solution for a test drive at virtually no cost to check in depth its suitability for enterprise needs. Additionally, users are free to modify the open source ERP system. The stronger the open source community of experienced developers, the easier the task. Many service providers now specialize in providing customized open source ERP software to small and medium enterprises (SMEs).

The Open Source Alternative

Open source ERP/CRM solutions are many and varied. Some are turnkey; others must be adapted. Some score points with powerful modules, and others with numerous experienced system integrators. Given that implementation of a CRM system can take up to two years to complete, it is not surprising that finding the perfect solution places an enormous burden on IT managers.

The evaluation of a CRM system begins with creating a requirements profile that maps all the relevant processes. The intent is to examine how to retrofit missing functionality or possibly add that functionality by integrating software packages, such as loadable modules, by different vendors. More or less any ERP system will do a satisfactory job of handling the standard functions (purchase order, goods receipt, invoice verification, etc.). Features for further automation of business processes, such as automatic data acquisition from suppliers, can further increase the practical benefits if they are implemented correctly and completely.

Exporters often face special requirements. Export declarations within the European Union have been regulated for over a decade by ATLAS (Automated Tariff and Local Customs Clearance System). The formalities basically involve filling out a form. Nevertheless, very few proprietary ERP systems, such as SAP ERP, can offer active support for ATLAS. However, whether these features justify the investment costs in an individual case is another matter. Many companies are willing to retrofit missing functionality.

The market for ERP solutions, especially in the upper price sector, is now clearly saturated. Although large companies have at least partly covered their needs, ERP software is still new territory for smaller businesses. Open source ERP software solutions come in many shapes and forms. Some systems are available only as a download for on-premises or cloud installations for which integrator support is available. In this article, we highlight some popular open source ERP and CRM alternatives.
Opentaps ERP + CRM

URL: [opentaps.org]

Strengths:
• Combines Apache OFBiz with an object-oriented, domain-driven architecture based on Java J2EE
• Offers authentication via LDAP and Kerberos SSO
• Supports SSL, OWASP, and data encryption
• Enables the design of ETL (Extract, Transform, and Load) operations with graphical tools
• Cooperation with all the leading databases, including MySQL (including opentaps Analytics), PostgreSQL, Oracle, and SQL Server
• Preconfigured AMIs for use in the AWS (Amazon Web Services) cloud

Weaknesses:
• Opentaps Analytics uses a data warehouse based on MySQL, even if it manages the ERP data in a different database

Opentaps, short for Open Source Enterprise Applications Suite, is an integrated ERP solution with a full CRM feature set including warehouse and inventory management, supply chain, finance, business intelligence, e-commerce, POS, and mobility (Figure 2).

Opentaps was originally launched as a module for Apache OFBiz with the aim of expanding the functionality of the open source framework to include an ERP/CRM suite. Opentaps combines Apache OFBiz with an object-oriented, domain-driven architecture based on Java J2EE. Current versions of opentaps include Apache OFBiz, as well as the Apache Tomcat and Apache Geronimo servers.

The particular advantages of opentaps compared with Apache OFBiz include functions related to user authentication (using LDAP and Kerberos SSO), data security (SSL support, OWASP, and the encryption of critical company data), and accounting. Opentaps also impresses with sophisticated functionality in the fields of customer service/CRM, order management, inventory, procurement, planning, budgeting, and finance. Last but not least, the opentaps user interface offers significantly better usability than the OFBiz framework.

The now very popular opentaps ERP/CRM software also comes with many well-thought-out features. For example, parts of production can be outsourced in the short term to external manufacturing contractors in case of unpredictable resource bottlenecks – in the scope of multistage manufacturing processes.

Thankfully, the developers of opentaps have not attempted to reinvent the wheel. In the area of business intelligence, opentaps relies on the also open source Pentaho system and the open source JasperReports software for reporting. Reports can be designed with iReport – a JasperReports graphical tool – in a very intuitive way and then opened in opentaps and executed from there.

Integration with Pentaho allows data from various sources, including Hadoop and NoSQL databases, to be loaded into a data warehouse system. Support for the data integration system by Pentaho allows the purely visual design of ETL database operations. The user can then save the visually created transformations and run them in opentaps.

IT administrators can try out opentaps by launching an opentaps AMI (Amazon Machine Image, a preconfigured template of an executable EC2 instance) in AWS. There are currently two official AMIs:
• An opentaps mini-AMI with functionality for small deployments with up to 10 users (not suitable for e-commerce or MRP inventory planning) at a price of US$ 0.25 per instance and hour.
• An opentaps full AMI with a full installation at a price of US$ 0.50 per instance and hour.

Unfortunately, the AMIs could not be found on the AWS Marketplace; instead, direct links to the vendor's site take you there [2]. Both AMIs offer a preinstalled opentaps environment with a MySQL database preconfigured for production use. To facilitate creating your own company, the installation comes with a turnkey template that does not include any company-specific data. In this way, companies can take the system for a test drive for very little expense and investigate it in detail with regard to their own needs. For permanent use, however, the use of reserved EC2 instances is recommended to reduce operating costs.

Despite its many strengths, opentaps is perhaps not everyone's darling. If you want flexibility to choose the individual components of the system yourself, you are better served by Apache OFBiz. If you are looking for a powerful CMS system, you might be better served with a combination of Apache OFBiz and Magnolia CMS instead of opentaps.

Figure 2: The opentaps ERP software used by Toyota Great Britain (source: opentaps).
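If you prefer to script the test drive rather than click through the AWS console, launching one of the official AMIs boils down to a single AWS CLI call. The following is only a sketch: the AMI ID, key pair, and security group are placeholders that would have to be replaced with the values published on the opentaps site [2].

$ aws ec2 run-instances --image-id ami-xxxxxxxx \
    --instance-type m1.small --key-name my-keypair \
    --security-groups opentaps-sg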
ERP5

URL: [erp5.com]

Strengths:
• Unique ERP workflow options
• Mature internationalization
• Easy data extraction using OLAP
• Support for data warehousing
• Various vertical market solutions
• Operation via web browser

Weaknesses:
• Only poor and incomplete documentation available
• Stagnation in the evolution of ERP5

ERP5 is a popular middle-market solution that covers all the typical functional areas of an ERP system, from sales, inventory management, salary calculation, personnel management, product design management (PDM), CRM, and e-commerce through accounting. ERP5 also provides some customized, integrated solutions for markets such as aerospace, fashion, and banking. In particular, the solution scores points in the healthcare and public administration sectors (Figure 3).

The system, which was developed in Python, uses the Zope application server as its base on Linux, Windows, OS X, and other Unix derivatives. Connectors to relational databases include MySQL, Postgres, SQL Server, IBM DB2, and Oracle.

However, a weakness exists that is not to be underestimated and that has raised concerns among many users: the clear stagnation in the evolution of ERP5. Although nothing is technically wrong with ERP5, enterprise users rightly expect a solution to be future-proof, and ERP5 cannot offer this, despite all of its features.

Figure 3: Variants of ERP software in use in the enterprise (source: ERP Report 2013 by Panorama Consulting Solutions).
…merger of CDC Software and the software forge Consona, which previously had secured the rights to Compiere through an acquisition.

Aptean points to the availability of a cloud edition of Compiere on AWS. Unfortunately, we were unable to find this (Figure 4), which is too bad, because it would give interested IT administrators an option for trying the ERP system with low costs and overhead. Ultimately, that is the idea of a commercial AMI. Interested parties are thus left with no option other than to download the Community Edition, manually install the software package, and battle their way through the database setup.

With Compiere, Aptean addresses small to medium-sized businesses that require particular flexibility in an ERP software. Compiere focuses on business processes rather than on organizational structures and is highly suitable for use in modern, agile, medium-sized companies (large corporations are typically happy with rigid organizational structures and a lower degree of flexibility). Incidentally, the word "compiere" comes from Italian and means to do or accomplish something.

Compiere provides multiple vertical market solutions for companies in the distribution, retail, manufacturing, and service sectors. The scope of services includes finance, management of distribution channels and production processes, calculation of the order costs/project-specific accounting, and sales via e-commerce and point of sale. The suite supports Linux and Windows. The supported databases are Oracle Standard Edition/Enterprise Edition/XE or Postgres Plus Advanced Server/EnterpriseDB. Access to Compiere applications is via a Java client (currently compatible with Java JDK 1.6/7) to allow desktop users a free choice of platform.

In the Community Edition, unfortunately, mobile users with tablets and smartphones are left out in the cold, because only the Enterprise Edition provides a browser-based user interface. Also, only Enterprise Edition users are given access to the latest bugfixes and can currently enjoy version 3.8.3 from 13 February 2014, rather than having to make do with the very outdated version 3.3 from March 2, 2009, like the users of the Community Edition.

In its turbulent history, Compiere has repeatedly changed hands. The successful and robust model-driven architecture in conjunction with the open source nature of the software have now – to the indignation of many open source developers who specialized in Compiere – opened the floodgates to countless imitators who draw heavily on concepts introduced by the pioneer of open source ERP software. Among others, the developers of the ERP systems ADempiere, Adaxa Suite, iDempiere, and Openbravo have made heavy use of the Compiere code base.

Figure 4: Compiere sports the AWS logo; unfortunately, it cannot be found in the AWS cloud.
…catalog of extensions [3], featuring nearly 300 entries, open source modules are unfortunately the exception rather than the rule. Most of these solutions are proprietary and commercial, and some even require the Professional Edition of Openbravo.

Thanks to the open API, the system is expandable. The developers focus on agility, mobility, and cloud support. It is not just that the user interface of the Mobile Edition of Openbravo is based on HTML5, it is also touch-enabled and runs in an ordinary mobile web browser. Using a framework like PhoneGap allows the Openbravo mobile front end to be packaged as a mobile app.

The excellent usability of the browser-based Openbravo GUI offers users great flexibility. If users have incomplete data, they can navigate between different tabs and browser windows and continue their work without losing form input (Figure 6).

Stephen Schober, the CEO and managing director of Metal Supermarkets, who uses Openbravo and provides it to his concession partners, said: "Openbravo turned out to be the best of the 80 solutions that our project team investigated. We were particularly impressed by the high degree of flexibility with regard to the setup (system and user environment), language localization, tax rules, and data management." Visual Governance, another North American user of Openbravo, can even quantify the benefits of implementing the ERP system: Thus far, the solution provider for compliance solutions has been able to save 40 percent of its administrative time and cost overhead.

Openbravo is represented on AWS with official AMIs [4] based on Ubuntu Linux or Openbravo appliances in four regions (including one in the EU). Most active users (99.9 percent) of Openbravo manage without any commercial support (Figure 7). Only 0.1 percent use commercial features and services. Openbravo is one of the few open source CRMs that can compete with the three commercial top dogs.

Figure 6: The dashboard of a running Openbravo ERP system.
xTuple ERP PostBooks Edition (previously OpenMFG)

URL: [http://www.xtuple.com/get-xtuple-erp]

Strengths:
• Manufacturing-oriented product
• Good process and manufacturing flow
• Strong reseller and developer community
• Available for Windows, Linux, BSD, and Mac OS X

Weaknesses:
• Extensions and modules can cost several thousand dollars
• The extensive technical PDF documentation is unfortunately not free of charge
• Useful functions for international trade only in the commercial version

The xTuple ERP PostBooks Edition is a very functional ERP/CRM system for SMEs in the manufacturing industry that features integrated accounting. The commercial editions include a production management system with built-in support for billing in multiple currencies, multilingual localization packages, and adaptable taxation rules (Figure 8).

The software first emerged under the name "OpenMFG" as a solution to a specific problem for a specific client, Cedarlane Natural Foods Inc., and was distributed for a time under a commercial license. Cedarlane is a supplier to supermarkets in the United States and Canada and thus optimized the ERP and CRM software for its own requirements. A positive side effect of this development is that OpenMFG gained a flexible electronic data exchange interface and powerful price- and cost-controlling tools.

The system supports the manufacturing of products both for stock and for delivery to order and was developed in line with the APICS standards. It uses PostgreSQL, the Qt framework for C++, and the REST API. Unfortunately, the prices of what are otherwise interesting extensions, as well as the prices of the specialist commercial editions, are quite high and therefore unrealistic. The lack of an open API has led to a lack of third-party solutions, which seems to be bringing about the slow extinction of the entire ecosystem. This is definitely not in the spirit of open source.

Figure 8: XTuple, the successor to OpenMFG, comes as a free edition – unfortunately, with countless restrictions.
SugarCRM
URL: [sugarcrm.com]
Strengths:
• Very extensive scope of commercial editions
• A high degree of interoperability
• Support for mobile devices with a mobile app
• Available as a SaaS solution

Weaknesses:
• Restricted functionality of the Community Edition
Conclusions

When choosing an ERP/CRM system, the feature scope and three other important criteria play a central role: scalability, vendor lock-in, and interoperability.

ERP/CRM solutions that deserve the term "professional" have almost exclusively been commercial in the past. Companies who were not willing to invest in a commercial ERP system were sidelined; only in the rarest of cases did developing an open source solution with the required functionality prove genuinely practical.

The users of a proprietary ERP solution have to accept critical limitations: They have to tolerate vendor lock-in and lack of interoperability. Errors in evaluation or implementation of such a critical IT system as ERP software can have catastrophic consequences for the company concerned. Companies who identify the weaknesses in their choice of a proprietary solution too late already have their data locked in. Unfortunately, at that point, there is often no meaningful way of turning back, given the high costs.

Fortunately, the rapid pace of innovation has produced a variety of alternatives in recent years in the open source ERP/CRM sector. Open source solutions offer the enterprise plenty of scope for testing and extending the available feature set.

Commercial licenses and proprietary software do not give users any options for modifying the acquired solution to suit their own needs. This aspect of an ERP solution has enormous significance for agile, medium-sized companies. The open source solutions that look particularly promising in this context are opentaps, Openbravo, and SuiteCRM (based on SugarCRM).

Providers of the leading open source ERP/CRM systems have recognized the importance of the cloud at an early stage and put their money on AWS. The availability of turnkey EC2 instances not only gives admins the opportunity to take the short-listed solutions for a test drive but also means they scale on demand. Companies can combine their open source ERP/CRM systems with the high scalability and performance of the cloud, without the long-term commitment to a supplier or the financial burden of licensing costs for several years. A company can set up a needs-driven ERP/CRM system that suits its own requirement profile, independent of company size.

Info
[1] Apache OFBiz: [http://ofbiz.apache.org/]
[2] Official opentaps AMIs: [http://www.opentaps.org/Cloud]
[3] Catalog of Openbravo modules: [http://www.openbravo.com/modules-catalog]
[4] Official Openbravo AMIs: [http://wiki.openbravo.com/wiki/Installation/Amazon_EC2/Select_your_AMI]
[5] X2Engine: [http://www.x2engine.com/]
[6] SuiteCRM: [http://www.suitecrm.com]
Iron Ore
Google Compute Engine removes the technical and financial headaches of maintaining servers, networking, and storage. By Joseph Guarino
Cloud computing is a fundamental evolutionary step in the world of computing. Using the cloud lowers initial investment, reduces costs, and improves ROI, with the option to be elastic, scalable, and infinitely performant. It's the best of both worlds for technical people and bean counters alike. Google Compute Engine is a stellar IaaS (Infrastructure as a Service) example that is part of a larger suite of options, including the App Engine PaaS (Platform as a Service). Before diving in, though, take a look at the many Google cloud services (Table 1), which should cover just about anything you would want to build.

Google Compute Engine

Google Compute Engine was opened to the public in June 2012, a bit later than most other players in the cloud marketplace. Arrival time aside, it is a powerful, scalable, and performant IaaS solution. Compute Engine allows you quickly and easily to create anything from a simple single-node VM to a large-scale compute cluster on Google's world class infrastructure. As of this writing, it supports several stellar open source Linux distributions (and one closed-source option), including Debian and CentOS; CoreOS, FreeBSD, and SELinux [2]; and Red Hat Enterprise Linux, SUSE, and Windows [3].

Instances are available with many options and are completely customizable from a hardware perspective. You can choose the number of cores, RAM, and other machine properties, and you can scale them as you grow [4]. Virtual instances start at a micro instance (f1-micro), with one core and 0.60GB of memory, and go up to 16 cores and 104GB of RAM. For the sake of the demo here, I will be using a shared core micro instance (g1-small; Table 2). Competition from Amazon, Microsoft, Rackspace, and
others in the cloud marketplace has put increasing downward pressure on the price of many cloud offerings.

Constructing your own server in Google Compute Engine (GCE) is easy. Your first requirement is to have a Google account. If you have any experience with any other cloud platform (e.g., Amazon AWS and its AWS management console), you will feel right at home in the Google Developer Console [5].

Like other cloud services, GCE offers both a UI (user interface) and API (application programming interface). In this article, I focus on the basics of using the UI and related utilities to get you up and running quickly.

Table 1: Google Cloud Services

Google Compute Engine: Allows you to build hosted virtual machines in Google's cloud. As an IaaS option, it allows you to right scale and reduce your costs at the same time.
Google App Engine: Allows you to focus on coding your application and not on infrastructure, configuration, or administration. As a PaaS solution, it supports the Python, Java, PHP, and Go languages.
Google Cloud SQL: Fully managed MySQL at your fingertips, with no headaches of administration, scaling, or replication.
Cloud Storage: Highly available storage service for your data. It offers standard storage or durable reduced availability (think backup), OAuth, and granular access control.
Cloud Datastore: Offers a managed NoSQL database for storing non-relational data. This is a robust, powerful solution without the headaches.
BigQuery: Lets you analyze big data in the cloud with massive scalability and the fast data queries required of big data problems.
Cloud DNS: Google's high-performance domain name system (DNS) in the cloud. It is manageable via the command line or scriptable with Python.
Cloud Endpoints: Allows you to create RESTful services available to iOS, Android, and JavaScript clients. It features DDoS protection, OAuth support, and client key management.
Translate API: Helps you translate your application to another language programmatically. It supports most popular languages, but sadly lacks Klingon support. Lu'?
Prediction API: Allows use of Google's machine learning algorithms to analyze data. It is a powerful way to gain insights into future trends using historical data.
Cloud Deployment Manager: Provides a way to design, create, and deploy system templates. It also lets you actively monitor the status of your Google Cloud post-deployment.
Cloud SDK: A powerful group of tools and libraries for orchestrating your Google Cloud deployments, with the ability to control App Engine, Compute Engine, Cloud Storage, BigQuery, and Cloud SQL.
Push-to-Deploy: Lets you use Git to deploy your application automatically to App Engine. It works for applications written in Python, PHP, and Java.
Cloud Playground: Lets you try out App Engine, Cloud Storage, or Cloud SQL right in your browser. It also supports importing projects directly from GitHub.
Android Studio: Allows you to develop, debug, and put your code to work in Google's Cloud Platform from this new Android development environment.
Google Plugin for Eclipse: Software development tools for Java developers to aid in the design, build, and deployment of cloud-based App Engine applications.

Setting Up Your Project

To get started with GCE, you need to set up a project name and ID. To begin, choose a project name and project ID, then click Create Compute Engine Instance, as is detailed in Figure 1. As with other cloud computing services, you have a dizzying array of options from which to choose.

On the left side of the window, select Compute Engine. To run your instance, you first will have had to set up payment. Simply click on Setup Billing, fill in the required information, and submit it.

Creating a New Instance

Once you have created a project and entered your billing information, you are finally ready to add a new Google Compute Engine instance. As shown in Figure 2, you need to fill in the name of the server and any other
desired configuration. The only elements you might need to customize to your own needs include:
• Zone: Specifies the geographic region in which your virtual instance and its data will be located. Generally choose the one closest to the clients you will serve.
• Machine type: Lets you choose system specifications in terms of processors and RAM.
• Image: Lets you choose an operating system from the many supported options. Currently supported OSs are Debian, CentOS, Red Hat Enterprise Linux, SUSE, and Windows [3].
• Network: Specifies the network that traffic can access. In this example, default is correct.
• External IP: Specifies the external address allotted when the instance is created. Here, the default Ephemeral address is bound to the instance as long as it exists.

Now that you have created a project and set up a GCE instance through the web interface, I'll explore setting up, managing, and controlling the project through Google's suite of command-line tools via the Cloud SDK.

Cloud SDK

The Google Cloud SDK is a set of tools and libraries to create and manage your Google Cloud. It supports App Engine, Compute Engine, Cloud Storage, BigQuery, Cloud SQL, and Cloud DNS. Before going further, you must meet the following Cloud SDK requirements:
• Python 2.7.x
• Java 1.7+ (for App Engine)
• A supported OS: Windows (requires Cygwin [6]), Mac OS X, Linux

To set up Gcutil [7], you must download and install the Google Cloud SDK. On the Linux distro of your choice, enter the commands

$ curl https://dl.google.com/dl/cloudsdk/release/install_google_cloud_sdk.bash | bash
$ unzip google-cloud-sdk.zip
$ ./google-cloud-sdk/install.sh
$ gcloud auth login

to transfer the SDK to your machine, unzip the file, run the installation script, and log in.
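At this point you can also drive GCE entirely from the shell. The following is only a sketch, assuming a hypothetical project ID (my-gce-project); the instance name, zone, and machine type match the examples used in this article, and gcutil typically prompts for anything left unspecified, such as the image:

$ gcutil --project=my-gce-project addinstance gcerocks-instance-1 \
    --zone=us-central1-b --machine_type=g1-small
$ gcutil --project=my-gce-project ssh gcerocks-instance-1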
Google Compute Engine uses the OAuth2 standard for authentication and authorization to access the Google Cloud. OAuth allows users to share data with your website or application while keeping their username and password – and other sensitive information – private.

With a Cloud SDK and authentication, you can now SSH into your new instance. As you see (Listing 1), Google Cloud SDK sets up key-based authentication and takes you right into the instance specified in the gcutil command: gcerocks-instance1.

Listing 1 (excerpt)
04 INFO: Zone for gcerocks-instance-1 detected as us-central1-b.
05 WARNING: You don't have an ssh key for Google Compute Engine. Creating one now...
06 Enter passphrase (empty for no passphrase):
07 Enter same passphrase again:
08 INFO: Updated project with new ssh key. It can take some time for the instance to pick up the key.
09 INFO: Waiting 10 seconds before attempting to connect.
10 INFO: Running command line: ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/joe/.ssh/google_compute_engine -A -p 22 [email protected] --
11 Warning: Permanently added '1.2.3.4' (ECDSA) to the list of known hosts.
12 Enter passphrase for key '/home/joe/.ssh/google_compute_engine':
13 Linux gcerocks-instance-1 3.2.0-4-amd64 #1 SMP Debian 3.2.54-2 x86_64

Note that it is always good practice to put in a strong passphrase when asked to do so. Never leave it blank. Also mind the security of the local machine you use to manage your Google Cloud.

With a Cloud SDK set up, you have a range of utilities to manage your cloud (Table 3). If you use Gcutil standalone, it automates the setup of key-based authentication for SSH access to your instance. Gcutil uploads and creates a public/private key and uploads your public key to the cloud. Finally, it associates the key with your Google account, giving you access to any instance you create. As always, setting up Gcutil with key-based authentication is helpful but means little if you fail to add a strong passphrase to protect your key and lock down your local machine.

Firewall in the Cloud

Next, you need to set up your cloud instance by configuring a firewall and adding persistent storage. All new instances by default block all external traffic, which is a smart security move from Google; default deny is always a good idea. To make the services you install available, you need to open up the firewall rules to that newly created instance.

To create a new firewall rule, click Networks, choose the default network (created with this instance), and go to Firewall | Create a new Firewall. Where you see default rules, click Create new. For example, Figure 3 shows an Nginx web server with HTTP on port 80 and HTTPS (SSL/TLS) on 443.
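The same rule can also be created from the command line with gcutil. This is a sketch under the assumption that the instance uses the default network; the rule name and project ID are placeholders:

$ gcutil --project=my-gce-project addfirewall allow-web \
    --description="Allow HTTP and HTTPS to the Nginx instance" \
    --allowed="tcp:80,tcp:443"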
Adding Storage

GCE has two kinds of storage: scratch disks and persistent disks. When you create a GCE instance, for example, you get a default disk of 10GB. This "scratch space" storage shouldn't be used to save mission critical data and can't be used to share data; instead, you should use a persistent disk. A scratch disk is tied to the virtual instance itself and will not be as performant as persistent storage with Google Cloud. Remember, scratch storage isn't where you store or back up critical data – unless you like to lose data – because you might delete and recreate instances.

A persistent disk is separate from any instance and exists outside your virtual instances. You can think of these as your virtual enterprise cloud storage that you create, format, and mount to make available to your instances.

Adding a persistent disk can be done both with gsutil and from the web GUI. For the sake of space and to get you up and running quickly, I will use the quickest method: the web console. Again, those familiar with almost any other cloud provider will feel right at home with the ease of use and power of the Google Cloud Platform.

Adding persistent storage is as easy as going to the Google Cloud Console and navigating to Compute Engine | Disks and then New Disk (Figure 4). Fill in a name for this disk and any related description; then, pick a zone (same zone as you specified before or it will not work) and select a source type of None (blank disk).

Finally, select a size for the new persistent disk and click Create, then click on your instance and scroll down to the Disks section. Select attach and add the disk you just created with read/write (Figure 5). Now you should SSH into your instances and look at your current disks (Listing 2).
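Inside the instance, the newly attached volume still needs a filesystem and a mount point before you can use it. A minimal sketch, assuming the disk appears as /dev/sdb (check the device name first, for example with dmesg):

$ sudo mkfs.ext4 /dev/sdb        # create a filesystem on the new disk
$ sudo mkdir -p /mnt/pd0         # create a mount point
$ sudo mount /dev/sdb /mnt/pd0   # mount the persistent disk
$ df -h /mnt/pd0                 # confirm the new capacity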
With your cloud infrastructure set up, you are primed to build whatever you like with this powerful IaaS cloud option, so have some fun in the cloud playground.

Figure 5: Attaching a disk.

Info
[1] Google Cloud Platform: [https://cloud.google.com/products/]
[2] CoreOS, FreeBSD, and SELinux can be imported via instructions at [https://developers.google.com/compute/docs/operating-systems]
[3] Red Hat Enterprise Linux, SUSE, and Windows are commercially supported operating systems, so they are offered as paid options: [https://developers.google.com/compute/docs/operating-systems]
[4] Google Compute Engine: [https://cloud.google.com/products/compute-engine/]
[5] Google Developer Console: [http://cloud.google.com/console]
[6] Cygwin: [http://cygwin.com/]
[7] gcutil: [https://developers.google.com/compute/docs/gcutil/]
[8] gcloud: [https://developers.google.com/cloud/sdk/gcloud]
[9] Gcutil command reference: [https://developers.google.com/compute/docs/gcutil/reference/]
[10] Google Cloud SDK: [https://developers.google.com/cloud/sdk/]

The Author
Joseph Guarino is a Senior Consultant/Owner at Evolutionary IT, which provides Business and Information Technology solutions to the New England area and beyond. In his free time, you will find him writing, teaching, speaking, brewing delicious ales, and digging on FOSS projects. You can find and connect with Joseph online on all social networks from [http://network.evolutionaryit.com]
Central Register
Centralized user management with LDAP or Active Directory is the standard today, although many
prefer to manage user data manually rather than build this kind of infrastructure. In this article, we
look at a better approach with OpenLDAP. By Ulrich Habel
Today's OpenLDAP server was created by Kurt Zeilenga in 1998 as a clone of source code for the LDAP (Lightweight Directory Access Protocol) server at the University of Michigan. Interestingly, the OpenLDAP project has never been dormant but has evolved constantly and is therefore still as progressive and groundbreaking as ever. However, strict enforcement of the numerous changes also puts off some users.

Usually, changes of elemental components in a version are marked as deprecated with an appropriate warning, indicating which functionality will no longer be available in the next version. Although this means ensuring progress for the project, for the administrator, it means always having to keep on your toes.

The OpenLDAP server has a long history in the Unix world. The beginnings of the project date back to 1998, when the issue of central user administration was only taken seriously in the enterprise environment. Small isolated solutions were then the basis for central user administration; directory servers were only available from major IT vendors. Older readers will perhaps smile when they think back to the beginnings of domain management on Windows NT, Novell Netware, or NIS. A mature service was also available in the form of X.500, but it was not very widespread in practice. LDAP was originally designed as a protocol for X.500 services. This mutated into the LDAP directory servers that are seen in a variety of uses today.

What Is a Directory Server?

A directory server provides a container for information that can be queried via the LDAP protocol and matching clients. Some people compare this with a phone book, but the comparison is tenuous. Although an LDAP server can contain contact information for the company, it can also be enriched with additional information. Ultimately, however, the type of information is not specified, so the directory could accommodate a product catalog or an inventory list.

A directory server is always useful whenever information is to be stored in a tree-like structure with corresponding sub-branches. The tree structure is referred to as the DIT (Directory Information Tree). Each item of information stored within the tree can contain a set of attributes, some of which are mandatory and others optional. The schema determines which attributes are available. The OpenLDAP server provides its own configuration in a DIT, for example.

In this article, you will learn how to install and commission OpenLDAP
server version 2.4.23 on CentOS 6.5. As an example, I will authenticate users of a web server, although the configuration can also be extended for operating systems or other services. At the end of the article, you will have a fully functional LDAP server for the enterprise that is easily extensible and reflects the current state of CentOS 6.5 and OpenLDAP 2.4, without having to piece together the configuration.

Installing the OpenLDAP Server

Installing OpenLDAP is easy. All required packages are found in the CentOS repositories and are thus available in any CentOS installation without further change. Using the Yum package manager, the install takes just a single command line:

$ sudo yum install openldap-servers openldap-clients httpd ldapvi

The first two packages are self-explanatory and are required to install and manage the OpenLDAP server. The web server, httpd, is used in the course of the tutorial to demonstrate authentication and authorization of a web server location against the LDAP server. The ldapvi tool is a universal, command-line LDAP client ideal for smaller administrative tasks.

Configuration via OLC

OpenLDAP version 2.4, which is the one in the CentOS repository, uses OLC (online configuration); you will repeatedly see references to the cn=config method, which means the same thing. The cn=config model stores the configuration data on the LDAP server, which is processed by the LDAP client tools. In the old model, the OpenLDAP server was still managed via a central configuration file. The reasons for the change, which at first sight make everything more complicated, become apparent upon second inspection.

Changes can be made on the fly, without requiring a server restart. Especially for larger installations, however, restarting is relatively time-consuming and can take several minutes. Additionally, the LDAP server loses its cache, which resides in main memory, on a restart. Requests then take longer until the cache is repopulated. The dynamic configuration model removes the need for reboots and ensures availability of the LDAP server.

Once you understand the concept behind the new configuration model, it also feels more coherent. In the new model, the configuration files are stored in a directory tree below /etc/openldap/slapd.d; this takes priority over an /etc/openldap/slapd.conf configuration file that you might happen to have. In this article, I focus exclusively on the new model.

After the installation, some steps are required before you can start the LDAP server daemon, slapd. First, you must decide on a data back end. OpenLDAP supports a variety of back ends, starting with Berkeley DB (BDB; the default), MySQL, Memory databases, or even Perl data structures. In the sample configuration here, the LDAP server uses BDB.

The following steps must be run with root privileges. Instead of working directly as the root user, all commands run with sudo. The OpenLDAP server includes a configuration template to help you create an initial database. Copy it to the data directory on the LDAP server:

/var/lib/ldap/DB_CONFIG

The template contains information about cache size and database logfiles. However, all these values can be modified later on, thanks to the dynamic configuration model. After copying the file, check to see that the OpenLDAP server is configured properly and then start it with:

$ sudo slaptest -u
$ sudo chkconfig slapd on
$ service slapd start

Voilà, the OpenLDAP server is ready for an initial configuration. Currently, even though the server is running, no one can connect.

Authentication

First, create the LDAP RootDN password. RootDN is the top node in an LDAP directory and can basically change all the nodes below it – it's practically the root user of the LDAP system. The password can be generated using the slappasswd command. The following command sets the password to secret and returns the SHA hash of the password at the command line:

$ sudo slappasswd -s secret
{SSHA}f0pv70XFFox5UqKc6A4Uy39NcxkqcJbc

The complete line returned will be needed later for the configuration file, so copy it to a temporary editor window.

At the present time, the slapd daemon still has a very rudimentary configuration and cannot fulfill any meaningful tasks. Nevertheless, looking at the directory information tree in its current form can be educational. Once the LDAP server is running, you can formulate a search query using the ldapsearch command and send it to the server:

$ sudo ldapsearch -b cn=config -Y EXTERNAL

The ldapsearch command uses a socket (ldapi) to connect to the LDAP server and start a search. The first part of the server's response looks like this:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
The authentication method is specified by the -Y EXTERNAL option. It tells the LDAP server not to authenticate against the data on the LDAP server, but to make the decision on the basis of the user ID or other criteria. A standard for this form of authentication is already pre-configured in OpenLDAP, and that standard is used here.

The user with the user ID 0 and the GroupID 0 can log on (i.e., the root user). This is why the command needs a sudo prefix. Right now, this is the only way to log on to the LDAP server.

The other lines of output provide information about the search and the type of output:

# extended LDIF
# LDAPv3
# base <cn=config> with scope subtree
# filter: (objectClass=olcDatabaseConfig)
# requesting: olcDatabase

The extended LDIF format is the default in LDAP protocol version 3. The base line indicates where the search started. In this case, the node with the common name (cn) config was searched. The scope (i.e., the search area) provides information about the depth of the search. The subtree keyword indicates a search for everything below the base. The search filter was set to an objectClass – in this case, olcDatabaseConfig. An object class in LDAP is a data container that is filled with attributes.

Additionally, you can see the two most interesting databases in the output:

# {0}config, config
dn: olcDatabase={0}config,cn=config
olcDatabase: {0}config
# {2}bdb, config
dn: olcDatabase={2}bdb,cn=config
olcDatabase: {2}bdb

The first element is the configuration database, then comes the ({2}bdb) database, which will later take the user data. The response to the request is rounded off by some statistics on the search and the results.
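Changes to this dynamic configuration are applied with ldapmodify rather than by editing files on disk. As an illustration only, an LDIF file such as the following could point the {2}bdb database at the tree used later in this article; the suffix, RootDN, and password hash are taken from the examples in this article, everything else is an assumption:

dn: olcDatabase={2}bdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=acme-services,dc=org
-
replace: olcRootDN
olcRootDN: cn=manager,dc=acme-services,dc=org
-
replace: olcRootPW
olcRootPW: {SSHA}f0pv70XFFox5UqKc6A4Uy39NcxkqcJbc

Saved as rootdn.ldif, it would be applied over the local socket with the same EXTERNAL authentication used above:

$ sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f rootdn.ldif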
…within the LDAP branch, specifies the type of change, and concludes with the new value.

…each user is allowed to authenticate and then change their own data and view all data – except for the user…

…root user. The first script builds a certificate database in the /etc/openldap/certs directory:

$ sudo /usr/libexec/openldap/create-certdb.sh
Creating certificate database in /etc/openldap/certs

…-n "OpenLDAP Server" -a > /etc/pki/tls/certs/ldap.acme-services.org.crt

This second approach describes the use of your own certificates that will not reside in the OpenLDAP certificate database. It often makes sense not to store certificates there (e.g., if other services need to use them). CentOS stores certificates in the /etc/pki/tls/certs directory and the keys in /etc/pki/tls/private. It is important to ensure that the group that has access to private keys includes the OpenLDAP server. The OpenLDAP server runs as the ldap user and is a member of the ldap group. It is therefore advisable to assign the private key file to the owner root and the ldap group. The permissions need to be set to 640.

$ sudo ln -sf /etc/pki/tls/certs/ldap.acme-services.org.crt $(openssl x509 -in ldap.acme-services.org.crt -noout -hash).0

After these two steps, you can check the server certificate with the openssl command, which provides the s_client command for this purpose:

$ openssl s_client -connect ldap.acme-services.org:636

The certificates are output on the console. The decisive thing here is that the last line should read 0 (ok), given a correct configuration. The command now waits for input. However, since you only want it to test the certificate, you can cancel by pressing Ctrl+C.

Unfortunately, the OpenLDAP server is not aware of this key in the default configuration, so you need to configure this separately. Listing 2 shows a typical configuration. Using the ldapmodify command, you pass the file to the LDAP server, as you do when parsing the initial configuration. Then, for the first approach, set the SLAPD_LDAPS variable in the /etc/sysconfig/ldap file to yes and restart the server. Make sure the private key is not protected with a password.

For the server to use the new SSL connections, it is helpful to modify the configuration file for the LDAP client, /etc/openldap/ldap.conf (Listing 3):

TLS_CACERTDIR /etc/openldap/certs
TLS_REQCERT allow

When modifying the configuration file, note how to set up SSL encryption. In particular, you may need to adapt the TLS_CACERTDIR parameter to point to the system-wide certificate database. The server configuration is now complete, and the user database setup can begin.

Figure 1: The complete LDAP directory tree.
Listing 4: base.ldif
01 dn: dc=acme-services,dc=org
02 dc: acme-services
03 objectClass: top
04 objectClass: domain
05
06 dn: ou=people,dc=acme-services,dc=org
07 ou: people
08 objectClass: organizationalUnit
09
10 dn: ou=groups,dc=acme-services,dc=org
11 ou: groups
12 objectClass: organizationalUnit
13
14 dn: ou=systems,dc=acme-services,dc=org
15 ou: systems
16 objectClass: organizationalUnit
17
18 dn: uid=uhabel,ou=people,dc=acme-services,dc=org
19 objectClass: person
20 objectClass: organizationalPerson
21 objectClass: inetOrgPerson
22 objectClass: posixAccount
23 cn: Ulrich Habel
24 gidNumber: 100
25 homeDirectory: /home/uhabel
26 sn: Habel
27 uid: uhabel
28 uidNumber: 1000
29 userPassword: {SSHA}f0pv70XFFox5UqKc6A4Uy39NcxkqcJbc
30
31 dn: cn=vcsldap,ou=groups,dc=acme-services,dc=org
32 objectClass: groupOfUniqueNames
33 objectClass: top
34 cn: vcsldap
35 uniqueMember: uid=uhabel,ou=people,dc=acme-services,dc=org
36
37 dn: cn=httpd,ou=systems,dc=acme-services,dc=org
38 objectClass: inetOrgPerson
39 objectClass: organizationalPerson
40 objectClass: person
41 objectClass: top
42 cn: httpd
43 sn: httpd Webserver
44 userPassword: {SSHA}f0pv70XFFox5UqKc6A4Uy39NcxkqcJbc
Directory Structure

In this section, I will be creating the tree for the user data. The top level of this tree has already been established and consists of two domain components (DCs): dc=acme-services and dc=org. All new structures are inserted below this point. The tree should look …

The following command parses the LDIF file:

$ ldapadd -x -W -D cn=manager,dc=acme-services,dc=org …

…LDIF file, the information tree shown in Figure 1 is completely initialized.

Test and Try

…tools. An LDAP search using the command-line tools is tedious. In this example, the user uhabel logs on and looks for objects that have any object class. The result will be all objects of the LDAP server, because each object must at least have an object class:

$ ldapsearch -D uid=uhabel,ou=people,dc=acme-services,dc=org -W -x '(objectClass=*)'

After entering the password, all entries are listed. The user password attribute is only displayed for user uhabel; it remains hidden for the other users. In principle, it is possible to output each node in …

…connects the LDAP search with the vi editor, thus supporting simple changes. When you save and quit the editor, an LDIF file is created and then applied. The ldapvi tool's command-line options are similar to those of openldap-client, which I already looked at:

$ ldapvi …
…the httpd user account on the LDAP server, which I created earlier.

$ curl -sL -w "%{http_code} %{url_effective}\n" …

…the Eclipse platform, the client computer definitely needs a few mega…
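To close the loop on the web server scenario mentioned at the beginning of the article, an Apache location can authenticate against this DIT with mod_authnz_ldap. The following is only a sketch that reuses the DNs from Listing 4; the location path and the bind password are assumptions, not values from the article:

<Location /intranet>
    AuthType Basic
    AuthName "ACME intranet"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://ldap.acme-services.org/ou=people,dc=acme-services,dc=org?uid?sub?(objectClass=*)"
    AuthLDAPBindDN "cn=httpd,ou=systems,dc=acme-services,dc=org"
    AuthLDAPBindPassword "secret"
    Require ldap-group cn=vcsldap,ou=groups,dc=acme-services,dc=org
</Location>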
Group iRony
Open standards and open source are requisite in Kolab groupware. The alpha release of version 3.1
hugely extended the number of compatible clients with the CalDAV and CardDAV protocols, making
Kolab data available on iOS and Mac OS X and in Thunderbird and Evolution. By Andrej Radonic and Markus Feilner
As mobile clients become more popular, admins find it increasingly difficult to avoid offering remote access to calendars and address data to their users. Despite being open source and open standards-driven groupware, Kolab [1] has had little to offer in this scenario. Only its own clients (Kontact; the Horde web interface and, later, Roundcube; and some proprietary Outlook plugins) can use the central calendar and address book. Mobile devices needed Microsoft's ActiveSync protocol or (up to Kolab 3) the outdated SyncML to connect with the open source groupware.

As of version 3.1, the Kolab team from Swiss-based Kolab Systems added the CalDAV and CardDAV open protocol standards for the exchange of appointments, tasks, and contacts to the server, allowing admins to turn to many existing groupware-enabled clients, such as the Lightning Thunderbird extension, Evolution, OS X applications like Apple's iCal (Calendar since Mountain Lion), as well as iOS and Android systems. For this to work, the Kolab team did not reinvent the wheel; rather, they integrated the proven PHP package by SabreDAV [2] (Figure 1), which im…
# Install EPEL
rpm -Uhv http://url/to/epel-release.rpm
Default Configuration Pitfalls

Once installed and set up, you should be able to access the administrative GUI at http://<Kolab-Server>/…

Figure 2: iRony needs a little convincing to reveal its inner structure.
Alternatively, you can even leave out the subdirectory in theory; after all, standards-compliant clients first look for the principal address below ../.well-known/caldav. The Kolab iRony configuration for Apache includes corresponding redirects that beam this call to /iRony/. The username requested in this way also helps locate the correct user directory – in theory. However, this does not always work smoothly.

Figure 3: Kolab authorization – Thunderbird, Lightning, and Kolab cooperating via SabreDAV.
Kolab publishes its DAV services on http://<Kolab-Server>/iRony. This step initially returned a URL exception (Listing 2), which was easy to fix with some help from the Kolab mailing list: In the iRony configuration file (/etc/iRony/dav.inc.php), just replace the value of the $rcmail_config['base_uri'] configuration parameter with '/iRony/' and restart the Kolab daemon by typing service kolabd restart. After this, Kolab is ready to exchange data via the iRony protocol stack.
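Expressed as a file excerpt, the fix looks roughly like this (the rest of dav.inc.php stays untouched):

// /etc/iRony/dav.inc.php
$rcmail_config['base_uri'] = '/iRony/';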
Trying to Connect

One item on the checklist involves your chosen client trying to connect with Kolab using one of the DAV protocols. Basically, you need to enter the Kolab server URL using the following pattern:

http://<Kolab-Server>/iRony

Fractious Interface

When you first call http://<Kolab-Server>/iRony, the Kolab interface turns out to be quite recalcitrant, responding to calls with the message shown in Listing 3. Although this response only shows that the interface is working correctly and is therefore not regarded as a fault, it is extremely impractical for test purposes. To improve the module's behavior, you need to add a SetEnv DAVBROWSER 1 to the Apache configuration for iRony in /etc/httpd/conf.d/iRony.conf. After an httpd restart, you can finally call the iRony URL and explore the user-specific directory structure (Figure 2).

Thunder, Lightning, and Evolution

In practice, widespread groupware clients, such as Lightning (Thunderbird) and Evolution, tend to be pretty quirky in this scenario, in that they do not – or at least do not correctly – support service discovery. For clients affected by this issue, users need to specify the full calendar or address book URL. If users have several calendars, they need to configure each calendar separately. Again, it is evident that many mail clients were never intended for enterprise groupware operations (e.g., Thunderbird) or that too few developers work on it (Evolution).

Unfortunately, users currently can only display the ICS (iCalendar) address in Roundcube, not the CalDAV URL of each calendar. You thus need a different approach to …

Figure 4: Roundcube, the Kolab webmail client on the left; the same calendar in Thunderbird on the right. Because Mozilla's mailer was not meant for corporate use, it shows weaknesses as a groupware client.

Figure 5: Evolution works – with some restrictions: Creating contacts does not work, but the calendar does. All told, not so many developers seem to be working on the Gnome groupware anymore.
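The change described above amounts to a single extra line in the iRony Apache configuration, roughly:

# /etc/httpd/conf.d/iRony.conf
SetEnv DAVBROWSER 1

followed by a service httpd restart.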
Thunderbird as Client

Now you can install Lightning as the calendar plugin [5] for Mozilla's Thunderbird email client and connect with the groupware. The latest Thunderbird is advisable, as is a recent version of Lightning. In the taskbar, choose New | Calendar; then, in the configuration dialog, select Network as the type and CalDAV as the format before defining your calendar with the CalDAV URL you determined previously.

Finally, Thunderbird prompts you for user data, as shown in Figure 3. If the data is correct, Lightning synchronizes your appointments with the Kolab server without complaint (Figure 4). However, many features are still missing, such as support for tasks. If Thunderbird users want to access Kolab address books in their mail clients, they need to install SOGo Connector [6], an additional plugin that retrofits the CardDAV support that the Mozilla mailer lacks.

Figure 6: The Windows eM Client tool provides the best connection to the CalDAV/CardDAV server.
EM Client on Windows

The Windows-only eM Client [9] (Figure 6) shows that cleanly implemented web interfaces also exist in the Microsoft world. Autodiscovery works without ado; you just …

Figure 7: The calendar on an iPad. iRony helps owners of Apple devices connect to Kolab the first time.
Listing 4: kolab-caldav-vhost.conf
01 <VirtualHost *:443>
02 ServerName caldav.yourkolab.com
03 ServerAdmin [email protected]
04
05 DocumentRoot /usr/share/iRony/public_html/
06
07 ErrorLog logs/caldav.yourkolab.com‑error_log
08 CustomLog logs/caldav.yourkolab.com‑access_log combined
09
10 <Directory "/usr/share/iRony/public_html/">
11 AllowOverride All
12 Order Allow,Deny
13 Allow from All
14
15 RewriteEngine On
16 RewriteBase /
17 RewriteRule ^\.well‑known/caldav / [R,L]
18 RewriteRule ^\.well‑known/carddav / [R,L]
19
20 RewriteCond %{REQUEST_FILENAME} !‑f
21 RewriteCond %{REQUEST_FILENAME} !‑d
22 RewriteRule (.*) index.php [qsappend,last]
23
24 </Directory>
25
26 </VirtualHost>

Figure 8: The iPhone Reminders app syncs all tasks with the Kolab server upon request.
Clear Skies
Users who lose interest in websites that don’t respond in the expected time take their clicks
elsewhere. We look at ways to improve your WordPress website performance. By Joseph Guarino
The vast majority of web surfers are spoiled with DSL high-speed broadband and fiber connectivity, yet sometimes users still experience slow page loads. Statistics show that consumers are quite responsive to slow page load times, and those who create and maintain websites generally acknowledge that website performance matters.

In this article, I will explore how you can improve your WordPress website and make it deliver optimally with the use of Amazon CloudFront [1] and a potent plugin, W3 Total Cache [2]. How your site performs can help or hinder your success online, whether you have a simple WordPress website, blog, or elegant e-commerce effort.

According to an Akamai study on web performance [3]:
• 47% of people expect a website to load in two seconds or less.
• 40% will abandon a web page if it takes longer than three seconds to load.
• 79% of online shoppers who experienced a lackluster experience indicated they were less likely to buy from the site again.

Why Load Time Matters

At this point, it should be abundantly clear that users respond negatively to an underperforming website; therefore, it is imperative that you work to improve site performance. Slow load times can have other negative effects on your web efforts, such as:
• Bad User Experience – You want visitors to your site to have a positive perception and response to your efforts, but a slow site damages that potential. When your site lags behind established norms, it can affect the user experience, your brand, and your very online success.
• E-commerce Abandonment – As shown in the statistics above, e-commerce is very sensitive to load times. Every second counts, with costs in lost visitors and, more importantly, sales. Bottom line, you can improve your online efforts or lose out to competitors.
• Low Conversions – Faster load times mean more conversions. Conversions can be any action you want the user to complete, such as a sale, joining a mailing list, or getting a follow on Twitter. Your success can be negatively affected by a slow site that makes these valuable opportunities disappear.
• High Bounce Rate – A slow-loading site leads to a high bounce rate, meaning more users "bounce" or leave your website rather than continue on it. Obviously, this is a lost opportunity for a conversion, sale, or whatever outcome you seek.

Bottlenecks Abound

Website administrators today can build their sites in so many different ways, including building everything on standard hardware, taking advantage of virtualization, or using a hosted solution (shared, VPS, etc.). Other supernumerary private/public cloud environments include market players such as Amazon, Google, Rackspace, Red Hat, and so on.
No matter how you build, you certainly don't lack choice, but sometimes you can improve on performance and scalability. Without planning, WordPress can present some thorny performance issues.

The problems of website performance are literally everywhere: in infrastructure, protocols, web server/database configurations, networking gear, network latency, and on and on. I could spend 1,000 pages discussing the problems that influence page load time and never address the bigger picture. In this article, I will focus on a simple way to improve WordPress performance via caching and a Content Delivery Network (CDN).

Caching/CDN-Powered WordPress

In the case of WordPress, a key performance factor is that pages, posts, comments – almost everything – is dynamically generated with each user visit. For example, if you visit a WordPress site, PHP code pulls data stored in a MySQL database and generates a page that you then view in your browser. This process sounds slow, and it certainly can be. Thankfully, with a CDN and caching, you can dramatically improve this situation. With caching, you take some of the overhead of dynamically generated pages and reduce this potential bottleneck.

Additionally, you can move some commonly accessed content into a CDN, dramatically improving load times. The CDN moves the content closer to the user on more well-connected, low-latency servers, which is a win all around. Before I continue, I'll explore the basics of a CDN.

What Is a CDN?

A content delivery network acts as a distributor. It moves the job of distributing content that would be served by your web server (or more likely server farm) to more efficient servers that are geographically dispersed and closer to the user visiting your site. Content is algorithmically directed to a more efficient infrastructure for delivering it. It is more efficient than traversing the same network to the same old web server and downloading files. The CDN directs a client (browser) to download files from a more local source within the CDN network. This has the effect of dramatically reducing the page load time.

The CDN copies the content placed in it to the geographically dispersed nodes of its CDN network. When a server in Boston, Massachusetts, gets a visitor from Paris, France, latency becomes an issue. If they were to download all of the resources from the site in Boston, the performance might not be optimal. With a CDN and caching in place, the French visitor can load the content placed in the CDN from a more local server, dramatically improving load time. This means more optimized web performance and a generally happier end user.

Types of CDNs

The origin push type of CDN requires a manual push or upload of content into the CDN. That is to say, the user (you) manually pushes content into the CDN and then links to it on your site. This allows more control over what is uploaded and minimizes uploaded content. An origin pull CDN doesn't require a manual upload of the content to the content network. Instead, the CDN automatically loads files without user intervention. That is to say, it tests to see whether it has the requested content, and if not, it automatically loads it from the original server. Origin pull has the benefit of automating the propagation of your content into the CDN and therefore is optimal for caching
and CDN configuration. Herein I will describe an origin pull CDN in Amazon CloudFront. In this case, your CDN will also automatically rewrite URLs to point to those in the CDN and not on the original web server.

Baseline Your Website

Taking a baseline of your site performance before you start is vital. In the interest of limiting the scope of this article, I'll use Google PageSpeed Insights [4], which you can use to test any site for which you would like to improve the performance (Figure 1). With your page speed results, Google provides copious detail on what you can improve upon [5]. These details give clear tips on what you need to work on to improve your site performance.

Figure 1: My Pre-optimized Google PageSpeed Insights site scores: desktop 75/100; mobile 62/100.

Performance Boosters

To improve your site responsiveness, you can use Amazon CloudFront along with the powerful WordPress plugin W3 Total Cache. A quick overview of the features and functionality of each is in order.
CloudFront is an impressive content delivery web service from the cloud giant Amazon. It offers low-latency, high-speed data transfer by serving user requests with a global network of edge locations. As a CDN, it is a powerful, cost-effective way to boost the performance of your WordPress site. CloudFront offerings and W3 Total Cache make for even more WordPress bliss.
W3 Total Cache (W3TC) is a web performance optimization plugin for WordPress. It employs a variety of features that boost the performance of a WordPress installation. All of these compelling features together mean that a webmaster with few resources can dramatically improve the performance of a site, regardless of whether it is hosted on inexpensive shared hosting, a VPS, or a cloud instance. This plugin packs a great performance punch that I will explore later. For now, I'll demonstrate the first step of setting up Amazon CloudFront.

Setting Up Amazon CloudFront

For the sake of focus, I'm going to assume you have an Amazon account set up and ready to go. If not, you need to create an Amazon account [6] and then log in to it [7]. Next, click on Create Distribution (Figure 2), and select Web | Continue. Under Origin Settings, enter the full address for your website (rather than my example site www.m0nk3y.biz) and a descriptive Origin ID (Figure 3). Leave all the other options at the defaults and scroll down
to Distribution Setting, where you add your CNAME settings.

Adding a CNAME

Stop at Alternate Domain Names (CNAMEs) settings (Figure 4). This option allows you to use a CNAME you set up in your DNS servers rather than the autogenerated CloudFront names (e.g., d1234.cloudfront.net). Finally, click Create Distribution. This can take some time, so grab a coffee or, if it's after 5pm, a nice cold one. Once this process has completed, you will see CloudFront Distributions with the distribution you have just created and a status listed as InProgress. Be patient; it could take a few minutes until the status switches to Deployed. Next, simply select the checkbox next to your new distribution and click on Distribution Settings (Figure 5).

Set Up A CNAME Record in Your DNS Server

At this point, you need to go to your DNS server and add a CNAME record to map the Amazon CloudFront server (listed in Figure 5 after Domain Name) to the name you specified
earlier. Simply copy the full hostname from the Domain Name listed and use this to create a CNAME record. In this example, cdn.m0nk3y.biz should point to d3bti9iv8fnppw.cloudfront.net. Because of supernumerary DNS servers, as well as cloud or hosted DNSs, I'll assume you added a CNAME record as needed in your DNS server; note that it could take time for these changes to propagate. Then, to validate that your DNS changes have been made, simply dig for that record. If you see that it points to the correct location, you are ready for the next step.
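A quick way to run that check from a shell is shown below; this is a sketch using the example hostnames from above, so substitute your own CNAME and distribution name:

# Ask DNS which target the CDN hostname points to
dig +short cdn.m0nk3y.biz CNAME

Once the change has propagated, the query should print the CloudFront hostname (here, d3bti9iv8fnppw.cloudfront.net).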
Amazon IAM

Amazon Identity and Access Management (IAM) [8] allows you to control access to all your Amazon Web Services (AWS) [9] services and resources. You can use IAM to create users and groups, just as you would in any operating system, to control access to services and resources. The rule of least privilege is a maxim for a reason. Just as on a Linux server you generally don't log in as root (or run services as root), so too with your Amazon AWS root account. Your AWS root account credentials are for managing your AWS configuration and not for access to the APIs, CLI, or, in this case, your AWS CloudFront bucket. Here, you are going to create a specific user for accessing your bucket.
To add an IAM user [10], go to the upper right-hand portion of the AWS console and click on <your username> | Security Credentials; then, on the far left, go to Users then Create New Users. Name the user whatever you would like and click Create (Figure 6). Once the user is created, go to the bottom of the page and click on the Permissions tab. Under User Policies, click Attach User Policy and select your desired policy template or create a more granular one with the Policy Generator [11]. If you don't already have multifactor authentication (MFA) [12] set up for your AWS root account, you should set it up immediately. You can use virtual MFA applications such as AWS MFA or Google Authenticator. Even better, use a hardware MFA key fob. Either way, I hope you are using 2FA/MFA everywhere at this point. Once AWS CloudFront is all set up, you can log in to WordPress and install and configure W3TC.

Installing W3 Total Cache

Installing a plugin in WordPress is as easy as pie, but this one has some tricky aspects, so follow along carefully. First, if you have any other caching plugin, you should disable and remove it – no need to create conflicts or other issues with multiple plugins stepping on each other's toes. Next, you temporarily need to

chmod 777 wp‑content/
chmod 777 wp‑content/uploads/

via FTP, SSH, or cPanel. From the dashboard, go to Plugins | Add New, type in W3 Total Cache, and click Install Now (Figure 7). Last, you need to activate the plugin and go back via FTP, SSH, or cPanel to change the permissions with:

chmod 755 wp‑content/
chmod 755 wp‑content/uploads/

If you forget to reset the permissions to 755, expect to be hacked or defaced or to become part of a cybercriminal's bot network: rwx for everybody is a recipe for disaster.

W3TC Configuration

Once W3TC is installed, the Performance link shows up in the left-hand panel, which allows you to configure W3TC. Before you begin, click on Performance and hit the compatibility
check button (Figure 8) to ensure you have the dependencies you might need to deploy various features. Then, simply go to General Settings (Figure 9). Unless you are installing this in a server over which you have full management control (i.e., root access), it is important to research what your provider supports. Taking a moment to speak with a shared or managed hosting provider will help assure that you can configure W3TC as you desire (Table 1). After the core configuration is done, you can set the basic W3TC settings (Figure 10).
In this example, I enable page caching on the Page Cache screen by selecting Enable and Page cache method: Disk: Enhanced. On the Minify screen, I can optimize the size of HTML, CSS, and JS, improving site performance. Next, enable Minify by selecting the Enable radio button and checking Auto next to Minify mode. Leave all other settings at their default values.
Because you are working in a shared hosting environment in this example, you should disable the Database Cache and Object Cache by making sure the Enable box isn't checked. However,
the Browser Cache Enable box should be checked to enable HTTP compression, which reduces server load and site load time. Under CDN, select the Enable checkbox, and for Origin Pull/Mirror, choose Amazon CloudFront from the drop-down box. Click Save all settings to finish (Figure 11). You can safely ignore all other options on the remainder of the W3TC page because they are not relevant to the configuration for this example. From here, you move on to finalizing your CDN configuration.

Configuring W3TC

Now that you have enabled your CDN, you need to configure it. Again in the left panel, select CDN and go to the Configuration section: Enter the Access Key ID and your Secret key and CNAME information (Figure 12). Now simply click Test CloudFront distribution, and it should return Test Passed. You are now good to go, so click Save all settings. Now that W3TC is all set up, you need to give your site a moment to copy its contents into the CDN.
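Once the copy has had time to happen, one quick way to confirm that CloudFront is really serving your static files is to request an object through the CDN hostname and inspect the response headers. This is a sketch; the CNAME is the example from above and the object path is hypothetical – use any file you know exists in wp-content/uploads:

# Fetch only the headers of an object served through the CDN
curl -I http://cdn.m0nk3y.biz/wp-content/uploads/example.png

CloudFront adds an X-Cache header to its responses; after a first "Miss from cloudfront," repeated requests for the same object should report "Hit from cloudfront."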
Figure 13: Post-optimized Google PageSpeed Insights site scores: desktop 88/100; mobile 73/100.
Jailbreak
A common misconception posits that software
cannot cause mischief if you lock the system away
in a virtual machine, because even if an intruder
compromises the web server on the virtual machine,
it will only damage the guest. If you believe this, you
are in for a heap of hurt. By Tim Schürmann
Virtual machines give the impression of a small jail, but administrators should not be fooled by this idea. As early as the Black Hat security conference in 2011, Nelson Elhage presented a breakout vector [1] that exploited vulnerabilities in the contemporary versions of KVM or Qemu. Installing the Virtio driver on the guest also allowed a breakout by exploiting existing or undiscovered bugs. Once malware gains control of the host system, it can also directly hijack and control the other virtual machines running there. Administrators should therefore take care to keep KVM, Qemu, and the Virtio drivers up to date at all times. This is especially true for Windows guests, which – in contrast to Linux distributions – cannot update Virtio drivers automatically.

Walled In

To guard against a breakout, admins need to build virtual safety perimeters around their virtual machines. It helps that the virtual machines appear to be normal processes from the host system's point of view. These processes in turn can be regulated by SELinux, AppArmor, or some other mandatory access control system. The sVirt component in libvirt 0.6.1 and newer actually does some of the work for SELinux and AppArmor [2]. For example, in SELinux, sVirt attaches labels to virtual machines, which can then be isolated selectively.
You also can lock virtual machines in cgroups and thus control their resource consumption and access. Incidentally, this practice protects you against a crashed machine running wild or using too much CPU time, or a DoS attack blocking network access to other virtual machines. Normally, libvirt automatically produces a corresponding cgroup hierarchy [3]; the access to resources can be controlled in a targeted way using the virsh tool. The following command, for example, restricts the computational time for a virtual machine called webserver to 100 shares (this is often equivalent to about 10 percent of the computing power):

virsh schedinfo U
‑‑set cpu_shares=100 webserver

However, virsh and KVM do not let you regulate all of your resources. The virsh man page lists the options below schedinfo.

HR Department

If you boot the virtual machine directly using sudo qemu‑kvm ‑m 512 …, you are giving the VM – and any program that breaks out of jail – system privileges (Figure 1). Fortunately, root privileges are not necessary: Most distributions give normal users from the kvm group access to /dev/kvm. It is therefore perfectly okay to add your own user account to the kvm group and then use this account to run qemu‑kvm … in the future. If you give the virtual machine access to the network via a TAP interface, you must also create the interface for the user and group under which the virtual machine will be running. Usually, this is qemu in the qemu group.
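Adding an account to the group is a one-liner on most distributions; this is a sketch for a hypothetical user alice, and the group name can vary between distributions:

# Add the user alice to the kvm group (takes effect at next login)
usermod -aG kvm alice

After logging in again, that account can start qemu‑kvm without resorting to sudo.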
Alternatively, you can use the libvirt tools to manage your virtual machines. They automatically start a virtual machine as the non-privileged qemu user. A number of pitfalls exist here, however: On some distributions, such as Ubuntu, all users in the libvirtd group are allowed to run virsh. However, if you have access to virsh, you can use it not only to manage all of the virtual machines, you can also use virsh nodeinfo to query the host's hardware specs (Figure 2). Thus, virsh and the like should only be used by selected administrators. If you integrate virsh into your scripts, you should also check the rights these scripts have.
Furthermore, the libvirtd daemon running on the host system likes to have root privileges. If attackers were able to contact libvirtd, they could hijack the virtual machine and, in the worst case, the whole system. Consequently, you need to limit access to libvirtd. In particular, you must make sure that the /etc/libvirt/libvirtd.conf configuration file does not have a listen_tcp = 1 line. In this case, libvirtd would accept TCP connections from anyone – assuming the firewall on the host system does not block the appropriate port (by default, 16509).
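A quick sanity check on the host – a sketch, not a complete audit – is to confirm that the option is disabled and that nothing is listening on the libvirtd TCP port:

# Show the listen_tcp setting (it should be absent or commented out)
grep -i listen_tcp /etc/libvirt/libvirtd.conf
# Verify that no process is listening on the default libvirtd TCP port
ss -tlnp | grep 16509

On a host that only allows local or SSH-tunneled connections, the second command should return nothing.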
Personals

Administrators also should use encryption. libvirtd and its tools optionally can tunnel their communication via SSH, use authentication via SASL/Kerberos, or open an encrypted SSL/TLS TCP connection. In most cases, an SSH connection is probably the best bet, which means running an SSH server on the host system. Because root privileges are needed, the connection is opened with, for example:

virt‑manager ‑c U
qemu+ssh://[email protected]/system

Smugglers

It is dangerous simply to pass host devices through to the virtual machine. If you mount a filesystem on the virtual machine that other computers on the LAN can also see, the infected guest could use this pass-through to distribute malicious programs, change configuration files, or destroy existing documents. The valuable project files on the NAS would then be goners. Administrators should therefore carefully consider which filesystems each virtual machine depends on. Ideally, the virtual machine uses only its own image files.

Figure 2: The virsh command also provides information about the physical hardware of a remote node.
qemu‑kvm ‑monitor U
tcp::4444,server,nowait ...
qemu‑kvm ‑monitor U
tcp:192.168.100.21:4444,U
server,nowait ...
Figure 4: The administrator can serve up the monitor … Figure 5: … to everyone on the network.
vncviewer 192.168.100.21:5901
Anyone who gets hold of the image files can analyze the installed system at leisure – this is particularly true for the RAW format. Administrators should thus ensure that no one gains access to the images. It also makes sense to encrypt the images. To do this, you can store the images on a filesystem encrypted by the host system and let the guest system encrypt its own (virtual) disk.
Furthermore, KVM or Qemu can also encrypt an entire image with the AES algorithm. Encryption itself is completely transparent for the guest. However, the image must be in the qcow2 format, and Qemu currently only uses a 128-bit key. An encrypted image is created, for example, using qemu‑img (Figure 7).
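As a minimal sketch (image name and size are arbitrary placeholders), such an image can be created with the legacy qcow2 encryption option:

# Create a 20GB qcow2 image with the built-in AES encryption enabled
qemu-img create -f qcow2 -o encryption=on encrypted.qcow2 20G

Qemu then asks for the passphrase when the image is first used, for example when the virtual machine is started.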
Caught in the Net

Once a virtual machine accesses the intranet or Internet, it can fall victim to DoS attacks, port scans, and intrusion attempts, just like any physical machine. Therefore, the same security measures should be taken: The web applications or services running on the virtual machine should always be up to date.
Even if a break-in succeeds, the malware must not spread via the network to the host computer, to other physical computers on the LAN, or to other virtual machines. The latter, in particular, is easy to achieve if all virtual machines are connected to the same (virtual) network (Figure 8). Firewalls must thus check the flow of data to, and especially between, the virtual machines. This practice is supported by the libvirt tools – in particular the nwfilter network filter driver [6].

Figure 8: When you want your virtual machines to be accessible from the outside, several points on the host need to be appropriately firewalled.

In the normal NAT operating mode (user mode, default network), only the guest system on the virtual machine accesses the network; remote attackers only see the host computer, which they can attempt to compromise. Conversely, however, the guest by default reaches its host on 10.0.2.2, which is also the address on which the built-in DHCP server runs. Although this setup is useful for exchanging data, malicious programs on the virtual machine can easily attack the host. The firewall on the host usually does not intervene here because the queries are running on the host system itself. It is therefore important that no vulnerable services are running on the host system. Ideally, the host only runs the virtual machines, and nothing else.
Qemu also lets you connect virtual machines via TCP and UDP sockets. However, these connections are not encrypted, and anyone who has access to the sockets can send data to the connected virtual machines. If you do use sockets, you should therefore at least choose a setup where the connections use the access restrictions offered by Qemu and only allow connections from specific IP addresses (parameters ‑net socket,connect=hostname,port).

Conductive

If you have set up port forwarding on the virtual machine, it should be limited to a single network interface (Figure 9). This approach makes it easier to distinguish between and identify the connections to the virtual machine and the host. Maintenance on the host computer then exclusively uses the eth0 interface, whereas access to the virtual machines only uses eth1.

Figure 9: Clearly isolated interfaces for the host and guest can be easily and securely managed.

The libvirt tools and the MacVTap driver can help here by directly connecting virtual machines with a network interface on the host system. Private and bridge modes of operation are of interest here, where the virtual machines cannot access the host system. In Private mode, even the virtual machines can no longer communicate with one another [7]. However, these actions do not mean that you can do without a firewall.
Qemu offers a built-in TFTP server that lets you quickly and easily share files between the host and guest system. However, communication is unencrypted, and there is no user authentication. Administrators will thus want to disable the built-in TFTP server and, instead, set up a properly secured FTP server or, preferably, SSH access on the guest.

Fide Sed Cui Vide

Administrators should always keep the guest system up to date and install only known software. Caution is also advised with prebuilt virtual machines (appliances): They could include malware from the outset. Anyone planning to migrate an existing system should examine it up front for malicious software. Otherwise, you
could also infect the new host system. If you boot the virtual machine over the network via PXE, the server contacted for this purpose must be trustworthy. This is especially true if you have automated the process of creating new virtual machines by scripting.
In a live migration, only the host systems involved should be allowed access to a shared filesystem or an NFS share. After the migration, you would then subsequently restrict access to the migrated system. Live migration should take place over encrypted connections, such as SSH.
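With the libvirt tools, such a migration can be tunneled through SSH. The following is only a sketch; the guest name webserver and the target host kvm2 are placeholders:

# Live-migrate the guest, tunneling the migration data through the SSH connection
virsh migrate --live --p2p --tunnelled webserver qemu+ssh://root@kvm2/system

The --p2p and --tunnelled pair sends the migration data stream through the libvirt connection itself (here SSH) rather than over a separate, unencrypted channel.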
A virtual machine that goes haywire can hog computing time, whereas a DoS attack on a virtual machine blocks the physical network connection. Both cases also paralyze all other virtual machines at the same time. It is consequently advisable to set up a monitoring system and to organize the virtual machines in cgroups, as already mentioned.
Virtual machines can be automatically started using the start scripts when booting the host system or using libvirtd. However, you should think twice about this approach: If you have a malicious program on the virtual machine, and the program manages to break out, it can access the system at a very early stage. In extreme cases, you have no opportunity to rein in and isolate the rogue virtual machine.

Conclusions

A virtual machine is not a practical jail. On the contrary, it even increases the number of attack vectors. Administrators should think of virtual machines as additional, physical computers on a network and thus initiate the same security measures or include the machines in their (existing) security concept. Guests also need regular updates and the same care as the host system. Finally, most defaults in KVM, Qemu, and libvirt are designed for security, and administrators should therefore change them only for a good reason.

Info
[1] Black Hat/DEFCON 2011 Talk: Breaking Out of KVM: [https://blog.nelhage.com/2011/08/breaking‑out‑of‑kvm/]
[2] sVirt: [http://selinuxproject.org/page/SVirt]
[3] Cgroups and libvirt: [http://libvirt.org/cgroups.html]
[4] Redirecting the serial interface: [http://qemu.weilnetz.de/qemu‑doc.html#index‑g_t_002dserial‑80]
[5] SPICE: [http://qemu‑buch.de/de/index.php/QEMU‑KVM‑Buch/_Anhang/_Spice]
[6] Firewall and network filtering in libvirt: [http://libvirt.org/firewall.html]
[7] Libvirt: Direct attachment to physical interface: [http://www.libvirt.org/formatdomain.html#elementsNICSDirect]
Fast Track
Microsoft has introduced several improvements to Windows Server 2012
and Windows Server 2012 R2 with its Server Message Block 3. Hyper-V
mainly benefits from faster and more stable access to network storage.
In this article, we look at the innovations. By Thomas Joos
The SMB protocol is mainly known as the basis for file sharing in Windows and is familiar to Samba and Linux users, too. Windows 8.1 and Windows Server 2012 R2 use the new Server Message Block 3 (SMB 3) protocol, which has several advantages over the legacy version. Although it was already introduced with Windows Server 2012, it was once again improved in Windows Server 2012 R2. Rapid access to network storage especially benefits enterprise applications, such as SQL Server and virtual disks in Hyper-V.
The disks of virtual servers can reside on the network with Windows Server 2012 R2, for example, on file shares. File-based storage – that is, keeping the data on file servers on the network – offers some advantages over block-based storage, including easier management. This is especially true if the files are stored in file shares, because you don't need to use external management tools or to change management workflows.
Windows Server 2012 R2 lets you use VHDX files as an iSCSI target. This means that Hyper-V hosts can store their data on iSCSI disks, which are in turn connected via SMB 3. VHDX files are also much more robust and allow sizes up to 64TB.
SMB 3 can forward SMB sessions belonging to services and users on virtual servers in clusters. If a virtual server is migrated between cluster nodes, its SMB sessions move with it. In addition to this continuous availability, SMB 3 also supports high availability.

New in SMB 2.0 and 2.1

SMB was initially developed by IBM and integrated by Microsoft into Windows in the mid-1990s via LAN Manager. Microsoft modified SMB 1.0 and submitted it to the Internet Engineering Task Force (IETF), and SMB was renamed to CIFS (Common Internet File System).
Microsoft immediately started improving SMB after taking it over from IBM. Microsoft added some improvements in version 2.0 of Windows Vista and version 2.1 of Windows 7, which brought further
improvements, especially for very fast networks in the 10Gb range. The version supports larger transmission units (Maximum Transmission Units, MTUs). Additionally, the energy efficiency of the clients was improved. Clients from SMB 2.1 can switch to power saving mode despite active SMB connections.
Improvements in SMB 3.0 include, for example, TCP window scaling and accelerations on the WLAN. Microsoft also optimized the connection between the client and server, and improved the cache on the client. With the new pipeline function, servers can write multiple requests to a queue and execute them in parallel. This new technology is similar to buffer credits in Fibre Channel technology.
Microsoft has extended the data width to 64 bits in the current version, allowing block sizes greater than 64KB, which accelerates the transfer of large files, such as virtual disks or databases. Additionally, the optimized connections between the client and server prevent disconnections on unreliable networks such as WLAN or WAN environments.

What's New in SMB 3.0?

In Windows Server 2012, Microsoft introduced SMB version 2.2 with further improvements. Later, these innovations were deemed so far-reaching that the version was subsequently increased to 3.0. One new feature in SMB 3.0, for example, is support for server-based workloads, such as Hyper-V and databases with SQL Server 2012/2014.
Hyper-V for SMB can now handle Universal Naming Convention (UNC) paths as the location of the control files on virtual servers. Also, virtual disks can now use UNC paths; that is, they can save files directly on the network. Simply put, this means that the location of the virtual disks on Windows Server 2012 or newer can be a UNC path, so you don't need to use drive letters or map network drives. For example, you can now address the data via a service name; there is no longer a requirement for a physical server name, as for a normal drive letter.

SMB 3.0 in the Enterprise

SMB 3.0 is much more robust, more powerful, and more scalable and offers more security than its predecessors, which is especially important for large environments. For example, SMB Transparent Failover allows clients to connect to a file server in a cluster environment. If the virtual file server is moved to another cluster node, the connections to the clients remain active. In the current version of SMB, open SMB connections are also redirected to the new node and remain active. The process is completely transparent to clients and Hyper-V hosts.
If virtual disks are stored on file shares in a cluster, services are no longer interrupted when the server is migrated. Another new feature in SMB 3.0 is SMB multichannel, in which the bandwidth is aggregated from multiple network adapters between SMB 3 clients and SMB 3 servers. This approach offers two main advantages: The bandwidth is distributed across multiple links for increased throughput, and the approach provides better fault tolerance if a connection fails. The technology works in a similar way to multipath I/O (MPIO) for iSCSI and Fibre Channel networks.
SMB Scale-Out file servers use Cluster Shared Volumes (CSV) for parallel access to files across all nodes in a cluster, which improves the performance and scalability of server-based services, because all nodes are involved. The technology works in parallel with features such as transparent failover and multichannel.
Additional SMB performance indicators let admins measure usage and utilization of file shares, including throughput, latency, and IOPS management reporting. The new counters in the Windows Server 2012 and Windows Server 2012 R2 performance monitoring feature support metrics for the client and server so that analysis of both ends of the SMB 3.0 connection is possible. These new technologies are useful for troubleshooting performance issues and for ensuring stable data access on the network.
The new SMB encryption lets you encrypt the data in SMB connections. This technology is only active if you use SMB 3.0 clients and servers. If you use legacy clients with SMB 2.0 and SMB 1.0 in parallel, encryption is disabled.

RDMA and Hyper-V

Normally, every action in which a service such as Hyper-V sends data over the network – for example, a live migration – generates processor load. This is because the processor has to compose and compute data packets for the network. To do so, in turn, it needs access to the server's RAM. Once the packet is assembled, the processor forwards it to a cache on the network card. The packets wait for transmission here and are then sent by the network card to the target server or client. The same process takes place when data packets arrive at the server. For large amounts of data, such as occur in the transmission of a virtual server during live migration, these operations are very time consuming and computationally intensive.
The solution to these problems is Direct Memory Access (DMA). Simply put, the various system components, such as network cards, directly access the memory to store data and perform calculations, which offloads some of the work from the processor and significantly shortens queues and processes. This approach, in turn, increases the speed of the operating system and the various services such as Hyper-V.
Remote Direct Memory Access (RDMA) is an extension of this technology that adds network functions. The technology allows the RAM content to be sent to another server on the network, as well as the direct access by Windows Server 2012/2012 R2 to the memory of another server. Microsoft already built RDMA into
The
Collector
Collectd 4.3 is a comprehensive monitoring tool with
a removable plugin architecture. By Martin Loschwitz
Collectd [1] is a familiar sight on Linux and Unix systems. The collectd developers bill the tool as "the system statistics collection daemon," which means it is like many other system monitoring tools that inhabit the network. Still, the simplicity, versatility, and portability of collectd make it the tool of choice for many environments.
For many users, the really impressive feature of collectd is its design and pervasive modularity. Everything that is available in terms of monitoring functionality comes exclusively from plugins that the collectd core just loads. Collectd is written in C and contains practically no code that would be specific to any single operating system, so it can operate on almost any Unix-style system. Additionally, it is extremely frugal: Because this tool requires very few resources, it also runs on minimal hardware like the good old Linksys WRT54G or a Raspberry Pi.
The goal of collectd is simply to gather statistics about the system and store the information. Florian Forster published the first versions of collectd [1] in 2005, and his work has been continued and extended by an enthusiastic community. Although collectd runs in many different environments, in everyday life, admins who rely on collectd for their monitoring needs are more likely to deploy classic server hardware on Linux. A commercial box is perfectly adequate and, no matter which Linux distribution it runs, collectd is ready in almost no time. Debian-based distributions include collectd as a package, and if you feel more at home on CentOS- or RHEL-based systems, you will find precompiled packages of the current version of collectd on the web.
Collectd, which is very easy to install, works on a simple client-server principle (Figure 1). A central server runs the most important collectd instance, but you also start an instance of the service on each host to be monitored. An exchange of data takes place between the many collectd instances and the master server. Read plugins collect the monitoring data on the monitored systems, and a write plugin then sends data to the collectd master instance via a separate protocol. The master evaluates and processes the incoming data.
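A minimal sketch of this split in collectd.conf could look like the lines below; the master's IP address is a placeholder, and the network plugin's default port 25826 is assumed:

# On each monitored host: collect CPU values and push them to the master
LoadPlugin cpu
LoadPlugin network
<Plugin network>
  Server "192.168.1.10" "25826"
</Plugin>

# On the master: listen for values arriving from the clients
LoadPlugin network
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>

This is only an illustration of the client-server principle described above; a real setup would load additional read and write plugins to match the data you want to keep.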
Automation Tool
Rex doesn’t need agents or a special language to describe the tasks it performs on remote computers. By Jens-Christoph Brendel
If you have to run standard tasks in an environment with a large number of systems (e.g., a compute cluster, a server farm, or a cloud environment), you might want a tool to help you save time and avoid duplication of labor. Logging in on each server and typing your commands hundreds of times manually is too slow, too error prone, and too inefficient. Many admins would rather have a tool that lets them run standard tasks on all clients in parallel, without typos and in a reproducible manner.
Tools such as Puppet, Chef, SaltStack, and Ansible provide this functionality through an agent running on the client, and a special description language lets the user define tasks or the target state. Rex takes a different approach. The Rex configuration management tool uses SSH as the transport medium and Perl as the command language, which means any computer can act as a Rex client without the need for additional software.
The fact that Rex doesn't rely on a client agent program also means the user won't run into conflicts between newer and older Rex versions. And you won't have to learn a new, specialized command language: As long as you know some Perl, you'll be ready to get started.

Installing Rex

Only the command center – the Rex host, which is sometimes called the Rex Control Master – needs a few software modules. By the way, the Rex developers call the program (R)?ex, which is totally unpronounceable, so I'll settle for plain old Rex for the rest of the article. You can install these software modules either via a package manager in Linux or FreeBSD or via Git. If you're using a package manager, you'll want to include the Rex [1] repository to leverage automated updates by the distribution. Listing 1 shows how to install via Git. The advantage of using Git is that the Git sources have the freshest version. If you are accustomed to working with Perl, a third way to install Rex is through Perl's CPAN archive [2].

Listing 1: Rex Installation with Git
01 git clone https://github.com/krimdomu/Rex.git
02 cd Rex
03 perl Makefile.PL
04 make
05 make test
06 make install

First Steps

Commands are transmitted to the remote Rex client through a so-called Rex file. Listing 2 shows a fairly simple example. The first three lines define the name of the user who executes the commands, the password, and the authentication method.

Listing 2: Simple Rex File
01 user "root";
02 password "secret";
03 pass_auth;
04
05 group server => "hercules", "sugar";
06
07 desc "Get the uptime of all servers";
08
09 task "uptime", group => "server", sub {
10    my $output = run "uptime";
11    say $output;
12 }

You might notice that Listing 2 holds the root password in unencrypted form. Rex also allows
authentication using SSH keys. For encrypted authentication, you need to create an RSA or DSA key pair without a password on the Rex host and then copy the public key over to the client (Listing 3). After that, a test login on the client without password should work. Check ~/.ssh/authorized_keys to make sure you haven't added extra keys that you weren't expecting.

Listing 3: Key-Based Authentication
01 root@hercules:~/.ssh# ssh‑keygen ‑t rsa
02 Generating public/private rsa key pair.
03 Enter file in which to save the key (/root/.ssh/id_rsa):
04 Enter passphrase (empty for no passphrase):
05 Enter same passphrase again:
06 Your identification has been saved in /root/.ssh/id_rsa.
07 Your public key has been saved in /root/.ssh/id_rsa.pub.
08 The key fingerprint is:
09 9b:e4:e2:27:92:04:4a:9b:ee:82:cc:9f:4d:4b:4d:c1 root@hercules
10 The key's randomart image is:
11 +‑‑[ RSA 2048]‑‑‑‑+
12 |                 |
13 | .               |
14 | E               |
15 | .. .            |
16 | ..o. .S         |
17 | .o . oo o       |
18 | = . +..+        |
19 | o+ B.o..        |
20 | o..o +.o        |
21 +‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑+
22 root@hercules:~/.ssh# ssh‑copy‑id root@sugar
23 root@sugar's password:

Then, change the first lines of the Rex file into the following:

user "root";
private_key "/root/.ssh/id_rsa";
public_key "/root/.ssh/id_rsa.pub";

The CPAN Net::OpenSSH module also supports the possibility of Kerberos as an authentication method.

Installing Packages

Listing 2 shows how you can use the Rex file to query the client for information such as the uptime. You can use the same technique to obtain other client values, such as the amount of free memory (free or vmstat), the fill factor of the hard disks (df), the network utilization (netstat), and the I/O performance (iostat). You can also easily filter and format the output with Perl.
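Once the Rex file is saved under the default name Rexfile, the defined tasks can be listed and launched from the shell of the Rex host; for example, using the uptime task from Listing 2:

rex -T        # list the available tasks and their descriptions
rex uptime    # run the uptime task on all hosts in the server group

Rex then connects to every member of the group over SSH in parallel and prints each host's output.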
If you want to archive the results, a tool like the System Activity Reporter sar or the Performance Co-Pilot PCP [3] would be more appropriate, because they are designed for long-term data handling. Sar comes with most Linux distributions (package sysstat), but it is not installed by default. So, you have to install it first, and Rex can even help with installing other tools.
Rex comes with several commands written in Perl, and one of these commands is install, which you can use to install software packages. Append the lines from Listing 4 to the Rex file. After that, type

rex install_sysstat

Listing 4: Package Installation
01 use Rex::Commands;
02 use Rex::Commands::Pkg;
03
04 desc "Install sar (sysstat)";
05 task "install_sysstat", group => "server", sub {
06    install package => "sysstat";
07 };

The install command automatically takes care of the translation into real commands for each platform on which it runs. Thus, install works well with a heterogeneous group of servers, as long as each group member belongs to the supported systems (CentOS 5/6, Debian 5/6, Fedora, Gentoo, Mageia, openSUSE, RHEL 5/6, Scientific Linux, Ubuntu version 10 or greater, Solaris 10/11, FreeBSD, NetBSD, OpenBSD). Rex knows how to install packages on these platforms and will use the appropriate command (rpm, apt, pkg, emerge, urpmi, opkg, yum, pkgadd, or zypper). However, all systems in a group have to use the same package name.

How to Edit Config Files

Sar is now installed, but it can't gather any data yet. First of all, you have to change the entry ENABLED= from false to true in the /etc/default/sysstat file. You can use Rex either to overwrite the whole sysstat file or to update the file with the necessary change. Use a script like the snippet in Listing 5 to change only the relevant word.
Rex actually has a special command for replacing text in a file:

task searchreplace => sub {
   sed qr{search}, "replace", U
   "/directory/file.txt";
};

Related commands allow you to delete lines that match a search pattern, to overwrite whole files, or to append lines. After changing a configuration file, you'll need to restart the service you are updating. As Listing 5 shows, Rex will again work with an abstraction. The restart command is translated into a command with the same name under Ubuntu or to an equivalent svcadm call under Solaris.

Listing 5: Editing the Configuration
01 task "enable_sar", group => "server", sub {
02    run qq(sed ‑i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat);
03    service "sysstat" => "restart";
04 };

User Management and Other Tasks

Rex offers numerous commands for typical tasks that occur regularly on the admin's agenda. For example, you can use Rex to create and delete groups, start and stop processes, manage cron jobs, manipulate iptables rules, load kernel modules, download files with scp, or edit system parameters with sysctl.
Rex also provides detailed information about the client systems. This information goes from a simple query of the operating system
(get_operating_system()) to a long list with all kinds of details (as shown in Listing 6).
Rex can also report details about hypervisors, and it can create, configure, start, stop, and destroy virtual machines with VirtualBox, KVM, and Xen. You can even use Rex to monitor a system running in Amazon's Elastic Compute Cloud (EC2).

Summary

The most impressive thing about Rex is its ease of use. An experienced admin only needs a short amount of time to learn the ropes. The requirements for the central Rex host and its clients are minimal, which means it won't take you long to start benefiting from the automation of the Rex environment. Rex helps you avoid typos and gives you access to a useful collection of well-documented system-management commands, but perhaps the biggest benefit is the time you'll save with parallel execution of common tasks on all clients.
One weakness of Rex is the lack of documentation. An online book about Rex isn't much more than an index of contents, and the FAQ consists of only a handful of questions. At least the documentation of the API functions seems complete. The error message documentation might help a Perl developer who can study the Perl sources, but it is not very helpful for the ordinary user.

Info
[1] Rex Repositories: [http://www.rexify.org/get/index.html]
[2] CPAN Archive: [http://search.cpan.org]
[3] Performance Co-Pilot: [http://oss.sgi.com/projects/pcp]
[4] Sysstat Homepage: [http://sebastien.godard.pagesperso‑orange.fr]
Listing 6: dump_system_information()
01 root@hercules:/home/jcb/Rex/test# rex sysinfo
02 [2014‑04‑25 10:49:34] INFO ‑ Running task sysinfo on sugar
03 [2014‑04‑25 10:49:34] INFO ‑ Connecting to sugar:22 (root)
04 [2014‑04‑25 10:49:34] INFO ‑ Connected to sugar, trying to authenticate.
05 [2014‑04‑25 10:49:35] INFO ‑ Successfully authenticated on sugar.
06 $memory_cached = '89'
07 $Kernel = {
08    kernelversion => '#25~precise1‑Ubuntu SMP Thu Jan 30 17:42:40 UTC 2014'
09    architecture => 'i686'
10    kernel => 'Linux'
11    kernelrelease => '3.11.0‑15‑generic'
12 }
13 $hostname = 'sugar'
14 $operating_system = 'Ubuntu'
15 $operatingsystem = 'Ubuntu'
16 $operating_system_release = '12.04'
17 $eth0_mac = '08:00:27:c4:a1:d8'
18 $VirtInfo = {
19    virtualization_role => 'guest'
20    virtualization_type => 'virtualbox'
21 }
22 $memory_shared = '0'
23 $Network = {
24    networkdevices => [
25       'lo'
26       'eth0'
27    ]
28    networkconfiguration => {
29       lo => {
30          broadcast => ''
31          ip => '127.0.0.1'
32          netmask => '255.0.0.0'
33          mac => ''
34       }
35       eth0 => {
36          broadcast => '192.168.111.255'
37          ip => '192.168.111.188'
38          netmask => '255.255.255.0'
39          mac => '08:00:27:c4:a1:d8'
40       }
41    }
42 }
43 $Swap = {
44    free => '509'
45    used => '0'
46    total => '509'
47 }
48 $eth0_ip = '192.168.111.188'
49 $swap_used = '0'
50 $Host = {
51    kernelname => 'Linux'
52    operating_system => 'Ubuntu'
53    hostname => 'sugar'
54    operatingsystemrelease => '12.04'
55    operatingsystem => 'Ubuntu'
56    domain => ''
57    operating_system_release => '12.04'
58    manufacturer => 'innotek GmbH'
59 }
60 $kernelversion = '#25~precise1‑Ubuntu SMP Thu Jan 30 17:42:40 UTC 2014'
61 $memory_total = '494'
62 $kernelrelease = '3.11.0‑15‑generic'
63 $operatingsystemrelease = '12.04'
64 $architecture = 'i686'
65 $domain = ''
66 $swap_free = '509'
67 $lo_broadcast = ''
68 $kernel = 'Linux'
69 $memory_used = '178'
70 $kernelname = 'Linux'
71 $swap_total = '509'
72 $memory_buffers = '12'
73 $lo_netmask = '255.0.0.0'
74 $lo_ip = '127.0.0.1'
75 $lo_mac = ''
76 $memory_free = '316'
77 $manufacturer = 'innotek GmbH'
78 $Memory = {
79    shared => '0'
80    buffers => '12'
81    free => '316'
82    used => '178'
83    total => '494'
84    cached => '89'
85 }
86 $ETH0_broadcast = '192.168.111.255'
87 $ETH0_netmask = '255.255.255.0'
TODAY'S
comeback. a very clear manager. In main window provides
something of a timely tu 13.10 The program displays Figure 1: The Blather
base from the package
1). After you click in the python-xli
b, for the voice con-
ter with speech main window (Figure Ubuntu, these are only starting and stopping
Controlling a compu command 2, python-
LISTEN TO ME
for a speech plejson, python-gtk
Listen, it waits can python-sim trol.
Alternatively, you cketsphinx, and
through the mic. gst0.10, python-po
s mode in which
the sphinx packages. comes from a
switch to Continuou gstreamer0.10-pocket manager. Each command
SPECIALS
globe
Marble virtual
sly. There are dis- the FileBrowser
inx isn’t in your
U
your school atlas: can also use File a few device. After remove the hash the Sox, Python-
small including quite generally a dictation the text editor and To use Palaver, install
mouse; too much
move- cities appear as additional maps, a buntu include a simple text editor of the follow- Xvkbd, Xauto-
you wield the are green, large the main as historical globes, s the startup, it opens mark (#) at the beginning Argparse, Wget, Espeak,
the ball rolling
too fast. bottom left of exotic ones, such Software Center, convenient into the mic packages via the
ment can start squares. At the find zones, and even
repre- ries created
where all the words
spoken
mation, and Zenity
the screen shows
a small Legend tab, you’ll map of climate Ubuntu’s Softwa centralized which provide by the softwar from Carnegie Mellon ing line:
and be sure that
the
window, on the Mars. Once you re Center lets you software manage s developers. e’s users or PocketSphinx [1] language commands
A
The middle of ng on the symbols and Saturn and ment and lets The reposito belted is the “other” software are written. Special package manager
r (which, dependi the descriptions
for all sentations of
it will install just about you search
- only the softwar ries contain strong “Start browser!” University (CMU) t editing. Thus, an
edi- YTES , Notify-osd, or No-
click Install, and new softwar not = -DSLM_SWAP_B
white crosshai detect). can remove elements anythin e and install for e, but also e will start allow subsequen #BYTESWAP_FLAG Notification-daemon
can be hard to choose a map,
free software, purcha g, including deletes all the text depending on the
colors. Here, you it with a mouse on which they the packag into the microphon
the background, used.
cover- View. click. With es
is If city names
are a in Map an Ubuntu depend. (How that’s what generally refer to tor clear command to tifyd are activated,
r lands on Earth from the map. become available sed apps, and can purcha One accoun these reposito
ries is covered
to tap into Firefox – at least, The applications . a terminal, change you need the sox,
Where the crosshai map in the you want to see,
remove
games. se software t, you speech rec- as back ends or previously interpreted After saving, open desktop. In Ubuntu,
the five leading free
simply with “What Are in the xvkbd, and
Many dis- on the stylized world as a ing up a feature d Regions.
credit card. a Repositories?”
section.) , such analysis assistance A window appearing after startup
the CMU-Cam _Toolkit_v 2/src subdirec- h, wget, espeak,
distributions. shown
the position appears before Populate SEARCHING (Blather, FreeSpeech FreeSpeech, Palaver, commands (Fig- make install. Then,
python-arg
the checkmark
The Debian ognition programs engines. Blather,
of all the major default, upper left, and the latitude and a particular place, packages (ending P ACKAG E M ANAGE the GNU GPLv3 li- shows all the available tory and execute xautomation packages.
virtual globe by another re- searching for
M
the educa- remove that Softwar and Vedics) promise. under folder develop-
arble is one of tros install the is a
licking
white dot. Double-c quickly take you If you want to If you’re
in the Search
field in the e Center offers in .deb)
Getting to reposito R IN A CTION Palaver, Simon, and Vedics are
ure 2). Here you
can modify a command
copy the programs
created in the
Next, download the current
s of the KDE Because Marble rs, deselect Coordi- enter its name archive files are actually to make input eas- still uses the older in the $PATH en- from GitHub [7]
tional program but not Ubuntu. will longitude indicato Marble containing ries and installin With that, they want cense, whereas Simon it. As of version 120,
n; it is , during installati
on it will gion of the world left of the main window. uration files, libraries, config- software is
often faster g individuals by double-clicking to a directory included ment version of Palaver
Software Collectio KDE program libraries,
upper type and, and executa mand line. from the com- ier and also help disabled
version 2. the option to con- such as /usr/local/ ZIP. Unzip the created
repositories amount of KDE there. nates.
global maps,
such results as you you install ble program The “Quick FreeSpeech provides vironment variable, using Download
therefore in the drag in a large You can add more begins building or the Firefox s. If goes into greater Sources” section
better operate the
desktop. with a virtual key- drive and execute
Given the the Map View press the Enter key ample, the packag
package manage e, for ex- being trol other programs bin. archive on the hard
100MB. by clicking as you look at managi detail. To begin, I’ll Four of these vendors
– Vedics
keys button in the [5], download the
adding up to
nearly
disks and SUNDOWN other lo-
as road maps,
left and choosing
an- as soon
on, a list of all
possible r distributes ng packages
you to decide for
BLATHER ed in Python and board. Click the Send Also from the web the ./setup command
from the Pala-
of modern hard a city or some choose a suggesti tu’s Softwar using Ubun-
(pay attention to
the as the root user.
enormous size this is When you click tab at the bottom and marks them
on the the necessa
ry files e Center. the exception – allow Blather [1] is programm text editor and speak
the key combina-
FreeSpeech archive ver-master directory
connecti ons,
displays a window
with A road map appears locations appear triggers an ac- you must install the personal details and
To access the
high-speed Internet other map type.
to the correct
Software Center, yourself what command to get it to work, the archive on the
, and it means
you cation, Marble point OpenStreetMap
(Figure
could conceiv- tion into the mic. ReleaseDate). Unzip You can skip the
the geographic in the package
places in the the Super key press
not really a problem when you click globe.
might come up [1] tion. A “Start browser!” PocketSphinx archive exclusively Eng- software using the Default
installed if information about indicates, the
data file- start page (Figure and enter softw. The a text editor – con-
interprets hard drive and start the
enter the language. Integrate
these libraries the Geonames
service The search results , system.
manager, along with
the Python FreeSpeech
that fact the degree
of di- Then, create a key-
will also have later. (Figure 2) from 3). As the name project at times. For example What’s New 1) features ably be used to open lish words, despite python freespeech
.py in the created
Plugins using Install.
other KDE apps appears beneath
the OpenStreetMap with odd results
Often depend and Top Rated a few GTK (in Ubuntu, y good. In our
you want to use at un- [1]. If the point comes from the on s of Law- en- fusing, yet possible. Gstreamer and Python the system settings
detection is not particularl board shortcut in
several instance Clicking More applications.
looks somewh element, you
need an Internet connecti Marble found
cies exist, which on the far right s do not analyze 0.10, and pock- rectory.
Although Marble than just a compass or another [2]. You’ll need hometown of means that additional gives you The five application python-gtk2, python-gst inx background pro-
provides more at. maps. – apart from the install- choices. s; they leave inx isn’t part of case, the PocketSph
impressi ve, it
to rotate the globe
somewh to access the rence parts of ing program If you click speech patterns themselve etsphinx). If PocketSph clearly spoken “Hello
can plan a trip
inven-
additional informat
ion a spot can take
some
Ubuntu User
– all in different A re- Turn On Recomm
software. As a rule follow the instruction
s cess interpreted a PALAVER
world globe. You for a bike If the boxes for Navigating to same quires that at the bottom
, Ubuntu suggest endations that task to other your distribution, as “An over To open” to the two previously
men-
profile View | Info Boxes
on resolves it. The you
p Sphinx” box. World” curiously In contrast rec-
tory, set the altitude bother you, click time until Marble have program a user accoun s opening
in the “Three-Ste try yielded “An the Palaver speech
fly over the moon. info B in- t at Ubuntu
download the cur- (Figure 3). The second tioned programs,
tour, or even and unlock the goes for maps
that stalled also. your comput One. Then,
From Gitorious [2], also written in Py-
and off to lock them The er regularly
sends lists adult wall.” ognition program,
can drag-and-drop rain- package manage software you’ve version of Blather. documentation, you
you
rent development
show average of
boxes. Or, you n r installed to
According to the interface. Instead,
To reach the navigatio identifies the and the compan Canonical archive, rename the thon, has no user
ROLLING the Earth to another place. fall. Satellite View de- y
mended softwar returns a list of recom- After unzipping the recognition rate by
cor-
voice input using
a
Marble shows right, you need
to a cur- pendent packag in commands and can improve the start and stop the
After startup, screen, as bar at the bottom doesn’t show es e that might
file commands.tmp text in the editor
and shortcut. Pala-
right side of the to the edge, where
it automatically These recomm
endations appear
interest you.
enter the desired
Eng- recting the failed freely selectable keyboard
image on the stars as its move the pointer rent world picture dur- start page and use a text editor to test unfortunately spoken text to Google
a universe of You can toggle
the to
ing installa
tion and can be deactiv on the
. Begin each clicking Learn. My in ver then sends the
in Figure 1, with edge, turns into a hand. rs. but dates back loads them View | Turn ated with lish-language commands of error messages a certain trust of
the bottom right with View | Crosshai d Figure 1: Software onto Off Recomm letter followed by produced a number – naturally requiring
background. At You al- crosshair display just win- pictures publishe Center encourag the comput global menu. In the endatio ns in the line with an uppercase giant (Figure 4).
navigation bar. nd stars are not es you to browse. er. Usu-
purchase program meantime, you can shell com- the search engine
you’ll find the The backgrou by NASA [3] in ally, you will a colon and the executable s loudly
ready may be familiar with this from
but are a true represen ta-
2004. In this view, tice the depend
no- One account.
s using your
Ubuntu THREE-STEP SPHINX If you voice the command them far
the top are dow dressing mand. the Bison package and clearly, Palaver
recognizes
The arrows at sky. If you wish,
you
you can also see cies when
en- Use the arrow blather di- To begin, integrate
Google Maps. and tion of the night You you try icons in the Next, create the ~/.config/ into it e, Perl. From the its competitors. You
globe. The plus
off with View
| Stars. to install a to return to
the start window upper left commands file more readily than
for rotating the out. can turn them cloud cover, which single rectory, copy the and, when appropriat English com-
you zoom in and te the simulate
d sur- Debian packag with the release . Starting
y from the Blather
di- the sphinxbase, can also learn which
minus slider lets and the can also deactiva you can remove e of Ubuntu
and run ./Blather.p web [4], download ds by using
is at bottom left,
ere with View
| Atmo- over the Interne Ubuntu develop 12.10,
program seems to sphinxtrain pack- mands Palaver understan the program
The scale ruler bar shows rounding atmosph with View | Clouds. t. free and comme
ers have placed
rectory. When the pocketsphinx, and in FreeSpeech shows all
bottom status appears as a halo
around on The packag ads for
Ctrl+C. Then, upload install them the ./plugin -l command Figure 2: After startup,
use to manipulate the
Altitude in the Marble etMap e you rcial softwar and
which Finally, from OpenStre Software Center end it with ages. Unzip them
the commands you can
e from the
Earth of the view. sphere, the the “flat” road map
are looking crash,
tences.corpus procedure directory.
the distance from 1. shows not only 3: Marble projects blurring of for, across the
using the usual three-step command, for ex- save it as a text file.
the globe in Figure some distortion or the ~/.config/blather/sen
you such as a video of the screen broad
Instead of the
navigation bar,
Earth but the
moon. Figure which can lead to
under the heading surface
Knowledg e Base Tools you start with the The “Open music” file dictated text and even
Music folder in the
vorites.
mouse. Moving
the mouse
you the rounded globe, codec, may
be
Our Fa- file to the Sphinx in Listing 1, where
ample, opens the
can use your turns it into a MOONSTRUCK hical In Map View, text.
missing in
the Soft-
The left side
of the Softwar website [3]. base package.
image shows a topograp Base
across the globe
grab and Marble usually can click the Earth Figure 2: Clicking
ware Center.
In
screen organiz
es the applica
e Center
After clicking Compile
Knowledge
so that you can likely recogniz
e from a category opens this case, you egory. Clickin tions by cat- the generated file ISSUE 20 45
hand pointer lets map that you’ll ga on the website, save
egories (Figure category opens subcat-
ISSUE 20 15
The mouse wheel up subcategories. find the packag
can
rotate the globe. how e 2). All other
and out. Be careful external reposito in at the top of controls are USER
you zoom in - the packag
e management .COM • UBUNTU
window. The WWW.UBUNTU-USER
UBUNTU USER 86 ISSUE
All Softwar
e menu lists 5/2/14 12:46:07
USER.COM • 2018:03:42
4/2/14 all
• WWW.UBUNTU-USER
.COM
WWW.UBUNTU- UBUNTU USER
USER.COM
• WWW.UBUNTU- d 45
UBUNTU USER 086-088_P
ackages_A UBUNTU USER
44 ISSUE 20 044-048_VoiceRecognition.ind
A2.indd 86 • WWW.UBUNTU 5/2/14 12:46:03
d 15 -USER.COM
014-017_Marble_AA2.ind
4/2/14 18:03:36
14 ISSUE 20
d 44
044-048_VoiceRecognition.ind
4/2/14 18:57:12
d 14
014-017_Marble_AA2.ind
Workaround
Managing your cluster could be so simple if it weren’t so complicated. The object of many an admin’s wrath in
such cases is often a single component: Pacemaker. Luckily, other open source tools offer alternative options
for high availability. By Martin Loschwitz
The Pacemaker cluster resource manager is not the friendliest of applications. In the best case scenario, Pacemaker will keep the load balanced on your cluster and monitor the back-end servers, rerouting work to maintain high availability when a system goes down. Pacemaker has come a long way since the release of its predecessor, Heartbeat 2 – an unfriendly and unstable tool that was only manageable at all by integrating XML snippets. Pacemaker no longer has the shortcomings of Heartbeat, but it still has not outgrown some of the original usability issues. The Pacemaker resource manager has ambitious goals.
Throughout its existence, bugs have repeatedly reared their heads, thereby undermining confidence in the Pacemaker stack. Admins often face a difficult choice: Introducing Pacemaker might solve the HA problem, but it means installing a "black box" in the environment that apparently does whatever it wants. Of course, not having high availability at all is not a genuine alternative. This dilemma leads to the question of whether it is possible to achieve meaningful high availability in some other way. A number of FOSS solutions vie for the admin's attention.

Understanding High Availability

When you sit down at your computer at three o'clock in the morning, you expect your provider's email service to work just as it would at three in the afternoon. In this kind of construct, users are not interested in which server delivers their mail, they just want to be able to access their mail at any time.
The "failover" principle achieves transparent high availability by working with dynamic IP addresses. An IP address is assigned to a service and migrates with the service from one server to another if the original server fails. This kind of magic works well for stateless protocols; in HTTP, for example, it does not matter whether every request is answered by the same server.
Figure 2: HAProxy is a classic load balancer and ensures that incoming requests are distributed to many back ends.
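To give a feel for the load-balancing approach shown in Figure 2, the following is a minimal sketch of an HAProxy front end with two web back ends. The article prints no listing for this, so the addresses, server names, and health-check settings are illustrative assumptions only:

listen web
    bind 192.168.0.10:80        # virtual service address (assumed)
    mode http
    balance roundrobin
    option httpchk GET /        # simple HTTP health check
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check

If a back end fails its health check, HAProxy simply stops sending requests to it, which is what keeps the service reachable while a server is down.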
Tools such as HAProxy can provide high availability for simple services with the help of load balancers. A setup in which HAProxy ensures the permanent availability of a service involves some challenges. Challenge number 1 is the availability of data; you will usually want the same data to be available across all your web servers. To achieve this goal, you could use a classic NFS setup, but newer solutions such as GlusterFS are also definite options.
Factory-supplied appliances often have built-in HA features out of the box so that admins do not have to worry about implementing it. However, if you use plain vanilla HAProxy, a basic cluster manager is necessary. This alone does not question the concept, because the cluster manager in this example would only move an IP address from one computer to another and back – unless you use VRRP.

Variant 2: Routing with VRRP

One scenario in which the use of Pacemaker is particularly painful involves classic firewall and router systems. If you do not use a hardware router from the outset, you are likely to assume that a simple Linux-based firewall is probably the best solution for security and routing on the corporate network. Although there is nothing wrong with this assumption, you rarely need more than one DHCP server or a huge number of iptables rules. Of course, the corporate firewall must somehow be highly available; if the server fails, work grinds to a standstill in the company.
The classic approach using Pacemaker involves operating services such as the DHCP server or the firewall, along with the appropriate masquerading configuration, as separate services in Pacemaker. This method inevitably drags in the entire Pacemaker stack, which is precisely the component you wanted to avoid. VRRP can provide efficient routing functions on multiple systems.
VRRP stands for "Virtual Router Redundancy Protocol." It dates back to 1998, and companies such as IBM, Microsoft, and Nokia were involved in its development. VRRP works on a simple principle: Rather than acting as individual devices on a network, all routers configured via VRRP act as a logical group. The logical router has both a virtual MAC and a virtual IP address, and one member of the VRRP pool always handles the actual routing based on the given configuration. Failover is an inherent feature of the VRRP protocol: If one router fails, another takes over the virtual router MAC address and the virtual IP address. For the user, this means, at most, a small break but not a noticeable loss.
On Linux systems, VRRP setups can be created relatively easily through the previously mentioned keepalived software [2]. Keepalived was not initially conceived for VRRP, but it's probably the most common task for keepalived today.

Variant 3: Inherent HA with ISC DHCP
After using VRRP on your firewall/router to take care of routing, the next issue is DHCP. Most corporate networks use DHCP for dynamic distribution of addresses; frequently, dhcpd is used in a typical failover configuration. ISC DHCPd comes with a handy feature that removes the need for Pacemaker. Much like the VRRP protocol, it lets you create "DHCP pools." The core idea behind these pools is that multiple DHCP servers take care of the same network; a DHCP request can therefore in principle be answered by more than one DHCP server. To avoid collisions in the addresses, the two DHCP servers keep their lease databases permanently synchronized.
In the case of failure of one of the two servers, a second node is still left. The pool-based solution thus offers the same kind of availability as a solution based on a cluster manager, but with considerably less effort and less complexity. The same rules apply as for VRRP; the combination of VRRP on one hand and a DHCP pool on the other gives admins a complete solution.
Figure 3: Galera ensures seamless high availability at the MySQL layer, but it takes a load balancer to make the program easy to use for most applications.
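The article does not include a listing for the pool-based setup, but a two-server ISC DHCP failover configuration typically looks roughly like the following excerpt from dhcpd.conf on the primary server; the addresses, timers, and peer name are assumptions for illustration:

failover peer "dhcp-ha" {
    primary;                      # the partner server declares "secondary;"
    address 192.168.1.1;          # this server
    peer address 192.168.1.2;     # the partner server
    port 647;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;                    # maximum client lead time (primary only)
    split 128;                    # share the address pool roughly 50/50 (primary only)
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.254;
    pool {
        failover peer "dhcp-ha";
        range 192.168.1.100 192.168.1.200;
    }
}

Both servers answer requests for the same range and synchronize their lease databases over the failover channel, so losing one of them does not interrupt address assignment.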
Figure 4: Ceph is not a candidate in this article, but of all the programs I looked at, it is the most sensible option.
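To make the VRRP variant described earlier more concrete, a minimal keepalived configuration for the master router could look like the sketch below. The interface name, router ID, password, and virtual IP are assumptions, not values from the article; the backup router would use state BACKUP and a lower priority:

vrrp_instance VI_1 {
    state MASTER
    interface eth0                # interface carrying the virtual IP (assumed)
    virtual_router_id 51
    priority 100                  # the backup peer uses a lower value, e.g., 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret          # placeholder password
    }
    virtual_ipaddress {
        192.168.1.254/24          # the "logical router" address clients use
    }
}

If the master fails, the backup stops seeing VRRP advertisements, takes over the virtual MAC and IP address, and routing continues with, at most, a short interruption.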
Nerve Center
Ganglia is probably the most popular monitoring framework and tool in HPC, Big Data, and even cloud systems. In this article, we show you how to install and configure Ganglia and get it up and running on a simple two-node system. By Jeff Layton
When you know better, you do better. – Maya Angelou

Monitoring clusters and understanding how the cluster is performing is key to helping users better run their applications and to optimizing the use of cluster resources. Such information is valuable for a variety of reasons, including understanding how the cluster is being used, how much of the processing capability is being used, how much of the memory is being used for user applications, and what the network is doing and whether it is being used for applications. This information can help you understand where you need to make changes in the configuration of the current cluster to improve the utilization of resources. Moreover, this information can help you plan for the next cluster.
In a past blog post, I looked at monitoring from the perspective of understanding what is happening in the system [1] (metrics) and how important it can be to understand the frequency at which you monitor the system.
Having so many options and opinions is not a bad thing, but you need to sort through the ideas to find something that works for you and your situation. In two further blog posts [3] [4], I wrote some simple scripts to measure metrics on a single server as a starting point for use in a cluster. This code measured the processes of interest by collecting data on an individual node basis.
Now it's time to look at monitoring frameworks where, I hope, the scripts will be useful for custom monitoring and perhaps provide a nice visual representation of the state of the cluster. A non-exhaustive list of monitoring frameworks that people use to monitor system processes includes the following [5]-[12]:
• Monitorix
• Munin
• Cacti
• Ganglia
• Zabbix
• Zenoss Community
• Observium
• GKrellM
In this article, I present Ganglia as a monitoring framework.

A Few Words

Ganglia has been in use for several years (since about 2001). Because of the sheer size of the systems involved, the HPC world has been using Ganglia, and in the past few years, the Big Data and Hadoop communities have been using it a great deal, primarily for its scalability and extensibility. The OpenStack and cloud communities frequently use it, too.
Ganglia has grown over the years and has gained the ability to monitor very large systems – into the 1,000-node range – as well as the ability to monitor close to 1,000 metrics for each system. You can run Ganglia on a number of different platforms, making it truly flexible. Additionally, it can use custom metrics written in a variety of languages, including C, C++, and Python. Ganglia has also gained a new web interface with custom graphs. At a high level, Ganglia comprises three parts: The first part is gmond, which
is installed on every node you want to monitor; it collects that node's metrics and announces them to its peers. At the same time, each gmond also listens to the announcements from its peers so that every node knows the metrics of all of the other nodes. The advantage of this architecture is that the master node only needs to communicate with one node instead of having to communicate with every single node. It also makes Ganglia more robust because, if a node dies, you can just talk to the other nodes to access the information.
The second part is called gmetad, which you install on the master node. With a list of gmond nodes, it just polls a gmond-equipped node in a cluster and gathers the data. This data is stored using RRDtool [13]. The third piece is called gweb and is the Ganglia web interface. This second-generation web interface offers custom graphs that you can create to match your situation and needs.

Configure/Compile/Install

When working with a new tool, you have two choices: build from source or install pre-built binaries. Typically, I like to build from source the first time I use a new tool, so I can better understand the dependencies and how the tool is built. I spent some time building Ganglia following a blog by Vladimir Vuksan [15], who has some pre-built binaries that saved my bacon. The specific versions of the system software I use are listed in Table 1.

Table 1: Software
Software       Version
CentOS         6.5
Ganglia        3.6.0
Ganglia web    3.5.12
Confuse        2.7
RRDtool        1.3.8

Before installing any binaries, I try to install the prerequisites rather than totally rely on RPM or Yum to resolve dependencies. I think this forces me to pay closer attention to the software rather than installing things willy-nilly. Thus, I installed the following packages:

yum install php
yum install httpd
yum install apr
yum install libconfuse
yum install expat
yum install pcre
yum install libcmemcached
yum install rrdtool

The php and httpd packages are used for the web interface. I also recommend turning off SELinux (you can Google how to do this). I also turned off iptables for the purposes of this exercise. If you need to keep iptables turned on, please refer to Sharma's blog [14] and go to the bottom for details on how to configure iptables rules for Ganglia. If you get stuck, please email the Ganglia mailing list and ask for help. Now I'm ready to install Ganglia!

Installing on the Master Node

Vladimir Vuksan also has created a set of CentOS 6 RPMs for the latest version of Ganglia [16]. These RPMs install on CentOS 6.5 and make your life a great deal easier. I just used the rpm command and pointed to the URL for the RPMs. For the master node, I started with gmond (Listing 1). Next, I installed gmetad on the master node (Listing 2). Remember, you only need to install this on one node, typically the master node, for a basic configuration.

Listing 1: gmond Modules
rpm -ivh http://vuksan.com/centos/RPMS-6/x86_64/ganglia-gmond-modules-python-3.6.0-1.x86_64.rpm
         http://vuksan.com/centos/RPMS-6/x86_64/libganglia-3.6.0-1.x86_64.rpm
         http://vuksan.com/centos/RPMS-6/x86_64/ganglia-gmond-3.6.0-1.x86_64.rpm

Ganglia installed very easily using these binaries, but I still had to do some configuration before declaring victory. Before configuring Ganglia, however, I wanted to find out where certain components from the RPMs landed in the filesystem. The binaries gmond and gmetad were installed in /usr/sbin. (Note: If you build from source, they go into /usr/local/sbin.) You can use the command whereis gmond to see where it is installed. The man pages were installed in the usual location, /usr/share/man. You can check this by running man gmond. The binaries also installed key init scripts into /etc/rc.d/init.d/ that are used for starting, stopping, restarting, and checking the status of the gmond and gmetad processes. The scripts use Ganglia configuration files located in /etc/ganglia. To see the files in this directory, use the tree command (Figure 1). (Note: You may have to install this command with yum install tree.)
Ganglia has two files on which to direct your focus: gmond.conf and gmetad.conf. The subdirectory conf.d contains the configuration files for individual metric modules.
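The page that walks through these two files is not reproduced in this excerpt, so the following is only a rough sketch of the kind of settings involved. The cluster name is an assumption (chosen to match the data_source label used later), and the multicast channel reflects the defaults that also appear in Listing 8:

# /etc/ganglia/gmond.conf (excerpt)
cluster {
  name = "Ganglia Test Setup"    # cluster label; should match the data_source name in gmetad.conf
}
udp_send_channel {
  mcast_join = 239.2.11.71       # default multicast group
  port = 8649
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
  bind = 239.2.11.71
}
tcp_accept_channel {
  port = 8649                    # gmetad polls this port for the XML metric dump
}

The data_source line in gmetad.conf, which names the cluster and the host to poll, is covered below.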
I won't list any of the Python code, but the Python module directory is where you will put any Python metrics you write. Before you can use the Python metrics, you have to tell Ganglia about them in the *.pyconf files in /etc/ganglia/conf.d (see Figure 1). In these files, you define the metrics and how often to collect them.
Configuring gmetad is easy. First, look for a line in the file that reads

data_source "my cluster" localhost

and change it to

data_source "Ganglia Test Setup" 192.168.1.4
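As an illustration of what such a *.pyconf file looks like, here is a minimal sketch for a hypothetical Python module; the module and metric names are made up for the example and do not come from the article:

# /etc/ganglia/conf.d/example.pyconf (hypothetical)
modules {
  module {
    name = "example"            # must match the Python module file example.py
    language = "python"
  }
}

collection_group {
  collect_every = 30            # run the metric handler every 30 seconds
  time_threshold = 60           # send the value at least every 60 seconds
  metric {
    name = "example_metric"     # metric name reported by the module
    title = "Example Metric"
  }
}

The python modules package typically ships similar example files, which make a convenient starting point for your own metrics.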
Testing gmond and gmetad

To run and test (debug) gmond from the command line, I'll run it "by hand," telling it that I'm "debugging." Sometimes this process produces a great deal of output, so I'll capture it using the script command (Listing 7). Remember to use Ctrl+C (^c) to kill gmond and then Ctrl+D (^d) to stop the script.

Listing 7
[root@home4 tmp]# script gmond.out
[root@home4 tmp]# gmond -d 5 -c /etc/ganglia/gmond.conf
[root@home4 tmp]# ^c
[root@home4 tmp]# ^d

Take a look at the top of the file and you should see some output that looks like Listing 8, which indicates that gmond is working correctly. If everything is running correctly – at least as far as you can tell – then start up the gmetad and gmond daemons and make sure they function correctly (Listing 9). You should see the OK output from these commands (one from each). If you don't, you have a problem and should go back through the steps.

Listing 8: gmond Test Output
[root@home4 tmp]# gmond -d 5 -c /etc/ganglia/gmond.conf
loaded module: core_metrics
loaded module: cpu_module
loaded module: disk_module
loaded module: load_module
loaded module: mem_module
loaded module: net_module
loaded module: proc_module
loaded module: sys_module
loaded module: python_module
udp_recv_channel mcast_join=239.2.11.71 mcast_if=NULL port=8649 bind=239.2.11.71 buffer=0
socket created, SO_RCVBUF = 124928

If you got two OKs, then you can also check whether the processes are running and the ports are configured correctly (Listing 10). Notice that port 8649 is in use, so everything's good at this point. Now I'm ready to install the web interface!
For the web interface, I followed the gweb installation instructions. For my installation, I edited Makefile and made just four changes:
(1) At the top of the file, change the GDESTDIR line to:
GDESTDIR = /var/www/html/ganglia
This is where the Ganglia web interface will be installed.
(2) Change the GWEB_STATEDIR line to:
GWEB_STATEDIR = /var/lib/ganglia-web
(3) Change the GMETAD_ROOTDIR line to:
GMETAD_ROOTDIR = /var/lib/ganglia
(4) Change the APACHE_USER line to:
APACHE_USER = apache
Once these changes are made, you can simply run make install to install the Ganglia web pieces. Now comes the big test. In your browser, open the URL for the Ganglia web page as http://192.168.1.4/ganglia (recall that in gmetad.conf I told it that the data source was 192.168.1.4). You should see something like the image in Figure 2.
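Listings 9 and 10 are not reproduced in this excerpt. The commands they correspond to would look roughly like the following sketch, assuming the init scripts installed by the RPMs and the default ports; treat the exact output as an assumption rather than the original listing:

[root@home4 ~]# /etc/rc.d/init.d/gmond start
Starting GANGLIA gmond:                    [  OK  ]
[root@home4 ~]# /etc/rc.d/init.d/gmetad start
Starting GANGLIA gmetad:                   [  OK  ]
[root@home4 ~]# netstat -tulpn | grep -E 'gmond|gmetad'

With the default configuration, gmond listens on port 8649 and gmetad on ports 8651 and 8652, so those are the entries to look for in the netstat output.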
Notice that on the left-hand side of the image, near the top of the web page, the number of Hosts up: is 1 and that it has eight CPUs. Plus, the charts are populated. (I took the screen capture after letting it run a while, so the charts actually had real data.)
Remember that the default refresh or polling interval is 15 seconds, so it might take a couple of minutes for the charts to show you much. Be sure to look at the data below the charts. If the values are reasonable, then most likely things are working correctly.

Ganglia Clients

As soon as you have the master node up and running, it's time to start getting other nodes to report their information to the master. This isn't difficult; I just installed the gmond RPMs on the client node. I used the exact same command I did for installing gmond on the master node. The output looks something like Listing 11. I did have to make sure I had libconfuse installed on the client node before installing the RPMs.

Listing 11
Retrieving http://vuksan.com/centos/RPMS-6/x86_64/ganglia-gmond-modules-python-3.6.0-1.x86_64.rpm
Retrieving http://vuksan.com/centos/RPMS-6/x86_64/libganglia-3.6.0-1.x86_64.rpm
Retrieving http://vuksan.com/centos/RPMS-6/x86_64/ganglia-gmond-3.6.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:libganglia             ########################################### [ 33%]
   2:ganglia-gmond          ########################################### [ 67%]
   3:ganglia-gmond-modules-p########################################### [100%]

The next steps are pretty easy; just start up gmond:

/etc/rc.d/init.d/gmond start
Starting GANGLIA gmond:                    [  OK  ]
[root@test8 ~]# /etc/rc.d/init.d/gmond status
gmond (pid 3250) is running...

At this point, gmond should be running on the client node. You can check this by searching for it in the process table; if it is not there, go back through the steps and look for problems. (Maybe you are missing something on the client system?)
The next step is to add the client to the list of nodes on the master node. As root, edit the file /etc/ganglia/gmetad.conf, then go to the line that starts with data_source and add the IP address of the client node. In my case, the IP address of the client is 192.168.1.250. Now save the file and restart gmetad using:

/etc/rc.d/init.d/gmetad restart

If you are successful, you should see the second node in the web interface (Figure 3).
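After this edit, the data_source line in /etc/ganglia/gmetad.conf would look something like the line below. The article does not print the final line, so this is an assumption based on the values it uses (the master at 192.168.1.4 and the client at 192.168.1.250):

data_source "Ganglia Test Setup" 192.168.1.4 192.168.1.250

Listing more than one host on a data_source line also gives gmetad fallback sources to poll if the first host is unreachable.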
Notice that both nodes are shown: home4 (master node) and 192.168.1.250 (client). In the upper left-hand corner, you can see the number of hosts that are up (2). I also scrolled down a bit in the web page so you could see the "1-minute" stacked graph (called load_one). Notice that you see both hosts in this graph (green = home4, red = 192.168.1.250). I think I can declare success at this point.

Summary

Ganglia is quickly becoming the go-to system monitoring tool, particularly for large-scale systems. It has been in use for many years, so most of the kinks have been worked out, and it has been tested at very large scales just to make sure. The new web interface makes data visualization much easier than before; it was designed for and written by admins.
For this article, I installed Ganglia on my master node (my desktop), which is running CentOS 6.5. Although I originally tried to build everything myself, it's not easy to get right. (You might get it running, but it might not be correct.) Thus, I downloaded the RPMs that one of the developers makes available. Installation was very easy (rpm -ivh), as was configuration. Gweb was also easily built and installed, and within seconds, I could visualize what was happening on my desktop. This was nice, but I also wanted to extend my power grab to other systems, so I installed the gmond RPMs on a second system, told the master node about it, and, bingo, the new node was being monitored.
Ganglia has a number of built-in metrics that it monitors, primarily from the /proc filesystem. However, you can extend or add your custom metrics to Ganglia via Python or C/C++ modules. In a future blog, I'll write about integrating the metrics I wrote previously into the Ganglia framework. Until then, give Ganglia a whirl. I also highly recommend the well-written book Monitoring with Ganglia [18] from O'Reilly, which explains the inner workings of Ganglia.

Info

[1] "Monitoring HPC Systems: What Should You Monitor?" by Jeff Layton: [http://www.admin-magazine.com/HPC/Articles/HPC-Monitoring-What-Should-You-Monitor]
[2] BeoBash: [http://www.slideshare.net/insideHPC/beo-bash-2013-podcast]
[3] "Monitoring HPC Systems: Processor and Memory Metrics" by Jeff Layton: [http://www.admin-magazine.com/HPC/Articles/Processor-and-Memory-Metrics]
[4] "Monitoring HPC Systems: Process, Network, and Disk Metrics" by Jeff Layton: [http://www.admin-magazine.com/HPC/Articles/Process-Network-and-Disk-Metrics]
[5] Monitorix: [http://www.monitorix.org]
[6] Munin: [http://munin-monitoring.org]
[7] Cacti: [http://www.cacti.net]
[8] Ganglia: [http://ganglia.sourceforge.net]
[9] Zabbix: [http://www.zabbix.com]
[10] Zenoss Community: [http://community.zenoss.org/]
[11] Observium: [http://www.observium.org/]
[12] GKrellM: [http://en.wikipedia.org/wiki/GKrellM]
[13] RRDtool: [http://oss.oetiker.ch/rrdtool/]
[14] "Installing Ganglia" by Sachin Sharma: [http://sachinsharm.wordpress.com/tag/installing-ganglia/]
[15] Vuksan's World: [http://vuksan.com]
[16] CentOS RPMs: [http://vuksan.com/centos/RPMS-6/x86_64/]
[17] Ganglia on Sourceforge: [http://sourceforge.net/projects/ganglia/files/ganglia-web/]
[18] Massie, Matt, Bernard Li, and Brad Nicholes. Monitoring with Ganglia. O'Reilly, 2012
[19] SecOPS/SysOp blog: [http://maciek.lasyk.info/sysop/]
Acknowledgments
A quick thank you to Maciej Lasyk [19] and Vladi‑
mir Vuksan. Lasyk spent a great deal of time on
the Ganglia developers list doing his absolute
best to help me build Ganglia from source and
get it working. He was very polite and very
determined to help me succeed, but in the end,
he convinced me that I totally screwed up my
installation and I was better off installing the
RPMs. I hate to admit defeat, but he was correct.
Vuksan’s RPMs were my saving grace in installing
Ganglia. Thank you, Maciej and Vladimir.
The Author
Jeff Layton has been in the HPC business for
almost 25 years (starting when he was 4 years
old). He can be found lounging around at a
nearby Frys enjoying the coffee and waiting for sales.
Figure 3: Ganglia showing the host and the client node.
Service: Contact Info / Authors
Write for Us
Admin: Network and Security is looking for good, practical articles on system administration topics. We love to hear from IT professionals who have discovered innovative tools or techniques for solving real-world problems.
Tell us about your favorite:
• interoperability solutions
• practical tools for cloud environments
• security problems and how you solved them
• ingenious custom scripts
• unheralded open source utilities
• Windows networking techniques that aren't explained (or aren't explained well) in the standard documentation.
We need concrete, fully developed solutions: installation steps, configuration files, examples – we are looking for a complete discussion, not just a "hot tip" that leaves the details to the reader.
If you have an idea for an article, send a 1-2 paragraph proposal describing your topic to: edit@admin-magazine.com.
Contact Info
Editor in Chief: Joe Casad, [email protected]
Managing Editor: Rita L Sooby, [email protected]
Senior Editor: Ken Hess
Contributing Editors / Test Lab Staff: Andreas Bohle, Jens-Christoph Brendel, Hans-Georg Eßer, Markus Feilner, Mathias Huber, Anika Kehrer, Kristian Kißling, Jan Kleinert, Thomas Leichtenstern, Jörg Luther, Nils Magnus
Localization & Translation: Ian Travis
News: Joe Casad
Proofing and Polishing: Amber Ankerholz
Layout: Dena Friesen, Lori White
Cover Design: Dena Friesen, Illustration based on graphics by Otmar Winterleitner, Fotolia.com and artqu, 123rf.com

Authors
Jens-Christoph Brendel 28, 82
David J. Dodd 18
Joseph Guarino 38, 60
Ulrich Habel 44
Alexander Hass 10
Ken Hess 3
Thomas Joos 76
Anna Kobylinska 28
Jeff Layton 92
Martin Loschwitz 80, 86
Filipe Pereira Martins 28
Dr. Willi Nüßer 10
Andrej Radonic 54
Tim Schürmann 24, 70

Marketing Communications: Darrah Buren, [email protected]
Customer Service / Subscription:
For USA and Canada: Email: [email protected], Phone: 1-866-247-2802 (toll-free from the US and Canada), Fax: 1-785-856-3084
For all other countries: Email: [email protected], Phone: +49 89 9934 1167, Fax: +49 89 9934 1199
ADMIN – c/o Linux New Media USA, 616 Kentucky St., Lawrence, KS 66044, www.admin-magazine.com

Advertising – North America: Ann Jesse, [email protected], phone +1 785 841 8834
Advertising – Europe: Penny Wilby, [email protected], phone +44 1787 211100
Advertising – All other countries: Petra Jaser, [email protected], Phone: +49 89 9934 1124; Michael Seiter, [email protected], Phone: +49 89 9934 1123
Product Management: Christian Ullrich
Corporate Management (Vorstand): Hermann Plank, [email protected]; Brian Osborn, [email protected]
Publisher: Brian Osborn, [email protected]

While every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the DVD provided with the magazine or any material provided on it is at your own risk.
Copyright and Trademarks © 2014 Linux New Media Ltd.
No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, email, faxes, photographs, articles, drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing.
All brand or product names are trademarks of their respective owners. Contact us if we haven't credited your copyright; we will always correct any oversight.
Printed in Germany
Distributed by COMAG Specialist, Tavistock Road, West Drayton, Middlesex, UB7 7QE, United Kingdom
ADMIN (ISSN 2045-0802) is published bimonthly by Linux New Media USA, LLC, 616 Kentucky St, Lawrence, KS 66044, USA, and Linux New Media Ltd, Manchester, England. Company registered in England. Periodicals Postage pending at Lawrence, KS. POSTMASTER: Please send address changes to ADMIN, 616 Kentucky St., Lawrence, KS 66044, USA.