Linux User & Developer 194 – Sysadmin Survival Guide

On the cover: DNA computing with a double helix – data that lasts for hundreds of years! • Make the most of KDE's Plasma desktop • Bot wars: AI is changing the face of security • 48 pages of tutorials • Host a Git server + GitHub alternatives • Make great GUIs • Code a YouTube clone
Welcome

Future PLC, Quay House, The Ambury, Bath BA1 1UA

Editorial
Editor Chris Thornett – [email protected] – 01202 442244
Designer Rosie Webber
Production Editor Ed Ricketts

Contributors
Dan Aldred, Mike Bedford, Joey Bernard, Christian Cawley, John Gowers, Toni Castillo Girona, Jon Masters, Cliff Newman, Paul O'Brien, Mark Pickavance, Calvin Robinson, Mayank Sharma, Shashank Sharma, Bill Thomas, Kevin Wittmer
Contents – Issue 194, July 2018
• KDE Plasma: power user's guide (p64)
• Computing with a double helix
• Special report: Machine learning in security
• Free downloads (p94) – we've uploaded a host of new free and open source software this month
facebook.com/LinuxUserUK | Twitter: @linuxusermag
Practical Pi
72 Pi Laser Tag: Using a couple of Pi Zero W machines, some 3D printing for the 'guns' and a simple desktop server, Terran Gerratt created his own laser tag game
74 Create a Gaming Controller Prototype for the Pi: It's remarkably simple to create a basic console-style controller for the Raspberry Pi, with some breadboard and a few push buttons connected to the GPIO ports. We show you how
76 Getting Started with the Pimoroni Inky pHAT: Pimoroni's beautiful 212x104-pixel, three-colour ePaper device is ideal for physical displays of data, as we explain in this guide

Reviews
81 Group test – Recovery Distros: ALT Linux Rescue, MorpheusArch Linux, SystemRescueCd and Ultimate Boot CD all promise to rescue and repair a Linux installation with the minimum of hassle
86 Acer Chromebook Spin 11: Does the Acer Chromebook Spin 11 have enough oomph to function as a modern notebook?
88 Linux Mint 19 Cinnamon 'Tara': This latest LTS release of the hugely popular 'Linux beginner's distro' has a lot to live up to
90 Fresh FOSS: Cinnamon 3.8.4, Crystal 0.25, uftpd 2.5 and Stacer 1.0.9 reviewed

Back page
96 Obnoxious: Another intriguing short story from sci-fi author Stephen Oram
06 News & Opinion | 10 Letters | 12 Interview | 16 Kernel Column
Distro feed: Top 10
(Average hits per day, 30 days to 29 June 2018)
1. Manjaro 4101
2. Mint 2508
3. elementary 1483
4. MX Linux 1349
5. Ubuntu 1191
6. Debian 1029
7. openSUSE 775
8. Fedora 696
9. Solus 691
10. Lite 609
OpenSource Your source of Linux news & views
Canonical shares desktop metrics

Canonical has revealed that 67 per cent of users opted in to the collection of desktop metrics when installing Ubuntu 18.04 LTS. Controversial upon its announcement, the Ubuntu Report tool was integrated into setup to record hardware and localisation options. Intended to help Canonical understand what systems are running Ubuntu – and how it can improve the OS – the decision to include the tool has reportedly turned long-term Ubuntu users away from the distro. That's despite assertions from Canonical that the reporting is not an actual privacy breach (no user-identifiable information is shared), and the fact that all data is available to review by the user before it's even sent.

While some of the information revealed is a little dry, Canonical has been able to get an idea of the current state of Linux PCs and laptops. Unsurprisingly, Ubuntu is used all over the world (this information was gathered via timezone selection – no IP addresses are stored), with the BRICS nations of Brazil, Russia, India, China and South Africa particularly keen on the operating system. Meanwhile, 90 per cent of users choose to download updates during installation, with 53 per cent requesting restricted add-ons. Perhaps worryingly from a security standpoint, 28 per cent choose automatic login.

Metrics also reveal that systems with 4GB and 8GB RAM are most common, as are single CPUs. Single-GPU systems also outweigh dual-GPU setups, but things are a little more complicated when it comes to hard disk storage. Canonical found that 21.89 per cent of users installed Ubuntu 18.04 LTS to a manually created partition table. This single revelation has already prompted Canonical to investigate how to simplify the process for Ubuntu 18.10.
Opinion
Demystifying Kubernetes
Making it work for you can take some effort, but it's worth it

There's unquestionably a heap of confusion around getting started with the now-dominant container orchestrator, Kubernetes. It's no longer a particularly new piece of software, but I think it's safe to say that when it was first released it appeared relatively arcane.

The confusion is certainly lessening, and exists for a number of reasons. Don't get me wrong; the Kubernetes community has done a great job in documenting the application, but the barriers to entry definitely still exist for some new users.

I recently saw a how-to article offering people with minimum Docker experience a way into Kubernetes – and that leads me to issue number one. The first prerequisite is having had a 'Eureka!' moment of how much containers change the way software releases work (and how they reduce night terrors when running infrastructure). This comes about after building and breaking Docker containers for a while and then muttering "Wow". I remember having this moment myself, even though I had used containers on a production estate in 2009. Granted, those containers weren't used for CI/CD pipelines, but you do need container experience from the off.

Secondly, there's no getting away from the fact that Kubernetes can grow monstrously complicated. I am referring to adding persistent storage, multiple cloud regions, several applications and extra redundancy to an otherwise default install. Integrating all the required bells and whistles to provide production viability can be mind-boggling. All this takes practice and, frankly, usually lots of trial and error.

Factor in the initial stumbling blocks with a number of moving targets, and setting up Kubernetes is no small endeavour. The targets that I refer to are there thanks to the (fantastic) rate of innovation that Kubernetes undergoes. On a regular basis, features become deprecated, syntax is no longer valid and brand new (necessary or nice-to-have) add-ons need to be learned promptly.

Compound those challenges with sometimes even more confusing developments in the software space surrounding the third-party tools that you integrate with Kubernetes, and I'm sure some people will agree it's a bit like running at full speed on a treadmill.

One saving grace – and again, don't get me wrong, as I am still a huge advocate of Kubernetes and believe it is the future for many reasons – is a lovely installation tool called Minikube. It takes almost all of the headaches out of the tricksy and frankly bewildering installation process you face when you first try a multi-region cloud installation. Of course the trade-off is that you only have a local copy of a Kubernetes cluster. For most people, however, I suspect that will whet their appetite enough.

The mighty Minikube punches well above its weight, and also conveniently installs itself into a local virtual machine to save headaches with the mountain of config and post-install cruft otherwise required.

Indeed, if you've got a laptop with the right CPU extensions for virtualisation – which are actually optional, but useful to have – and you haven't tried Minikube yet, you might just find your calling as a Kubernetes developer by visiting the Kubernetes site at http://bit.ly/lud_minikube. I can't recommend it enough. The path to enlightenment with Minikube is without question a shorter one than with other options. Try launching some simple services as containers once you're up and running, and I'm sure you will rapidly become a convert to this way of working.

Once you are comfortable with running a daemon set or two, a config map here and an ingress controller there, you could happily progress to other installation methods. My personal preference for your next step would be to try an AWS (Amazon Web Services) installation using the details highlighted on 'Picking The Right Solution' at the Kubernetes website, http://bit.ly/lud_solution.

I would opt for a cloud build so that mistakes can be rectified promptly and new builds tried again for trial and error. That ostensibly sounds simple, but given the list below showing actively maintained Kubernetes solutions, it will be worth reading up on them first to assess their suitability for your needs.

Once you're up and running you'll potentially benefit from resilient and scalable infrastructure running on different continents, which is hard to rival. And, ultimately, you'll appreciate what all the fuss is about.

Chris Binnie
Chris' book Linux Server Security: Hack and Defend teaches you to make servers invisible and mitigate attacks. See www.devsecops.cc

Quick list: Kubernetes solutions
• AWS
• Azure
• CenturyLink Cloud
• Conjure-up (Kubernetes with Ubuntu on AWS, Azure, Google Cloud, Oracle Cloud)
• Google Compute Engine (GCE)
• Kubermatic
• IBM Cloud
• Madcore.Ai
• Rancher 2.0
• Stackpoint.io
• Tectonic by CoreOS
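If you want to try that first 'simple service' once Minikube is running, a minimal session might look something like the sketch below. The deployment name and image are arbitrary choices, and the exact kubectl subcommands and flags vary a little between releases, so treat this as a rough guide rather than gospel:

$ minikube start                                    # boots a single-node cluster in a local VM
$ kubectl create deployment hello --image=nginx     # run a throwaway web service
$ kubectl expose deployment hello --type=NodePort --port=80
$ minikube service hello --url                      # prints a local URL you can curl
$ kubectl get pods                                  # watch the pod come up
$ minikube delete                                   # tear the whole cluster down again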
Comment
Your letters
Questions and opinions about the mag, Linux and open source
GitHub hokum
Dear LU&D, I read with interest your little editor's…

…of the argument: Those with long memories will struggle to forgive Microsoft's concerted attempts to kill Linux. But – as Collabora's Michael Meeks commented to me for this issue's interview – companies aren't a single human being (even if seen as a legal entity), and there have been Microsoft employees patiently encouraging the Redmond giant to embrace open source development for years.

Yes, Microsoft's change is motivated by pragmatism; it knows that to compete it needs to embrace open source, but the point is that it has changed and isn't vaguely interested in destroying Linux, especially now it's using slices of Linux and numerous open source projects for its own products!

Of course, there's a danger that a new Microsoft CEO might have different ideas or that, in the future, open source software falls out of favour with the business world – but that seems unlikely, as once it has been embraced the open source model just can't be beaten. Ultimately people will decide, and many are already moving to GitLab et al, but it seems deeply mean-spirited to beat Microsoft over the head for something it might do in the future.

Top tweet
@Pepulani7: @LinuxUserMag really glad Toni Castillo Girona dedicated an entire article on Metasploit shellcode in @LinuxUserMag issue #192. Been really looking forward to this!

Right Microsoft is trying to bolster its developer products and support, and GitHub was a good…
Above The command line isn’t a scary place. You can even
watch a text-based version of Star Wars A New Hope when
you’re bored: type telnet towel.blinkenlights.nl
Duplicating distros
Dear LU&D, I would love it if you would make a future issue that covered how to make your own Linux distro and/or desktop environment. Not necessarily writing one from scratch with C or something, but creating bootable media of your current system and making it installable. I would create a custom clone of…

…PinguyOSBuilder). This is actually a fork of remastersys, which was discontinued in 2013. It's a GUI-based system and very straightforward to use, although we can't guarantee that it will work; we've tried it on Ubuntu 18.04 and it did the trick. Alternatively, you can do all this malarkey manually by using a downloaded ISO of the distro you want to use, squashfs and mkisofs. Perhaps we need to do a tutorial on it?
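For the manual route mentioned in the reply, the rough shape of the job is sketched below. This assumes an Ubuntu-style live ISO where the compressed filesystem lives under casper/ – paths differ between distros, and EFI images need extra boot options – so adapt before relying on it:

$ mount -o loop distro.iso /mnt/iso                  # unpack the stock ISO
$ cp -a /mnt/iso /work/image
$ unsquashfs -d /work/rootfs /work/image/casper/filesystem.squashfs
# ...chroot into /work/rootfs and customise packages, users and configs...
$ mksquashfs /work/rootfs /work/image/casper/filesystem.squashfs -noappend
$ mkisofs -r -J -o custom.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table /work/image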
Interview: Collabora
Meeting Mr Productivity
We travelled to the Collabora offices in Cambridge to visit Michael Meeks, a veteran open source advocate and developer

Meeks is a mild-mannered heavyweight of the open source world, who has worked tirelessly for the GNOME project for many years and was an early employee of Nat Friedman and Miguel de Icaza (one of the founders of the GNOME project) when they set up Ximian to develop the Ximian Desktop. We interviewed Meeks in Collabora's Cambridge offices. For Meeks, Collabora is the culmination of a number of moves in the open source software business which saw him join Novell when Ximian was acquired in 2003, and work at SUSE until Collabora was formed.

Michael Meeks
Michael is general manager at Collabora Productivity, part of Collabora, an open source consulting business that supports many projects including LibreOffice, the Linux kernel, Wayland and more.

Tell us about your background and how you got into open source…
Sure. My mother taught me to program when I was very small, which was fun. It was a BBC Micro and I got really into it. I loved to write games. In my gap year I became a Christian, and it turned out that all the games I was writing used stolen compilers and stolen operating systems. It was all a bit risqué. So in the end, I switched to using this Linux thing, which at the time was absolutely terrible. It literally killed my hard disk the first time I installed [it]. There were no graphics worth using. There was no way you could be writing games.

Tell us more about what you were doing on the GNOME side of things.
My first contribution was for GCC [the GNU Compiler Collection]. I was just irritated that there was a sort of undefined behaviour thing happening but the compiler wouldn't tell you this. You turn all warnings on and nothing would go, 'Hey wait a minute, this could do almost anything.' So I implemented a patch for this; sent it to the GCC people; and I didn't get a reply for like two years. Two years later, I got a thing saying, 'Oh, I was looking into fixing this. I discovered there's a patch on the mailing list that does exactly this. There's really no problems with it. So I just fixed up the help and merged it', so that was not really a very good validation of the model. So I moved on. I installed GNOME, and I started playing with a Mahjong game, making it solvable and so on. And then I got into Gnumeric with Miguel [de Icaza] and hacked a lot on that, doing file-format reverse engineering. The first Excel filter is really open source – we could grab binary files and import them. And then the export filters were even more exciting, because it was before security became an issue for Microsoft, so there was no real auditing or fuzzing on their filters.

The typical export test was: look at the diff to their file [which prints the lines that are different] and then try to reduce it, and see if it crashes when it loads in Excel and see if it crashes after playing with it for a while. Because often you'd corrupt the heap, or something really bad would happen and subsequently you would crash – caused by your bad file format. So yeah, there were all sorts of horrors under the hood. The world has got much more secure in recent times. Now it tends to crash on import or tolerate bad input much better than it did in Microsoft Word and, of course, LibreOffice too.
Quick guide: Easy VBA migrator
A key feature of Collabora Office 6.0, which was released in June, is a new basic interoperability tool called COLEAT (Collabora OLE Automation Tool), which can be used as a drop-in replacement for legacy systems that use components such as VBA scripts. This is enabling many organisations to migrate to open source. As an example, Meeks says Ulster Hospital in Belfast, Northern Ireland, had a problem with a system called Patient Centre, created by Computer Services Corporation (CSC). The system was developed in Visual Basic 6, explained Meeks: "You can no longer really buy the compiler for VB6. You need a professional edition, and it's hundreds of dollars on eBay as Microsoft doesn't sell it any more. But this thing essentially screen-scrapes an HPUX mainframe service that sits at the back-end and it then provides all sorts of things for managing patient data. As part of that, of course, it integrates with Microsoft Office." COLEAT has enabled the hospital to migrate away, which Meeks says amounts to a migration of 8,500 people, with "several thousand" now using Collabora Office.

Above The API development work for COLEAT has opened the door for many organisations to consider an open source alternative to systems that use legacy VBA scripts and unsupported versions of Visual Basic
…we do is open source, so it's all contributed back to the community. And yes, we work very heavily on LibreOffice as well, obviously. We have two directors on the board there, and a membership committee member, and lots on the advisory board. We sponsor the conferences and the project. We've over a third of the commits to the code, and so on. Yeah, a huge investment there from Collabora.

With Collabora Office 6.0, you're calling it the migrator's choice.
Yes, migrator's choice for Collabora Office 6.0 – there's lots of good things in there. Lots of the work we do is consultancy work, but I think when we can invest in things ourselves, it's nice to do things that help people into the project.

One of the things there, for Windows users particularly, is that they suffer from these nasty little applications that people write using Visual Basic. Like parking ticket systems – every council somewhere is giving out parking tickets and every police department has one of these things. It's inevitably some bespoke tool that's written with a little database, a bit of Visual Basic.

To print your parking ticket, there's some kind of Microsoft Word integration and mail merge-y type thing at the back-end. Some of those things are so old, there's no source code for them any more. They're just gone. They cost money to create, so why would you pay more money? The problem is just the sheer number of diverse little things like this. It makes it hard for people to move to LibreOffice. So we have this interoperability API that allows us to look like Microsoft Office. These tools can run and not even be changed. You can drop Collabora Office in, and it'll just work nicely. That's called COLEAT – a Collabora OLE automation tool (see Easy VBA Migrator, above).

You can trace these things, and you can see what they're doing. You can actually see inside your application for the first time and see what silly stuff it's up to and replace it with Collabora Office.

Are there any features that you have on a list to target to make transitions even easier?
In terms of other features to target for people like that, I think interoperability is traditionally the bread-and-butter, the file format interoperability. And I think PowerPoint has been quite weak. We've done a whole chunk of work in 6.0 just going and finding all of the Microsoft templates; making sure that they import nicely; and that you don't get bits of the document that we don't understand. We export good open XML. Line spacing is fixed. There are a few self-inflicted wounds that we've fixed. I hope that it gets a lot better in 6.02.

But yes, just across the board, loads and loads of work in interoperability. SharePoint integration is something we've done quite a lot – IQY files, which people use for lists in their SharePoint service. We can download and render those on Linux as well, which is cool.

They have sort of queries built in. You get a spreadsheet view out of SharePoint giving you stuff. Yeah, just all sorts of things, richer AutoText. The problem partly with this is that it's hard to come…
We were keeping ahead of the other baddies out there who would find things and report them. But when Google introduced OSS-Fuzz, we now have a thousand-machine cluster that runs this incredibly wide, and that got us massively ahead. The slightly frightening thing is that you can run AFL and it makes progress through all these branches for weeks, and you think, "Ah, we've fixed them all." Then you throw it on the 1,000-CPU cluster and bang, you've got another one a day for the next week.

[There's also] a nice tool which basically simulates keyboard input – so you can turn your keyboard input into LibreOffice into a file format. This found, within a few hours, one of our most vile problems in Impress, where we had a copy/paste that would sometimes crash, but no one could reproduce it. This found it and let us fix it, just like that, with a reproducible test that you can push through. It's amazing tooling.

The next step is then to connect this with bisections. So when you find a regression, you can then find the commit it was, and automatically tell the cluster. But this is going to need another order of… you know, two orders of magnitude CPU compute! [laughs]

For LibreOffice, from a QA perspective, we also have this bisection. If individual users find a bug and it used to not be there, can they get it down to an individual commit, with the user using binaries? They don't have to compile anything. You pull the Git repository down. It's 3GB, 4GB, whatever, and then you can bisect in it, and you can look at versions in between. Of course, once you've found the developer who caused the problem – and this is much of the problem in programming, it turns out – you have someone who feels responsible, and also you've found exactly where it is. That's just been an incredibly powerful tool for getting bugs fixed and keeping quality while we speed up development.

"If people are paying large QA departments, they should be automating them out, because AI is coming to QA"

This is the stuff we do for our product in terms of Collabora Office. We do really quite a lot of work there, and lots of hardening. Collabora Office 6 had some things to turn off filters, so you can lock down and manage whole fleets of these things, and say "I don't want this Lotus Pro filter" or whatever, [as] it's had lots of problems. Then, of course, when we put that online, there's an even more heightened sense that it needs to be secure. So what we do is, we put each document inside its own chroot environment, like a container. So the documents can't see each other as they run.

We put almost nothing in that chroot. There's no shell, there are no binaries you can run, really – there's a PDF converter. But there's almost no code, no libraries. If you break out into this thing by getting it to execute an assembler in some weird way, you're not going to see anything. There's nothing there. Except for fonts, and the document you send them already. You can see your own document. Hooray.

Beyond that, we then have a seccomp [secure computing] filter. We put this Berkeley Packet Filter [BPF] around all your system calls, so the system call interface is shrunk to a tiny amount to, again, stop all of those system calls that have in the past offered local privilege vulnerabilities to exploit.

Also, there are almost no device nodes in that chroot. There's a random node. There are very, very few things that you need. But there's no "Here's a view of all your disks, and hey, why not have all your network adaptors, too?" None of that is there. Which is kind of helpful. Again, we're massively constraining that attack surface.

So the expectation is, even if you break out, even if you write a macro that does something stupid – we should be able to enable macros. It's an option, actually, in Collabora Online, that you can have those macros running. Even if you do something stupid in a macro, nothing should happen. As soon as any bad system call happens, we just pull the plug. The whole thing is just torn apart and destroyed. So yeah, it's quite fun. My hope is that it's reasonably secure. Thus far, we haven't seen many security problems there, inside Online.

Quick guide: Learning quantum computing in LibreOffice
LibreOffice appears in some surprising places – that's something you learn when you spend time with Michael. While we're queuing up for lunch, Meeks casually mentions that D-Wave Systems, the world's first quantum computing company, uses LibreOffice to run its training programme. It turns out that teaching people a radical new way to model problems on a quantum computer isn't that easy. Meeks tells us that the company uses a simulator fronted by an XLSM file (an Excel Macro-Enabled Workbook) that dynamically generates elements in the document, and this uses VBA scripts. As LibreOffice now supports VBA, the suite turned out to be the quickest and most efficient way to get the job done.
Above If you want to learn how to use a quantum computer from D-Wave, you'll learn via a simulator that's using LibreOffice
Opinion
The kernel column

Jon Masters
Jon is a Linux-kernel hacker who has been working on Linux for more than 22 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy-efficient ARM-powered servers.

Linus Torvalds has announced Linux 4.18-rc4, noting that things "look pretty normal here, and size-wise this looks good too". Release Candidate 4 (rc4) represents about the mid-way point of a kernel development cycle, which typically lasts a couple of months, during which the RCs come weekly. These follow the closing of the 'merge window', a period of time during which disruptive changes are merged that are subsequently soak-tested prior to final release. The 4.18 merge brought in a number of new features, including Mathieu Desnoyers' 'Restartable Sequences' (which we'll cover in more detail next issue) and some additional Spectre security fixes for the Arm architecture.

One important removal in 4.18 is support for the Lustre filesystem. Lustre had been a part of the 'staging' sub-tree maintained by Greg Kroah-Hartman (Greg-KH). The purpose of allowing code in the kernel staging directory is that it be available to those who want to consume not-quite-fully-baked new drivers and the like at their own risk. Lustre never quite fell into that category, since it has a very active user community and by many accounts is already in widespread production. But the code used in those deployments is based upon an out-of-tree external version of Lustre, not the one in the kernel. After a number of years tolerating this messy situation, the kernel community finally got tired and decreed that Lustre should figure out where it wants to live, and go solve that before coming back if it wants to be upstream.

The 4.18 merge window actually closed a day earlier than some might have been expecting. Customarily, new kernel releases come on a Sunday, but Linus had announced 4.18-rc1 from Japan while on a whirlwind tour of Asia. He also noted that he recently hit his 15-year anniversary of working for the Linux Foundation. Time really does fly!

Speculative buffer overflows
Another month, another computer microarchitecture vulnerability with a potential impact upon the Linux kernel. Vladimir Kiriansky and Carl Waldspurger published a paper entitled Speculative Buffer Overflows: Attacks and Defenses. The basic assertion is that the industry's previous understanding of Spectre variant 1 (so-called 'bounds check bypass') was incomplete. In fact, a number of sub-variants of Spectre v1 exist, such as the ability to perform speculative buffer overflows.

We have covered Spectre (and Meltdown) in quite some detail before. As a refresher, these vulnerabilities affect computer microarchitecture, which is the implementation of a computer architecture, such as x86 or Arm, in a specific chip family from a given vendor. Spectre v1 in particular affects speculative execution, a mode of high-performance microprocessors during which they attempt to guess ahead about which path a program will take following a branch condition, before having all of the information necessary to know for sure that this is the case. Usually, speculation is supposed to function as a 'black box'. If the speculated program flow turns out to be correct, it becomes committed 'architectural state' visible to the programmer (and user), but if it is incorrect, the speculated state is thrown away.

Spectre previously proved that cache side-channel analysis, during which carefully crafted code monitors the behaviour of shared memory resources to time what has recently been accessed, can be used to infer data that has been loaded as a result of speculation. In the original Spectre v1 exploits, it is observed that a high-performance microprocessor might respond to a program bounds check, such as if (x < 10), by speculating that x is less than 10 even if it is not. In the case of the kernel, code might be speculatively run using a user-supplied x through a system call that is unsafe beyond the intended range. When combined with a suitable sequence of instructions following the bounds check, it might be possible to leak sensitive kernel data to an attacker.

We previously mitigated this in the kernel by using tools such as Dan Carpenter's Smatch to look for these vulnerable code sequences, and inserting 'clamping' masking macros following the bounds check that force the speculated value to lie within the range of the bounds check.

The latest Spectre v1 twist acknowledges that following such a bounds check, there might be stores (writes) to memory that could be abused under speculation similar to a conventional buffer overflow – except in this case, the buffer overflow only happens speculatively, as does the overwrite of data structures following the buffer. While all of this will be thrown out once speculation is unwound, there is a time window during which kernel-code control flow could be speculatively resteered based upon this overflow and cause data leakage.
As a result of Vlad's work, which apparently won a significant bounty prize for its discovery, the existing mitigations are being updated to account for the sub-variant, adding additional uses of kernel macros such as array_index_nospec that perform a clamp on the upper bound. You can read Vlad's paper at https://people.csail.mit.edu/vlk/spectre11.pdf.

The Linux Kernel Mailing List (LKML) finally has an official archive. For many years, various individuals and groups maintained copies of the list, including gmane.org, marc, and others, but there was never something maintained on the official kernel.org infrastructure. That has now changed with the unveiling of lore.kernel.org, which features an online web-accessible archive as well as Git-clonable copies, and a news-compatible reader at nntp.lore.kernel.org for us old-school types.

Matthew "Willy" Wilcox posted yet another iteration of his 'Convert page cache to XArray' patch series. As mentioned previously, XArray is an attempt to replace the traditional implementation of radix tree data structures in Linux with a structure that scales better and is designed with the actual real-world uses of radix tree in mind. Matthew is requesting inclusion of XArray in Stephen Rothwell's 'linux-next' kernel soak-testing tree ahead of an effort to merge it in 4.19.

Chang S. Bae (Intel) posted an updated patch including "infrastructure to enable FSGSBASE". The new FSGSBASE 64-bit instruction set extension in x86 allows the x86 segmentation registers (an archaic anachronism that still exists in some limited form on 64-bit x86) to be updated from less privileged levels. This means that the kernel entry code can be modified to save a SWAPGS instruction on every system call, reducing overhead.

Yu-cheng Yu (Intel) posted four different patch sets laying the groundwork to support Intel's upcoming Control Flow Enforcement (CET) technology. CET is a "processor family feature that prevents return/jmp-oriented programming [ROP] attacks". It's also fundamentally incompatible with some of the Spectre v2 mitigations such as retpoline, so the rollout is being handled with some care. Nonetheless, CET is a great technology and we will cover its upstreaming progress in more detail in a future column.

Baolin Wang (Linaro) posted a patch series entitled "Add persistent clock support", which led to a spirited discussion about how Linux handles timekeeping sources across architectures and power states. The generic concept of a timekeeping source is represented in Linux using the 'clocksource' framework. This is how your laptop keeps its view of time in sync with the RTC (Real Time Clock) chip, which is used to persist timekeeping across reboots and suspend/resume cycles. In some cases, a hardware platform may need to use a bespoke combination of time sources that differ between runtime and suspend time. Baolin wanted to propose a new framework ('persistent clocks') to do this, but Thomas Gleixner shot this down quite passionately, dismissing the patches and suggesting that we should "look at the problem itself". He noted that when multiple clocksources are in use, the existing framework can be enhanced rather than adding a new framework. Baolin said he would change course.

Michael Kelley (Microsoft) posted patches entitled "Enable Linux guests on Hyper-V on ARM64", which "enables Linux guests running on Hyper-V on ARM64 hardware". The patches plug into all of the existing Hyper-V frameworks, adding new hypervisor 'hypercalls' that are Arm-specific, while updating other previously x86-only code to make it cross-architecture.

Andrey Ponomarenko announced that "a new project has been created to collect the list of computer hardware devices with poor Linux compatibility based on the Linux-Hardware.org data". This has been tried before; the latest effort can be found at https://github.com/linuxhw.
Feature Sysadmin Survival Guide
At a glance
• Storage & disaster recovery, p20 – Manage storage devices and develop a good backup strategy.
• Administer with Fabric, p24 – Make changes to network machines with this helpful Python library.

The field of Linux system administration has changed tremendously over the last few years. Be it the exodus to cloud computing or the proliferation of virtualised components and containerised apps, modern IT infrastructure has gone through a paradigm shift in a short span of time. No surprise, then, that organisations are on the look-out for administrators who are equipped with the necessary skills to handle this shift in enterprise technology.

Matt Simmons, Linux systems administrator for a leading American aerospace provider, believes system administration can be classified into three broad categories. What he calls traditional system admin is "where you only program to automate small things on a per-machine basis. The kind of role where you only write quick shell scripts to back things up, and maybe to grab some monitoring data that you throw into Nagios, Icinga, or something like Cacti. Your machines are probably mostly physical." He adds that admins of such networks spend most of their time on managing resources per machine, and a sizeable chunk of their budget goes towards procuring hardware or updating it: "This is the minority, and it is shrinking."

The other end of the spectrum is all cloud: "You have automated jobs creating images regularly according to controlled rulesets, so that when you need to create a machine, it happens without any doubt as to what is on it and what it is doing, and the machine shuts down as soon as is possible; maybe because that service isn't needed any more, or maybe because the next image is ready to take over for it," says Matt. He adds that admins of such deployments spend their time looking for ways to optimise spending and consolidate and eliminate provided services. According to Matt, this kind of administration is the minority as well, but is growing rapidly.

The majority that seems to be holding steady, in Matt's opinion, is a meeting of the two. He says the role in this segment has "probably shifted from 'systems administration' to what might be considered 'IT Operations'". Admins here use configuration management, and will have some services in the cloud: "You write automation whenever you get the chance. You probably work almost exclusively from tickets, you have CI/CD workflows for at least some of your code, and if you have a team, you probably do some sort of team programming and code review. Your machines are almost all virtual, and provisioned automatically without much, if any, interaction from you."

Abhas Abhinav, founder and hacker-in-charge of DeepRoot Linux, who supports over 250 servers for clients all over India, says that as system admin is getting easier, the level of commitment and respect for the "craft of system admin" isn't what it once was. "On the other hand, [because of] the fact that systems are now easier to provision or procure because of the ubiquitous nature of VMs, many system admin skills are not getting enough attention. Today rarely do people have to get acquainted with boot loaders, kernels, filesystems and device drivers, and this limits the sort of problems they can manage and solve."

If you are a new system admin, this feature shows you what you need to know to get started. For the more experienced, it equips you with knowledge of the tech you need to be familiar with to stay relevant.
Storage & disaster recovery

Data is one of the most critical parts of any business, which means managing it is one of the core responsibilities of any system administrator. The key to a good storage management policy is that it should strike the right balance between keeping the data safe yet accessible.

Before you can design a storage policy for an organisation, you need to understand how Linux deals with storage and familiarise yourself with the tools at your disposal. All it takes is a couple of clicks to add storage to a cloud server: provision a drive and then attach the disk to an existing VM. In the real world, however, you'll first have to decide on the type of device (e.g. hard disk, SSD) along with the interface that connects it to your server (e.g. USB, SATA, PCIe). When the Linux kernel detects a device, it tells udev, which then creates a representation of that device in the /dev directory. These device files are the way the kernel provides access to devices for applications and services. The storage devices are called block devices, and a good practice is to refer to them using their Universally Unique Identifiers, or UUIDs.

The next piece of the puzzle is partitioning – or rather its more dexterous cousin, logical volume management (LVM). While the traditional partitioning schemes MBR and GPT are much simpler, they are rather inflexible. LVM, on the other hand, combines one or more devices into a single logical volume group. You can then dynamically create, resize and delete volumes in a volume group to reallocate space. It also offers other benefits, such as snapshot management. Linux systems give you several options for partitioning, with parted and its graphical frontend gparted leading the pack. To manage volumes you need to master lvm.

While you can (and should) supplement all storage mechanisms with backups, it's always a good idea to use a RAID system to distribute data across multiple disks. With the plummeting cost of storage, a RAID setup helps avoid data loss and also minimises the downtime associated with hardware failures. While RAID can be implemented by dedicated hardware, it can also be implemented in software. Use the mdadm command to build and use software RAIDs. No matter how you implement it, make sure you are well versed in the different RAID levels, so you can pick the one that works best for your organisation.

Another popular storage deployment skill that should be in the repertoire of an admin is the ability to set up a network-attached storage (NAS) device. OpenMediaVault (www.openmediavault.org) can make this process effortless. It supports all the popular deployment mechanisms, including software RAID, and can be accessed using all the leading network protocols (see the walkthrough below).

Expert tip: RAID is not backup
Don't use mirrored RAID as an excuse to avoid taking backups. You don't want to be in a situation where a fried machine renders both disks useless.
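To put the tools mentioned above in context, here is one plausible way to combine mdadm and LVM – a sketch only, in which the device names, sizes and mount point are examples rather than recommendations:

$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # mirror two disks
$ cat /proc/mdstat                                   # watch the array build
$ pvcreate /dev/md0                                  # put LVM on top of the array
$ vgcreate vg_data /dev/md0
$ lvcreate -n lv_backups -L 100G vg_data
$ mkfs.ext4 /dev/vg_data/lv_backups
$ lvextend -r -L +50G /dev/vg_data/lv_backups        # grow the volume (and its filesystem) later
$ blkid /dev/vg_data/lv_backups                      # grab the UUID to reference in /etc/fstab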
How to: Create a NAS with OpenMediaVault

1 Install OMV
Installing OMV is pretty straightforward. Download the ISO, transfer it to a USB disk and run through the basic steps. Since you have multiple disks connected to the NAS box, make sure you select the right installation target during the partitioning step.

2 The web interface
When you're done, reboot the computer, which will drop you to the login shell. Don't log in, but instead head to the web address displayed on the console. This brings up OMV's web interface, from which you can manage all aspects of the NAS server.

3 Set up storage
You'll first need to format the drives before you can use them. Head to Storage > Disks to view all the attached disks. Select the drive and click the 'Wipe' button. After you've erased a drive, head to Storage > File Systems to create a file system on the drive.
Below Grsync is a graphical front-end to rsync that exposes virtually all of rsync's command-line options

Before you can press the disk into active service, it needs to have a file system on it. Most Linux distributions default to the EXT4 journalled filesystem, but there are several others that you should be familiar with. There's NTFS, which is useful for interoperability with Windows machines, and XFS, which works well for housing database files. The needs of organisations are, however, much better satisfied with next-generation file systems, such as ZFS and Btrfs, which perform better than traditional file systems and are more serious about data integrity.

Once you've got a handle on the storage systems in the organisation, it's time to work out a plan for backing up all the data. Irrespective of the technology that powers it, you'll first have to spend some time answering questions like:

• What data is to be backed up?
• Where will backup data be stored?
• How often will backups be performed?
• How long will backups be retained?
• How will backups be accessed or restored?
• What system or technology will perform the backups/restoration?

The answers to these (and other) questions depend on various factors, such as cost and the amount of acceptable downtime. For example, the availability advantage of remote cloud storage diminishes with an increase in the amount of backed-up data.

There are a plethora of reputable open source backup tools, and a number are designed specifically to handle large amounts of data. One of the industry favourites is rsync, which scales well and offers enough dexterity to power desktop backup apps as well as enterprise ones like BackupPC. Mastering rsync is one of those evergreen skills that'll never go out of vogue.

Quick tip: rsync and BackupPC
rsync is one of the handiest tools for copying and syncing files across the network. But if writing scripts isn't your forte, you can deploy and master BackupPC, a backup app written for the enterprise that uses rsync in the background.
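As a hedged illustration of the kind of rsync job that underpins such tools, the commands below pull a remote /etc into dated snapshot directories, hard-linking unchanged files against the previous run. The host name and paths are placeholders, and --link-dest assumes a previous snapshot already exists:

$ rsync -aAX --delete --link-dest=/backup/web01/latest \
      root@web01:/etc /backup/web01/$(date +%F)/
$ ln -sfn /backup/web01/$(date +%F) /backup/web01/latest   # point 'latest' at the new snapshot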
4 Use RAID
Instead of using the disks individually, OMV can tie them into a software RAID. Head to Storage > RAID Management and select the disks you want to use in the RAID, as well as the RAID level. Wait for the RAID to initialise before creating a file system.

5 Users and shares
Head to Access Rights Management > User to add or import users. Next, you'll have to add shared folders from the Access Rights Management > Shared Folders menu. Use the pull-down menu to select the volume for the folder.

6 Enable shares
Finally, enable a network service – which users will use to access the shared folders – under the Services section, and register the shared folders with the service under the Shares tab. Your users can now access these shared folders across your network.
Network essentials
Networks are a crucial building block of a modern-day organisation, and managing them well is an essential skill

Although administrators interact with real-world network hardware less frequently than they once did, familiarity with traditional networking still remains a crucial skill. In addition to grasping the underlying technology and the protocols at play, you must know how to configure your network peripherals and how to troubleshoot the connections. You should be familiar with the process of configuring networking in Linux and also have knowledge of the configuration files involved.

The nature and composition of your organisation will heavily influence how you set up your network. A business that is starting out often has only one main server that pretty much provides all the networking infrastructure: a DNS, DHCP, file, mail and web server all rolled into one. Very few larger businesses would trust their entire IT infrastructure to an individual host – most admins would avoid such a single point of failure – but a small business can't afford a separate host for each individual service.

Irrespective of the size of the network you manage, the one networking skill that you must grasp firmly is tuning the firewall. The Linux kernel uses Netfilter to facilitate key network processes such as Network Address Translation (NAT), packet filtering and packet mangling. The iptables command is the user-space management tool for Netfilter. Many distributions have graphical tools for configuring the Linux firewall, and the excellent Shorewall utility (http://shorewall.org) makes it easy to configure the firewall. Still, do yourself a favour and spend time tinkering with the iptables command to really appreciate its usefulness.

Another aspect of the modern workspace is that it extends beyond the confines of the physical office. If your users are not located locally, you need some way of connecting them as if they were local, by setting up a virtual private network (VPN). A VPN is, in essence, a private network that runs over a public network, and one of the best tools to help you get the job done is OpenVPN. The package is available in the official repositories of all mainstream Linux server distributions. Once you have it up and running, spend some time configuring it to securely expose the resources in your organisation's network to other users over a public network.

While it's always a good idea to configure and roll out these key network infrastructure components manually, there are several specialist distributions, such as ClearOS (www.clearos.com) and NethServer (www.nethserver.org), that are designed to deploy many of these common services.

Quick guide: Master Webmin
Webmin (www.webmin.com) is a configuration tool that can be used to control all aspects of your remote server, such as setting up a cron job, reading logs and managing running processes. Using it, you can dispense your admin duties from a web interface. Instead of manually editing configuration files and fiddling with command-line switches, Webmin helps you configure different aspects of your system, which then automatically update the relevant underlying config files. Webmin can manage network services as well as the host system. For instance, you can use the tool's interface to create and configure virtual hosts for the Apache web server, and set up a Samba file-sharing server just as easily as you can create and manage user accounts and set up disk quotas. Webmin's intuitive dashboard contains a list of modules, each of which is responsible for managing some service or server, such as the Apache web server, the firewall, software packages and so on. When installed, Webmin reads config files for all servers and services on your system from their standard install locations.
Above Webmin saves you the trouble of having to memorise numerous parameters

Above You can use server distributions such as ClearOS to deploy network services with ease
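To give a feel for what 'tuning the firewall' means in practice, a minimal iptables policy might look something like the sketch below. The open ports are purely illustrative; add the accept rules before switching the default policy so you don't lock yourself out of an SSH session:

$ iptables -A INPUT -i lo -j ACCEPT                           # always allow loopback
$ iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -A INPUT -p tcp --dport 22 -j ACCEPT               # SSH
$ iptables -A INPUT -p tcp --dport 80 -j ACCEPT               # HTTP
$ iptables -A INPUT -p tcp --dport 443 -j ACCEPT              # HTTPS
$ iptables -P INPUT DROP                                      # then default-deny everything else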
Shape traffic
Netfilter is a complex firewall application that can regulate network traffic and also shape it. Shaping network traffic, or packets, involves altering the stream of packets passing through the network. Admins usually use it to regulate the bandwidth for certain types of connections. A simpler utility for shaping bandwidth is trickle, with which you can manage the upload and download speeds of applications in order to prevent one from hogging all the bandwidth.

Trickle regulates speeds by delaying the data transferred over a socket. It provides an alternate version of the BSD socket API, with the effect that socket calls are now handled by trickle. Speed is limited by controlling the amount of data written to or read from a socket.

You can run trickle in either standalone mode, for shaping individual downloads, or configure the trickled daemon to set global limitations and priorities. For example, the following command

$ trickle -s -d 30 ncftpget ftp://ftp.foo.bar/Linux/somedistro.iso

invokes trickle in standalone mode (-s) and sets the download rate to 30KB/s for downloading the specified ISO image file, using the ncftp non-interactive FTP download client. When you invoke trickle without the -s switch, it checks to see if the trickled daemon is running, reads the /etc/trickled.conf file and applies any rules it finds listed there. By default the file has an example entry that'll make sense after a cursory look at trickle's man page. Browsing through the file, you'll realise that trickle can do a lot more than just set upload and download speeds. It can also define a per-application priority, along with time- and length-smoothing parameters. Time-smoothing enables you to define the time interval for the application to transfer data. Large values will produce bursts in sending and receiving data, while smaller values ensure a smooth and continuous data transfer.

Trickle does have some limitations. For one, it can only work on TCP connections, so you can't use it to regulate a UDP stream connection such as DNS. Furthermore, since it uses dynamic linking and loading, it can only work with applications that use dynamic libraries (glibc). Statically linked applications are thus not compatible with trickle. To determine if you can use trickle with an app, use the ldd command, which gives you a list of all shared libraries. For example:

$ ldd /usr/bin/ncftp | grep libc.so

Expert tip: No slashing
When using rsync, don't put a trailing slash at the end of the source directory. This copies the directory along with its contents, instead of scattering the files inside the backup directory.
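To complement the standalone example above, global limits come from running the daemon yourself. The rates below are placeholders, and the exact flags can differ between versions, so treat this as a sketch and check trickled's man page on your distro:

$ trickled -d 2048 -u 512                          # cap cooperating apps at 2MB/s down, 512KB/s up overall
$ trickle wget http://example.com/big-file.iso      # launched via trickle, so it obeys the daemon's limits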
Products: Key infrastructure tech
Learn to administer these four useful server apps

1 OpenLDAP (www.openldap.org)
All companies need a directory server, and OpenLDAP is the popular open source implementation of the Lightweight Directory Access Protocol (LDAP). Despite being lightweight, it's a fully featured suite of apps and development tools.

2 Kolab (www.kolab.org)
Kolab is an open source, unified communications, collaboration and groupware suite. It's built with security, privacy and user control in mind and provides solutions for an email server, directory service, web service, calendar, tasks, address books and more.

3 Docker (www.docker.com)
The shiny new and popular piece of tech for managing and running apps in virtual containers is also a boon for admins. You can use it to deploy conflict-free and production-like development environments for your users.

4 MariaDB (https://mariadb.org)
Developed by the original authors of MySQL, MariaDB is one of the most popular database servers in the world. Many companies use it directly or via apps built around it, so it pays to make sure you are familiar with it.

Above Instead of configuring iptables manually, you can use a specially tuned firewall distribution such as IPFire (pictured above, www.ipfire.org) or the graphical Shorewall utility
Automation
Smart admins are lazy – they automate mundane repetitive tasks using scripts

As a savvy admin you should try to automate all your regular tasks, such as flushing caches and removing temp files, locking out idle accounts, backups and so on, using scripts. Scripts standardise these administrative chores and free up admins' time for more pressing tasks. You should spend time hashing out a plan to perform repetitive tasks with as little involvement or intervention as possible.

Bash and Python are two of the most popular scripting languages for automating system administration tasks. Generally speaking, lengthier scripts for important tasks such as backups and user resets are executed on a schedule. To that end, spend some time getting familiar with the time-based job scheduler, cron.

Automate admin tasks
Conduct admin business over several machines with Fabric's Python library

from fabric.api import run

def uptime():
    run('uptime')

1 Define a function
Enter these lines in a file called fabfile.py and then call the function with fab uptime. You can also run functions on multiple remote servers with fab -H localhost,192.168.0.100

2 Change context
Fabric's context managers are used with Python's with statement. You can use the settings context manager if you temporarily need to run a command as a different user, such as with settings(sudo_user='mysql'):.

from fabric.api import *

env.hosts = ['192.168.0.100', '192.168.0.101']

def upgrade_all():
    sudo("apt update")
    sudo("apt -y upgrade")

3 Update several machines
The env.hosts variable lists all computers in your network. When you call the function with fab upgrade_all it'll connect to the remote machines, refresh their repos and install any updates.

from datetime import datetime
from fabric.api import get

now = datetime.now()

def fetch_logs():
    get(remote_path="/tmp/logs.tar.gz",
        local_path="/logs/{}_log.tar.gz".format(now))

4 Download remote logs
The get command is useful for downloading logs from a remote system. This example saves them to the local PC with a timestamped file name.

Expert tip: Matt Simmons on scripting
The best work is work that you don't have to do. Eliminating work isn't lazy, it's smart. Cut extraneous work and concentrate on only that which needs to be done.

Another task that you must automate is the installation and maintenance of the computers in your network. Manual installs don't scale and are prone to errors, especially when done repeatedly. Fully Automatic Installation (FAI, https://fai-project.org) is a set of Perl scripts that enables you to run an unattended Debian install. Similarly, Kickstart helps you automate the installation of RPM-based systems such as CentOS and Fedora. Kickstart can install not only the operating system but also all the applications you expect to run.

To make everything run as quickly as possible, you can store Kickstart files and Linux packages on a local Apache web server, which then serves as your network installation server. To build your installation server, start by installing a barebones CentOS system. Once the hardware and operating system are set up, make sure the server has a fixed IP address, say 192.168.1.100. Then install the Apache web server from the distribution's repositories.

Next, copy over the installation files from the CentOS DVDs to this server with something like rsync -arv /media/CentOS_7 /install. Ideally you should set up the /install partition on a separate disk or a logical volume for more flexibility. If you create the install directory outside Apache's document root directory (/var/www/html/), you must create a configuration file to point the web server to the correct directory, such as:

# nano /etc/httpd/conf.d/install.conf
Alias /install/ /install/
<Directory /install>
    Options Indexes
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Before you can access your install server, you'll have to make sure the firewall on the installation server allows HTTP traffic over port 80, using iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT.

Other machines on your network will now be able to connect to the installation server. Next you need a kickstart file to automate the installation. The most convenient way to create one is to use the graphical Kickstart Configurator tool from the repositories. Besides the package selection, you need to pay attention to two important things in kickstart files that work with an installation server: the installation method and the network setup.
24
important things in the kickstart files to work
with an installation server: the installation
method and the network setup.
The clients we’ll be setting up should
fetch packages from the network server
we’ve just set up, instead of from the CentOS
installation DVD. To change the installation
medium, head to the Installation Method
section in the Kickstart Configurator, toggle
the HTTP setting and enter the location of
the installation server – http://192.168.1.100/
install/ – in the adjacent textbox. Then jump
to the Network Configuration section and
make sure that the network device you add
uses the DHCP server.
Place the generated kickstart files under
the web server’s document root directory,
in a directory such as /var/www/html/
kickstarts. To install a client using the Above Generate as many kickstart files as you want, each with different package selections
kickstart files, boot from the minimal boot
CD, and at the boot screen point to the Pre-Execution Environment (PXE). A PXE
kickstart file you want to use, such as linux server acts as the network boot server and Expert tip
ks=http://192.168.1.100/kickstarts/ broadcasts a DHCP ‘discover’ request that Abhas Abhinav, DeepRoot Linux
gnome-workstation.cfg. Once you have includes the name of the boot server and Automate everything that you would
everything set up, rolling out new machines the boot file. The PXE-enabled remote client need to do more than once. After five
responds to the request, downloads the times, generalise it. And then, share it
boot file and then executes it to begin the under a free-software licence.
The clients we’ll be installation, which itself has already been
automated by you.
setting up should fetch install on any system from which you wish
Configuration tools to manage clients. Both Fabric and Ansible
packages from the That takes care of the deployment. But what rely on SSH to conduct their business, and
if you have to make a configuration change you can install them using your distribution’s
network server in a deployed machine, such as the firewall package manager.
change we listed earlier to allow incoming One difference between Fabric and
HTTP traffic? There are several tools Ansible is that you can get started with
becomes just a matter of pointing to the including Puppet, Chef, Foreman, SpaceWalk, Fabric without much delay, while with
correct kickstart file. You can set up new Ansible and Fabric that help you orchestrate Ansible you’ll have to spend some time
machines, reset old ones for new employees, various configuration operations. setting it up. However, many experienced
or change a web server into a mail server The advantages of Fabric (www.fabfile.org) admins consider Ansible to be ideal for large
with a single command. and Ansible (www.ansible.com) is that unlike and complex networks since it can better
While the installation is automated, the other solutions these two have no server model multi-tier infrastructure. On the other
you still have to initiate it manually. The daemons and are agent-less. In essence hand, Fabric uses Python for authoring
key to fully automating the process is the they’re really just a set of commands that you which is much simpler to understand
products
Run automated installs
Save time and effort by automatically provisioning machines using these solutions
1 2 3
MAAS Foreman Fog
www.ubuntu.com/server/maas www.theforeman.org https://fogproject.org
Canonical’s MAAS is easy to set Foreman’s a popular choice for With Fog you can image an
up and can provision Ubuntu, CentOS, provisioning machines and can deploy installation and deploy it to other
RHEL and SUSE servers. It was designed lots of mainstream distributions. One of computers on the network. It also makes
to compliment Canonical’s service the reasons for Foreman’s popularity is light of regular administration tasks such
orchestration framework called Juju that interoperability; it can work with a host as installing software, and can manage
enables you to easily deploy services with of popular application provisioning tools large networks that may be spread over
its charms architecture. such as Puppet and Chef. multiple locations.
www.linuxuser.co.uk 25
Feature Sysadmin Survival Guide
P
erhaps the one biggest reason for
the move towards computing in the
cloud is convenience. You get
technically superior access without the
overhead of maintenance and upkeep. Cloud 3 1
computing offers several other advantages
to the modern IT infrastructure, and like any
other resource on the network, it has to be
looked after by the system administrator.
There are various models of cloud
services. Of these, Infrastructure-as-a-
Service (IaaS) is the most pertinent to system
administrators. IaaS enables businesses to 2 4
purchase resources on demand, as and when
required, instead of having to buy hardware
outright. These are typically delivered in
the form of virtual private servers (VPS)
that are made up of virtualised peripherals.
5
You should have the skills to evaluate IaaS
providers and select the one that best meets
the requirements of your organisation.
Virtually all cloud vendors have a web-
based graphical interface for interacting with
the virtualised resources. The good ones quick guide
even have APIs that enable you to access all Manage KVM with virt-manager
their functionality via scripts for automation.
The Virtual Machine Manager is the open 1 VM details Click this button to view
source graphical front-end for creating details about the virtual hardware
VirtualBox’s KVM-based VMs, and is available as attached to the VM
virt-manager in the repositories of most
VBoxManage CLI can distributions. The app uses the qemu- 2 Virtual Hardware You can configure
kvm hypervisor, which is a version of the virtual peripherals or add new ones
be used to automate QEMU machine emulator modified by the
KVM developers. 3 Graphical console Use this to peek
essential VM tasks The virt-manager app is written in inside a running VM via VNC or SPICE
Python and is very intuitive to operate.
You can use the app to easily create 4 Create VM: This button brings up a
The best way to get to grips with this new new VMs, monitor them and make five-step wizard to create new VMs
dimension of the IT infrastructure is to sign configuration changes. virt-manager also
up for an account and leaf through their APIs includes a VNC and SPICE client that 5 List of VMs: virt-manager’s main
to understand how to integrate them with displays a full graphical console to the dashboard displays all added VMs
your scripts and other automation tasks. running VM. running on local and remote hypervisors
KVM, or Kernel-based Virtual Machine,
has been the default hypervisor on Linux
since 2007. It is a set of kernel modules that servers that have processors with hardware can use to automate essential VM tasks
when loaded converts a Linux server into a virtualisation extensions, either AMD-V or such as snapshots.
hypervisor. KVM depends on libvirt, which Intel VT-x. VirtualBox (www.virtualbox.org) The more useful virtualisation tech you
provides a convenient way to manage VMs is another hypervisor that you’ll need to must get to grips with is Linux containers,
and other virtualisation functionality, such as familiarise yourself with for the sake of end especially Docker (www.docker.com). Docker
storage and network interface management. users on your network. The good thing about enables you to bundle any Linux app, with all
You can only create KVM-based VMs on VirtualBox is its VBoxManage CLI that you its dependencies, into its own environment.
26
docker
portrainer.io
quick tip
Docker and Portainer.io
Using Docker via the terminal isn’t all
that cumbersome and the tool is well
documented. To make your life easier you
can use Portrainer.io (https://portainer.
io), which is an open source, web-based
graphical front-end that supports all
Above You can grab a host of Debian-based appliances that have been optimised for virtual machines features exposed by the Docker API.
and the cloud from www.turnkeylinux.org
You can then run multiple instances of the the attributes of the VM. If you want to make You can now modify the Vagrantfile to
containerised app, each as a completely changes to a VM, you’ll need to edit it. You’ll customise the installation. If you are running
isolated and separated process, with near- need to be well-versed with the anatomy of a web server inside the VM, you’ll want to
native runtime performance. a Vagrantfile in order to modify or create one forward port 80 by adding this line:
as per your needs. For now just use vagrant
Provision virtual up to create an actual VM from this file. This config.vm.network "forwarded_port",
Next you should familiarise yourself with tells Vagrant to create a new VirtualBox guest: 80, host: 8080
Vagrant (www.vagrantup.com), which helps machine based on the base image specified
you make consistent virtual environments in the Vagrantfile. It’ll do so by copying the You’ll now be able to access port 80 on
available to your users with a few keystrokes. virtual hard disk files from the remote server. the guest via port 8080 on the host. Type
Vagrant supports all major virtual platforms vagrant up to start the VM once again. You
such as VirtualBox and VMWare, and plays can also make changes to the Vagrantfile
nicely with all the well-known software You’ll need to be while the VM is running; in that case, type
configuration tools such as Chef, Puppet, vagrant reload to load the VM with the
Ansible, Fabric and more. well-versed with modified settings. Similarly, you can put all
To get started with Vagrant, fetch and the operations you might do with a freshly
install the latest binary from its website. the anatomy of a minted CentOS box – such as installing some
You’ll now have to create a Vagrant apps – inside a Bash shell script and then
configuration file that defines all the Vagrantfile to edit it point to it from the Vagrantfile with:
characteristics of the VM from a template.
You can search for templates based on config.vm.provision "shell", path:
various operating systems and pre-defined Once it’s done, you’ll have a fully featured "post-install.sh"
purposes from https://app.vagrantup.com/ CentOS 7 VM running headless in the
boxes/search. So for example, vagrant init background. Use the vagrant ssh command Now bring up or reload the VM. Thanks
centos/7 creates a VM based on the CentOS to connect to the machine via ssh. You can to this line, Vagrant will execute the post-
7.5.1804 distro. now interact with the VM like any other install.sh script after the VM is up and
The command creates a file called CentOS installation. When you’re done, running. There’s a lot more you can do with
Vagrantfile under the current directory, and type exit to drop back to your host and use Vagrant, though, so make sure you read
is the main configuration file that defines all vagrant halt to turn off the VM. through its documentation.
www.linuxuser.co.uk 27
Feature Sysadmin Survival Guide
E
verything from databases to A good sysad should be fully aware of the
daemons keep logs of their activity limited usefulness of this data, in terms of quick guide
and managing these is as time, and must automate the process of Analyse with Logwatch
important a function as deploying servers. filtering and summarising it using tools such
It’s the responsibility of system as Logwatch (see right). Furthermore, it’s One of the least interesting part of an
administrators to gather actionable bits of important to note that the syslogd daemon admin’s job is to scroll through logs in
information from this deluge of messages. is a passive tool that awaits inputs from order to spot potential issues. This is
Log data usually ends up in the /var/log apps and doesn’t go out and actively query where log filtering tools like Logwatch
directory in a variety of files usually placed them. For this you need a monitoring tool come in. It parses, analyses and filters
by the syslog daemon. Syslog has been the logs, and then generates daily reports
standard Unix format for several years, but on your system’s log activity. The utility
there have been a couple of replacements, Syslog has been the is available in the official repositories
with rsyslog being the most recent and of all popular server distributions,
popular one. Make sure you are fully versed standard Unix format including CentOS and Debian.
with the configuration of the logging daemon
on your servers. for several years
Another aspect of managing logs is to
control their size and archive duration, which
for some types files such as access logs is like Nagios (www.nagios.org), one of the
mandated by regulatory policies. To help you most popular and extensively used network
keep the logs manageable you need to rotate monitoring tools, which you can use to
them, which involves flushing the old ones streamline the management and monitoring
to an archive file. The logrotate utility is a of your network. You can use it monitor nearly Above Logwatch reports are categorised by
popular choice for this task as it implements all devices and services that have an address services; you can control the level of verbosity
a variety of log management policies. and that can be contacted via TCP/IP.
how to
Use osquery
SELECT * from logged SELECT * from SELECT * from kernel_ SELECT pid, name,
_in_users; iptables; info; uid, resident_size from
SELECT * from last; SELECT * from SELECT name, size, processes order by
listening_ports; used_by, status from resident_size desc limit
kernel_modules where 10;
status="Live" order by SELECT name, path,
size; pid FROM processes WHERE
1
Check on users on_disk = 0;
You can use osquery
2 3
to keep an eye on the Check firewall Check kernel
4
users logged into your servers These queries help You’ll also want to Watch processes
with the first command shown keep an eye on the run these queries The first query
above, while the second shows firewall. If the first one doesn’t periodically and compare their displays the 10
the list of previous logins. Look produce any output it means output against older results for largest processes arranged
for logins from unknown IP there’s no firewall – which isn’t any changes. The first retrieves by size, while the second
addresses, and even more so if a good thing, especially on a information about the current displays processes that don’t
there are multiple users logging server. The second lists all the kernel to help identify outdated have a binary associated with
in from an unfamiliar host, listening ports and will help you kernels, while the second lists them. You should immediately
which should be a red flag. find back-doors on the server. all loaded kernel modules. terminate that process.
28
ON SALE NOW!
30
Native Union Eclipse Charger
3-port, high-speed USB charger with cable management
get a
gift worth
£69.99
www.linuxuser.co.uk 31
Feature Machine learning in security
Special report:
ML in security
While GDPR raises concerns at InfoSec Europe in
KEY INFO
Infosecurity Europe is London and the shift to cloud technology continues,
the largest event for
information security AI, automation and machine learning gather pace
professionals on the
L
continent, with over aurent Gil started to get driving tickets issued stalwart Elastic were promoting the need to think
400 exhibitors and in Los Angeles. He didn’t live in LA, but the differently and store more data for threat-hunting.
over 19,500 visitors. person using his driving licence number did “Everyone limits their data acquisitions,” Neil Desai,
and had obtained it from the high-profile Equifax hack. Solutions Architect for Elastic, told us after his talk on
Understandably, Gil’s not a fan of non-AI-based security detecting with ML. Security has lived in this peculiar silo
platforms, because he says it only took one person in a for a long time, where limited slices of data have been
200-strong team at Equifax to forget to patch Apache used to hunt down attackers. Storing terabytes of logs
Struts for his licence to be stolen. Gil also happens to just hadn’t been practical for many companies, and the
work as a Product Strategy Architect at Oracle, and was more data that was stored the worse the search
at InfoSec Europe 2018 in London speaking on Machine performance became, taking hours to
Learning and how Oracle + Dyn uses ML for detection as complete. However, times have changed
part of its DDoS protection and malicious bot mitigation. and admins “need to be able to search
Although there was a strong preoccupation with GDPR all the data in seconds or minutes,
and how to secure data at InfoSec Europe this year, AI, not hours”, says Desai.
automation and machine learning are evidently growing Another issue for security
trends in the information security business. This is not professionals is visibility, he
surprising given the industry’s traditional view that the explains. Half the traffic on the
adversary tends to have all the fun. A bad actor can internet is now protected by
employ all manner of tricks and pick away at the defence, HTTPS, according to the EFF’s
while the defenders try to make the best of limited last report in early 2017.
resources and cover all attack surfaces. If this didn’t complicate
Machine Learning, in particular, promises to even things enough, Desai says –
the playing field and companies such as open source quoting Matt Graeber, world-
32
renowned pen tester – “A sufficiently advanced threat
actor is indistinguishable from a competent system spotlight
administrator.” Which effectively means you’ll be lucky Fighting botnets with ML
if an attacker trips your security-related devices and
generates security-related events. Unless you are able, Once he’d stopped
or legally allowed to, decrypt SSL traffic, your chances of shaking his fist at
finding something is quite restricted, says Desai. Equifax, Laurent Gil
(right) presented
a case study of
Attackers are hitting a stopping a nasty
botnet attack on
single host and hopping “a very famous media
site in the UK” using
laterally until they get to the Machine Learning.
AI is good at identifying attacks through
information they want similarities, but this attack “was almost
impossible to identify, as they were burying
Additionally, the traditional approach of pulling a single request within millions of requests,”
logs from the data centre and the perimeter of your explained Gil. First, they got lucky spotting an
architecture via event, firewall, IDS and netflow logs just early-morning attack, but they wanted to see if
isn’t effective any more. Desai says that attackers are they were any patterns to what was inside the
hitting a single host and hopping laterally until they get packets. Traditional techniques didn’t work in this
to the host that either has the information they want, case, such as limiting the number of requests by
or access to the information they want to steal – while IP address or geolocation, as the IPs were spread
the admin is none the wiser because they are too far in across different addresses and countries. “We
front. The only appropriate response to such threats is couldn’t even look at user agents as the guys
to collect logs from a company’s clients themselves for were very smart and using browsers that looked
analysis, which could mean hundreds or thousands of real,” explained Gil.
workstations. In the past, searching this amount of data But what finally clinched it was ML analysis of
would be impractical and also create volume and scale the HTTP headers: “We found that the botnet was
issues. However, ML solutions thrive on data, and data using a particular header in the HTTP requests in
that’s stored for longer, as it enables an investigator to a certain order.” So the team were able to inject
follow the trail, find the initial host and infection point. vectors into its machine learning techniques
that used 126 binary features and catch all the
Machine learning for all bad actors with 100 per cent probability: “All the
Like many ML products in different fields, Elastic wants 14,544 malicious requests were caught.” said Gil.
to bring ML to the masses without the need for a data “Even the hacker probably didn’t know there was
scientist on staff who understands the mathematics an order to the way they did the headers.” For Gil
and algorithms involved. In fact, ML aims to democratise this was a perfect example of what ML in security
threat-hunting. There’s a recognised four-step process is about: “Reducing human error while enabling
followed by many threat hunters, which was originally and enhancing human oversight.”
defined by Sqrrl – a company spun out of the NSA
specialising in advanced threat-hunting. This involves
creating: a hypothesis for an investigation; consideration events. So once the initial fields are defined, ML will
of the right tools to use; what the patterns and TTPs come up with a baseline for determining what’s normal
(Tactics, Techniques and Procedures) of a cyber attacker or expected, but you don’t have to know everything about
that you’re looking for might be; and some kind of the process and the algorithms involved. The use of
automated analysis. Desai says that Elastics’s ML can automation, orchestration and AI are seen as significant
handle everything beyond the hypothesis, which means drivers for change in security, and as companies seek to
that threat-hunting can done by anyone once the initial employ the right staff with the appropriate skills, they
fields that ML analyses are defined. Previously, says are increasingly experiencing a skills shortage. According
Desai, only the senior staff got to do the whole process, to 2018’s report from ISACA – a non-profit members
because they have the experience and an understanding association for information governance, control, security
of what’s going on. and audit professionals – 59 per cent reported having
ML also goes beyond the regular search methods that unfilled security positions, while only 25 per cent of
an admin would do to determine if their hypothesis is applicants were ‘well qualified’ for the role. ML could
correct, such as bar charts and pie charts, basic stack effectively fill some of the gaps, but as Desai commented
analysis and basic graphing. Desai says Elastics ML in regard to SOCs (Security Operations Centres), an initial
can uses normalisation, baseline analysis (to identify alert could be chalked up as a false positive through lack
starting points) and statistical analysis on time-series of understanding by a tier-one analyst.
Tutorial Essential Linux
Resources
Git
If not already
installed, install
through your
package manager
or from https://git-
scm.com
OpenSSH
Above We can track any number of remote repositories from a Git project. Typically, one is named origin, and is the primary place for
Access to a sudo- us to push code, but more complicated setups are possible – and useful
privileged user on a
remote server So far, we’ve been using the GitHub.com site to host our Now that Microsoft has acquired GitHub, we know that
(recommended but Git repositories. GitHub is a great site, and it’s a very many readers have objected to using the hosting platform
not essential) quick and easy way to get a project up and running with anymore and are considering other options. Although
Git. It’s no surprise that it’s used to host a whole range of Micosoft has changed its positon on open source
repositories, from simple personal projects to the code software significantly in recent years, we’ve decided to
for Git itself and the Linux kernel. included some alternatives to GitHub.
Nevertheless, Git is not tied to GitHub, and there
are a number of reasons you might decide you want to Simulate a remote server
host a Git repository on your own server. One reason This tutorial will work best if you have a remote server set
is if you want to make a repository private. By default, up with SSH access, so that you can shell into the server
repositories hosted on GitHub are public, so anyone can by running the following command (replacing myusername
see the code. You can make a repository private, but only and myremote.example.com with the appropriate
if you pay $7 per month for a Developer account, or $9/ username and hostname for your server).
month for a Team account.
You might say that this is money well spent – and $ ssh [email protected]
rightly so – but if you have space on your own server, and
don’t need any GitHub-specific tools, it is also money that However, if you haven’t got access to a remote server, you
you don’t need to fork out. Self-hosting means you are in can simulate one locally. That way, at least you’ll know
complete control over who has access to your data. It’s what to do if you do ever need to set up a Git repository
also surprisingly easy to do, as we shall see. on a remote host. To start with, edit the /etc/hosts file:
34
$ sudoedit /etc/hosts Left Adding public
Figure 1
The file should contain something like the following: keys to ~/.ssh/
$ ssh-keygen authorized_keys
127.0.0.1 localhost.localdomain Enter file in which to save the key (/ on the remote server
computer home/user/.ssh/id_rsa): enables users to
::1 localhost.localdomain Enter passphrase (empty for no authenticate with
computer passphrase): their private keys
rather than have to
Enter same passphrase again:
Below these lines, add the following: remember a password
$ cd ~/.ssh
127.0.0.1 myremote.example.com $ cat id_rsa.pub >> authorized_keys
$ mkdir -p ~/git-repos/sky-fire-4000.git
$ ssh -i id_rsa [email protected] $ cd $_
If this doesn’t work, you might need to start the SSH Once inside, we run the following command to initialise
service first. Try sudo service ssh start, or sudo a bare repository.
systemctl start sshd if you are using Systemd.
The first time we connect, SSH will prompt us to accept $ git init --bare
the RSA fingerprint for the new host. Type yes.
If you type ls, you’ll see some strange files and
Create a user for Git directories inside: branches, config, description, HEAD
It’s a good idea to create a new user to control the Git and so on. Actually, these are not so strange: if you go
repositories, so that later on we can allow people to push inside any Git repository and look inside the hidden folder
code without giving them full SSH access to the machine. .git, you’ll see exactly the same files and folders inside
Log on to your remote machine and create a new user there. These folders hold all the information about the
there in the usual way, giving it a strong password. If Git repository, but in a special compact form. Normally,
you have followed the previous section to create a fake these folders (the Index and HEAD) are hidden inside the
remote host on your local machine, you only need to .git directory, so we only see the files in the working
create a new user on your own machine: directory when working with a GitHub repository. The git
www.linuxuser.co.uk 35
Tutorial Essential Linux
36
appears, you don’t need to modify /etc/shells further.
Figure 5
Otherwise, you should append the absolute path to the The centre
git-shell executable. Since gituser has not got sudo does not hold
access, you’ll need to log in with a privileged account in You’re not
order to carry out these next few steps: required to
have one
$ ssh [email protected] central remote
$ cat /etc/shells # If git-shell already repository you
appears, then skip the next command. push to as Git is
$ which git-shell | sudo tee -a /etc/shells decentralised.
# Do not omit the -a, or you will lose the In fact, for some
existing contents of /etc/shells! projects, it might
be preferable
Still logged on with our privileged user, we can now run want to access gituser through another shell – for to have no
the following command to change the shell for gituser to example, in order to create a new repository. In that case, centralised
git-shell: assuming we still have access to a sudo-privileged user repository at
on the remote server, we can log in with that user and all; in this setup,
$ sudo chsh gituser -s $(which git-shell) then switch to gituser with the su command, using the each collaborator
$ logout -s option to change the default shell: has their own
copy of the
The first thing that we notice is that SSH access to $ ssh [email protected] repository, and
gituser no longer works: [myusername]$ su gituser -s $(which bash) they use tools
Password: such as Git’s
$ ssh [email protected] [gituser]$ git init --bare ~/git-repos/sky- e-mail support
fatal: Interactive git shell is not fire-8000.git to transfer
enabled. commits around.
hint: ~/git-shell-commands should exist and This means we have the best of both worlds: we have full At the other end
have read and execute access. access to the gituser account, but our collaborators only of the scale,
Connection to myremote.example.com closed. have the partial access provided through git-shell. it might make
sense to use
We can still, however, perform any legitimate Git-based Convert a code base to a Git repository multiple remote
activities through the gituser user: So far, we have assumed that we are using Git with our repositories.
repository from the beginning. In an ideal world, this We can use
$ mkdir sandbox && cd $_ would be the case, but it is probably more common that git remote
$ git clone [email protected]:/ we start creating a code base, and only later decide add/remove
home/gituser/git-repos/sky-fire-4000.git to convert it into a Git project in order to share it with to give them
others. Git provides a nice mechanism for doing that. all descriptive
If we make some modifications, we see that we can still Suppose we have a directory full of articles about names.
push them to the remote server: Linux and that we want to convert it into a Git project:
$ ssh [email protected]
$ su gituser -s $(which bash)
$ git init --bare ~/git-repos/linux-
articles.git
www.linuxuser.co.uk 37
Tutorial Essential Linux
We now log back out of the remote host and into our about the name origin, but it tends to be used for
local copy of the repository. At the moment, we have one the server where our repository is hosted. In addition,
empty bare repository sitting on the remote server and whenever we use git clone to clone a repository, Git
a second normal empty repository on the local machine. always marks the location we cloned from using the
While there is code in the local working directory (namely name origin.
the Linux articles that we started with), it hasn’t been Now, all that remains to do is to perform an initial
added to the index, so as far as Git is concerned, these commit that contains all the files that we already had in
are both empty repositories. the directory, and push that to the remote server.
We can now run the git remote add origin
command inside the local repository to tell Git that the $ git add .
Below right Finding remote repository is the host for the local one. From $ git commit -m "Initial commit of existing
out your public IP inside the linux-articles directory, use: Linux articles."
address requires $ git push -u origin master
you to connect to an $ git remote add origin gituser@myremote.
external service. An example.com:/home/gituser/git-repos/linux-articles If we want to be able to run git pull, we also need to
alternative to using tell the repository that origin is the default upstream
Google is to run Technically, this command isn’t doing anything special: location. We run the following commands for that.
the command curl all it does is let us use the short identifier origin rather
ipinfo.io/ip than the long address. If we were to omit this command, $ git config branch.master.remote origin
we’d have to type the whole address every time we $ git config branch.master.merge refs/heads/
wanted to push something to the server: master
Bottom right GNU
Savannah hosts a $ git push -u [email protected]:/ Use Gitea
number of important home/gituser/git-repos/linux-articles master Gitea (pronounced ‘git tee’) is an open source Git hosting
GNU projects, as package, written in Go, that makes it easier to host your
well as projects The git remote command enables us to maintain a list own Git projects. If you miss GitHub’s functionality, such
contributed by the rest of remote branches with short identifiers. We can use as the nice graphical interface or the ability to make
of the Free Software git remote add to add a new address and git remote pull requests, you might like Gitea. Using Gitea is, in a
Community remove to remove an existing one. There’s nothing special sense, like hosting your own private version of GitHub.
com: you can allow people to create user accounts,
fork repositories, submit pull requests and so on. It is
GNU Savannah intended to be a lightweight solution, and can be run on
Savannah (https://savannah. at http://http://savannah.gnu.org/ simple hardware such as the Raspberry Pi.
gnu.org) is a version control site maintenance/HowToGetYour To get started, log on to the machine where you want
developed by the GNU foundation. ProjectApprovedQuickly. It’s to host your repository. We found that the easiest way
As you can see from Figure 7 the certainly more work than starting to install Gitea was to download the binary from their
site interface is a bit different a project on GitHub, but the website like this:
from GitHub and Gitea. To get process means that users can
started, you’ll need to go to https:// have increased confidence in your Figure 6
savannah.nongnu.org/account/ software and be sure that it is truly
register.php. Creating your own free. Savannah prohibits history-
project and hosting it on Savannah rewriting commits to ensure that
is a bit harder. the full version history of software
Being so closely linked to is always available.
the Free Software community, If you are comfortable with the
Savannah is committed to hosting slightly zealous focus on the Free
only software projects that are Software movement, and if you can
free, depend only on free software get your project approved, you’ll be
and run on free operating systems. part of a devoted and prestigious Figure 7
Unlike GitHub, the software that software community that includes
Savannah runs on is itself free the repositories for all the GNU
software, as are all its components. software that we use every day in
This means that if you want to Linux. These include things such as
create a project on Savannah, you’ll the Bash shell (https://savannah.
need to satisfy the Savannah team gnu.org/projects/bash) and other
that your project is free software essential programs such as the
from top to toe. GNU C compiler (https://savannah.
There’s a helpful guide to getting gnu.org/projects/gcc) – so this is
your Savannah project approved big-league stuff.
38
GitLab and Bitbucket
GitLab (www.gitlab.com) is a direct them everything they need. It’s
competitor to GitHub and a similar worth noting that Gitlab.com runs
244 words
site in many ways. Unlike GitHub,
the Community Edition is free and
the Enterprise Edition, so you can
get the extra features for free if you
open source, and also enables don’t care about self-hosting.
you to create unlimited private Bitbucket (https://bitbucket.
repositories. GitLab has a web org) is another GitHub alternative,
interface and its own hosting, like owned by Atlassian. You can use
GitHub, but it is also possible to Bitbucket online for free, although
use it to self-host repositories, very you’ll have to pay if you want to host
similarly to what we did with Gitea your own instance (as with GitHub).
in this article. If you’re interested in Bitbucket enables you to create
self-hosting with GitLab, there are an unlimited number of private
very good instructions available at repositories for free, but they’re
https://docs.gitlab.com/ce/install/ limited to five contributors each.
README.html. We recommend the We enjoyed using Bitbucket,
Omnibus installation if you’re using though we found it was lacking in
a compatible Linux distribution. certain features: there is no option
We were impressed with the to create new branches, and the
range of features available in issue tracker is so-so. Part of the
GitLab, even in the free Community reason for this is that Bitbucket
Edition, and we think that most is intended to be integrated with
developers will find that it gives other Atlassian products.
$ wget -O gitea https://dl.gitea.io/
gitea/1.4.2/gitea-1.4.2-linux-amd64
$ chmod +x gitea page, as shown in Figure 5. Here, we can create new Top left Bitbucket
repositories, or migrate them from GitHub or other sites. enables you to host
You can then run the following to start the program: Sharing this site with other people is a routine port- private repositories
forwarding job. First, we need to find out our internal for free, and has
$ ./gitea web IP address. We can find that by running the ifconfig great integration with
command, and looking for the entry corresponding to our products such as Jira
In a browser, navigate to http://localhost:3000. You current network connection. In our case, the first couple and Trello
should see a form as in Figure 4. The first thing we need of lines of that entry were:
to do is to set up a database for Gitea to use internally. Above left GitLab may
Gitea supports MariaDB/MySQL, SQLite, TiDB and others, wlp2s0: flags=4163<UP,BROADCAST,RUNNING not have a community
and it’s up to you to set this up. As an example, if you ,MULTICAST> mtu 1500 the size of GitHub’s,
have MariaDB set up and configured with a root user, you inet 192.168.1.7 netmask but it could be a better
can run the following to create a database called gitea 255.255.255.0 broadcast 192.168.1.255 choice, especially if
and a user, also called gitea, to administer it. you need a private
and the IP address we want is the one immediately repository
$ mysql -u root -p following inet – 192.168.1.7. To carry out the actual port
mysql> CREATE DATABASE 'gitea' DEFAULT forwarding, we recommend using MiniUPNP (http://
CHARACTER SET 'utf8mb4' COLLATE 'utf8mb4_ miniupnp.free.fr). Having installed it, run the following
general_ci'; command to broadcast your Gitea instance on port 617,
mysql> CREATE USER 'gitea'@'localhost' replacing 192.168.1.7 with your own internal IP address.
IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON 'gitea'.* TO $ upnpc -a 192.168.1.7 3000 617 TCP
'gitea'@'localhost';
mysql> \q Lastly, we need to find our public IP address, which we
can do by searching “what’s my IP address” on Google,
You should replace password with an actual password. as in Figure 6. Supposing it’s 2.4.6.8, for instance, if
Now go back to the page http://localhost:3000/install somebody now logs on to a computer running on a
in your browser and type in the details for the user and different network and navigates to http://2.4.6.8:617,
database you’ve just created. You should be able to they’ll get to our Gitea instance, and can create their
leave the other details as they are; scroll down and press own account. Gitea does not have the same features as
‘Install’ at the bottom. Git for browsing through repositories, but if we publicise
This takes you to a page where you can create a user the URL of our repositories to other people, they can fork
account. Create one, and you’ll be taken to the main them and submit pull requests exactly as on Github.
www.linuxuser.co.uk 39
Tutorial Azure PowerShell
Azure Resource
Manager overview
http://bit.ly/
azure_ov
40
Translating the above to the appropriate package
manager commands, apt in this example, we can execute
Figure 1
the following from our local shell environment.
Module Cmdlet categories of Interest
sudo apt-get update
sudo apt-get install curl apt-transport-https AzureRM.Profile. Account, Context, Environment, Error,
curl https://packages.microsoft.com/keys/ Netcore Subscription and Tenant
microsoft.asc | sudo apt-key add - AzureRM.Compute. Availability Set, Container Service, Disk,
sudo sh -c 'echo "deb [arch=amd64] https:// Netcore Image, Snapshot, Virtual Machine among
packages.microsoft.com/repos/microsoft-debian- other entities
jessie-prod jessie main" > /etc/apt/sources.
AzureRM.Network. Application Gateway, Route, Load Balancer,
list.d/microsoft.list'
Netcore Network Interface, Security Group, Network
sudo apt-get update
Watcher, Virtual Network and VPN Client
With package sourcing now in place, the next step is AzureRM.Storage. Storage Account and Storage Account Key
to install the PowerShell binaries using your package Netcore
manager’s install command. In the case of Debian AzureRM. AD Application, Managed Application,
or Ubuntu, this is the well-known apt-get install Resources.Netcore Resource Group and Resource Group Template
command sequence supplied with the -y switch to (ARM)
auto-confirm prompting:
AzureRM.Websites. Web App
sudo apt-get install -y powershell Netcore
Once installation completes, fire up PowerShell using PowerShell modules that make up Azure PowerShell. Use Above Azure
the pwsh command to establish a PowerShell session. the cmdlet Get-Command to inspect the various cmdlets PowerShell’s five main
To verify the PowerShell installation, type the following: included in each of the modules. cmdlet modules
For example, to inspect the cmdlets in module
$PSVersionTable AzureRM.Compute.Netcore, issue the following command
statement from the PowerShell command prompt:
This will output PSVersion information along with
PSEdition, which shows as Core. Traditional UNIX tools Get-Command -Module AzureRM.Compute.Netcore
are simultaneously available from PowerShell, as
demonstrated by executing cat to print Debian OS details This will command output several cmdlets including New-
to console: AzureRmVM, which is used to create virtual machines.
www.linuxuser.co.uk 41
Tutorial Azure PowerShell
Right Azure has an extensive set of command line options making Figure 2
infrastructure it flexible enough to create required dependent
infrastructure resources as necessary (see Figure 2),
resources use in the
Virtual Machine Resource Group
creation of a
virtual machine
or to reference existing resources during the virtual-
machine creation process.
Name
Real-world scenarios often require that dependent Virtual Network Subnet Configuration
infrastructure resources be instantiated and managed Public IP Address DNS Name
independently to maximise control over specific creation
parameters and shared infrastructure configurations. Inbound Rules Network Security
We’ve created a custom script called AzMyVmCreate.ps1, Group
available on the coverdisc or from FileSilo, which provides Network Card Login Credentials
an example of creating an Ubuntu Server virtual machine
with full enumeration of virtual network, IP address,
subnet configuration, in-bound rules, network card, Note that it’s also possible to create virtual machines
network security group and login credentials. using other Linux images; see the Linux Offerings box
The cmdlets New-AzureRmVirtualNetwork, (far-left) and table (below-left) to learn how to identify
New-AzureRmPublicIpAddress, New- virtual-machine SKU details for other Linux distros.
AzureRmNetworkSecurityRuleConfig,
New-AzureRmNetworkSecurityGroup, New- Scripting VM creation
AzureRmNetworkInterface, New-AzureRmVMConfig and AzMyVmCreate.ps1 culminates in the code block shown
New-AzureRmVM execute sequentially as part of this full directly below where the virtual machine configuration
enumeration example. For reference, the relevant parts is instantiated and passed to New-AzureVmVM to do the
of the custom script which calls these particular cmdlets main work of virtual machine creation:
looks like this:
Linux ..
offerings $virtualNetwork = New-AzureRmVirtualNetwork $vmConfig = New-AzureRmVMConfig -VMName
available -Name $virtualNetworkName -ResourceGroupName $vmName -VMSize Standard_D1 | '
in Azure $uniqueGroupName -Location $groupLocation Set-AzureRmVMOperatingSystem -Linux
Marketplace -AddressPrefix "10.0.0.0/16" -Subnet -ComputerName $vmName -Credential $cred | '
A variety of $subnetConfig -Verbose Set-AzureRmVMSourceImage -PublisherName
Linux virtual .. Canonical -Offer UbuntuServer -Skus 16.04-LTS
machine images $publicIp = New-AzureRmPublicIpAddress -Version latest | '
are available for -ResourceGroupName $uniqueGroupName -Location Add-AzureRmVMNetworkInterface -Id $nic.Id
downloading $groupLocation ' -AllocationMethod Static ..
from the Azure -IdleTimeoutInMinutes 4 -Name $publicIpAddress New-AzureRmVM -ResourceGroupName
Marketplace. .. $uniqueGroupName -Location $groupLocation -VM
These images $nsg = New-AzureRmNetworkSecurityGroup $vmConfig
are prebuilt -ResourceGroupName $uniqueGroupName -Location ..
from a number $groupLocation '
of publishers. In -Name $networkSecurityGroupName Execute the script from your Linux-based Azure
the case of Linux -SecurityRules $nsgRuleSSH PowerShell context. The input arguments expect the
this includes .. following parameters:
OpenLogic, $nic = New-AzureRmNetworkInterface -Name
which offers a $networkInterfaceCardName -ResourceGroupName param(
prebuilt version $uniqueGroupName -Location $groupLocation ' [string]$resourceGroupName,
of CentOS; -SubnetId $virtualNetwork.Subnets[0]. [string]$resourceGroupLocation,
credativ, which Id -PublicIpAddressId $publicIp.Id [string]$vmComputerName,
maintains -NetworkSecurityGroupId $nsg.Id [string]$vmUser, [string]$vmUserPassword
a Debian )
offering in the
Marketplace; Publisher Name Offer Name To execute this script at the PowerShell prompt, type the
and Canonical, following command:
which offers
OpenLogic CentOS
Ubuntu Server. credativ Debian ./AzMyVmCreate.ps1 "WebGrpA" "NorthCentralUS"
The table to the
Redhat RHEL "WebUbPZ2" "adminuser" "s0m3Passw0r3"
right includes
publishers SUSE SLES Obviously, you’ll want to change the password to a
and offers for SUSE openSUSE-Leap more secure and personally memorable version. Upon
common distros. successful execution of this script, a XML file is written
Canonical UbuntuServer to the local working directory containing the identifiers
42
of dependent objects captured during resource creation.
Figure 3
The contents of the text-based XML file includes a list
of identifiers of dependent objects created, including
the resource group, virtual network, IP address, network
interface settings and VM name. Inspect the contents
using cat:
cat vui741900344-WebUbPZ2.xml
Figure 3 shows the contents of the text-based XML file Figure 4 Above File output
generated by AzMyVmCreate.ps1. of the custom
PowerShell script
Declarative structure AzMyVmCreate.ps1
The previous example demonstrates how to create an
Azure resource using imperative Azure PowerShell Left ‘less’ output from
commands. However, Microsoft’s approach to creating the cmdlet Export-
resources such as virtual machines is not imperative, AzureRmResource
butrather declarative. Group
The foundational entity of the preferred declarative
approach is the Azure Resource Manager template, a
JSON file that defines one or more resources to deploy
to a resource group. It also defines the dependencies
between the deployed resources. This template-based
approach helps make it possible to more consistently Start-AzureRmVM and Stop-AzureRmVM for such
and repeatedly deploy cloud resource configurations that administrative or controlling actions. To stop a running
are highly dependent and/or complex. The skeleton of a virtual machine in Azure, use a cmdlet such as Stop-
JSON template file is: AzureRmVM as follows:
www.linuxuser.co.uk 43
Tutorial Computer security
VirusTotal
www.virustotal.com
Censys
https://censys.io/
api
Shodan
https://developer.
shodan.io
Koodous
https://docs.
koodous.com/rest-
api/getting-started
md5decrypt
http://md5decrypt.
net/en/Api
Above Yes, using public APIs instead of their web interfaces is uglier – we’ll give you that
44
hash as a prefix to the API call. All of the hashes on the
website are stored in upper case, so you have to convert
the lower-case hash to upper case before using the API:
That’s it! You will get a list of possible hashes that have
this prefix, along with the number of times they have
been seen. Then it’s up to you to locally iterate over all of
these possible hashes (on average, 478) to see if there’s
one that matches up. Automating this is easy. We have
written a simple Bash script (see pwnedpasswordchk.sh
on the coverdisc) that can look for a particular password
or for a bunch of them stored in a file (pictured, right).
Get the script from the coverdisc along with the provided
file passwords.txt and run it this way:
./pwnedpasswordchk.sh -f passwords.txt
pam-auth-update utility (see man pam-auth-update). Above The look
We dare you to check some of your old passwords using However, our module is basically a proof of concept so in on your face once
this, by the way… order to enable it you have to edit /etc/pam.d/common- you found out your
So far so good, but we can do better. Let’s imagine you password and add the following line before the first password was on
want to make sure your GNU/Linux users don’t choose pam_unix.so entry by hand: this database…
passwords that have been pwned. Linux Pluggable
Authentication Modules (PAM) are a perfect technical password requisite pam_havebeenpwned.so
solution for this. Every time a user runs the passwd
command to change their password, a PAM module Next, append the following option to the pam_unix.so
should query the API and reject this new password if entry: try_first_password. With these changes, our
it’s found in the dataset. Writing PAM modules is out of module will be the first one being executed whenever
the scope of this tutorial (although we will cover it soon), the passwd command runs. If it succeeds (that is, if the The REST
but suffice it to say we’ve implemented such a module password has not been pwned yet), it will stack the new security cheat
for you. Bear in mind that messing up with PAM can password with a call to pam_set_item(), ready to be sheet
used by the next module in line, which happens So far you have
to be pam_unix.so. Thanks to its try_first_ been consuming
Public APIs are front-ends to password option, this module will try first REST APIs. At
the previous stacked module’s password, some point,
massive databases holding vast which happens to be the one our module has though, you
just stacked. If it satisfies it, this module will may be willing
amounts of valuable data return PAM_SUCCESS and /etc/shadow will to develop and
be updated (see man pam_unix). All set! Now deploy your
it’s testing time. Spawn a new terminal within own. As with
easily lead to a total system lock-down or even worse, your VM and run the passwd command. Type your current everything else
a compromised system. So don’t use this module on a password and then type mariobros as the new one and in this field, you
production system and test it on a virtual machine first. press Return: must adhere to
This module is far from complete, but it will give you good practices
an idea of what you can do with public APIs. Get the New password: and ensure your
module sources from the coverdisc (havebeenpwned- THIS PASSWORD HAS BEEN PWNED! API is secure by
pam) and upload it to a Debian-based virtual machine. passwd: Authentication token manipulation providing access
Install the necessary packages to build the module first: error control checks,
apt-get install libssl-dev libcurl4-openssl- encryption, input
dev libpam-dev. Our module uses curl to make the Our module returns PAM_AUTHTOK_ERR and /etc/ validation and so
API call and OpenSSL to generate the SHA1 hash of the shadow is not updated. Try now with another password on. The OWASP
password typed by the user. Next, build and install the (hopefully one that has not been pwned) and this time REST Security
module itself: cd havebeenpwned-pam && ./buildPam. /etc/shadow will be updated. In case there is an error Cheat Sheet is
sh. The buildPam.sh script will build and copy the shared communicating with the API – no internet connection, a good place to
library to /lib/x86_64-linux-gnu/security/pam_ HTTP response error codes and so on – our module start; see http://
havebeenpwned.so. On Debian-based systems, the right returns PAM_AUTHTOK_ERR and ends its execution bit.ly/lud_rest.
way to enable or disable PAM modules is by means of the without stacking this new password. Feel free to modify
www.linuxuser.co.uk 45
Tutorial Computer security
this behaviour if you like. If you are interested in writing PAM modules, have a look at http://bit.ly/lud_pam.

Messing up with PAM can easily lead to a total system lock-down or a compromised system

Take the challenge!
When it comes to computer security, participating in Capture the Flag competitions (CTFs) will make you think out of the box. Although CTFs are only tangentially related to real-world security, we strongly encourage you to participate. We have set up a challenge for you: get it from the coverdisc (ctf/ctf.txt) and if you get stuck, read ctf/sol/sol.txt. Good luck!

Tutorial files available: filesilo.co.uk

Above Ah, how beautiful JSON looks after piping it through the lightweight JQ processor

Look for cracked hashes
As of this writing, the Md5 Decrypt website (http://md5decrypt.net) holds 9,950,699 cracked hashes. In order to use its API, you have to register using an email address first. If you don't fancy using one of your own emails, use one from https://dropmail.me instead. Go to http://md5decrypt.net/en/Api, enter your email address and click 'Sign Up'. You will receive your key code by email after a while. Using their API is a piece of cake. Let's imagine you want to look for the md5 hash 76657802e9335a13a3f4143907f08868; run curl like this:

curl "http://md5decrypt.net/en/Api/api.php?hash=\
> 76657802e9335a13a3f4143907f08868&\
> hash_type=md5&email=<YOUREMAIL>\
> &code=<YOURKEY>"

Of course, automating it is as easy as calling the API from curl. We have written a Bash script to use this API; get it from the coverdisc (md5decrypt.sh). You can look for one hash at a time or provide the script with a file containing a list of hashes (type:hash), one per line (see the hashes.txt file on the coverdisc). Give it a try:

./md5decrypt.sh -f hashes.txt -d 3 \
> -e <YOUREMAIL> -c <YOURCODE>
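If you'd rather roll your own, the core of such a script is just a loop around the curl call above; here's a minimal sketch (the coverdisc script adds proper option handling on top of this, and <YOUREMAIL>/<YOURCODE> are your own credentials):

#!/bin/bash
# Look up every MD5 hash listed in hashes.txt (one per line) against the
# md5decrypt.net API and print whatever the service returns.
while read -r hash; do
    curl -s "http://md5decrypt.net/en/Api/api.php?hash=${hash}&hash_type=md5&email=<YOUREMAIL>&code=<YOURKEY>"
    echo
done < hashes.txt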
Identify your attack surface
Censys (https://censys.io) is an amazing database full of exposed devices on the internet, and it's free to query. You have to register first though, to get both your API ID and your secret keys. Go to https://censys.io, register and fill in the form; you will then find your keys at https://censys.io/account/api. Censys provides a Python library that implements its API. Install it: pip install censys. You can use this library to write your own Python scripts. We've written a simple script that allows you to quickly query Censys to get a list of exposed devices of your choice, along with their open ports. Get it from the coverdisc (censysports.py).
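A script like censysports.py needs little more than the Censys IPv4 search call. As a rough sketch of that approach (assuming the pre-2.0 censys package installed above, whose IPv4 client lives in censys.ipv4):

import censys.ipv4

API_ID = "<API ID>"          # from https://censys.io/account/api
API_SECRET = "<SECRET>"

client = censys.ipv4.CensysIPv4(api_id=API_ID, api_secret=API_SECRET)

# Any valid IPv4 search query can go here, e.g. a network range or tags
for host in client.search("192.168.1.0/24", fields=["ip", "protocols"]):
    print(host["ip"], host.get("protocols", []))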
Now, let's imagine you want to see which services and devices you have exposed on your network (say, 192.168.1.0/24):

./censysports.py -u <API ID> \
> -s <SECRET> -n 192.168.1.0/24

You can pass any valid query to the script using the -n flag (see https://censys.io/ipv4/help), so let's do something more interesting: let's look for publicly exposed SCADA systems in Spain. Run the same command as before but modify the -n flag accordingly:

-n "tags: scada and ( location.country_code:ES \
> or location.country:ES)"

You can easily combine different fields to form Boolean expressions as well. Say you want to make sure there are no more Heartbleed-vulnerable servers on your network:

-n "192.168.1.0/24 and tags:heartbleed"

Censys is great, but so is Shodan (https://www.shodan.io). Point your browser to https://account.shodan.io/register, fill in the form and you will get your API key. Now install jq, a lightweight command-line JSON processor, to format the responses coming from Shodan: apt-get install jq. Next, use curl to call the API and pipe its output to jq to get a prettier output. Let's imagine you're interested in getting as much information about a particular host on your network as possible; run curl this way:

curl https://api.shodan.io/shodan/host/\
> <HOST_IP>?key=<YOURKEY>|jq

To use these APIs you need a tool like curl and whatever script language you fancy

> <SHA256>/analysis|jq

What next?
Koodous also provides a Python library that wraps its API. This wrapper is not completely functional, so we made some changes to it. Get it from the coverdisc (pykoodous.py). Let's imagine you want to recover the analysis for a particular sample; run it using these commands:

./pykoodous.py -s <SHA256> -T <APIKEY> \
> get_analysis|jq

VirusTotal (www.virustotal.com) will help you identify not only malicious files, but also whether a URL may be harmful or not. Register first to get your API key. Next, use the /url/report API call to determine whether a particular URL may pose a threat:

curl "https://www.virustotal.com/vtapi/\
> v2/url/report?apikey=<APIKEY>\
> &url=<URL>&scan=1"|jq

Detect malicious traffic with MalTrail

1 Introducing MalTrail
MalTrail is a malicious-traffic detection system that uses a bunch of publicly accessible blacklists, along with some static data from AV reports and custom user-defined lists, to do its job. See http://bit.ly/lud_maltrail.

2 Install and start the sensor
MalTrail needs Python Pcapy; install it first and then clone the repository: apt-get install python-pcapy && git clone https://github.com/stamparm/maltrail.git. Start the sensor: cd maltrail; sudo python sensor.py. It updates its data on first run.
Tutorial Python: Kivy
Resources
Kivy
https://kivy.org
Pygments
http://pygments.org

Over the previous issues, we've looked at a lot of different ways that you could use your Raspberry Pi. But, while we've developed code that was specific for each project – such as how to talk to Bluetooth devices or communicate over the GPIO bus – we haven't really covered how to interact with people. This issue, therefore, we'll take a look at Kivy, which is ideal for creating graphical displays and dashboards to make it easier for your users to interact with your apps.
Kivy is a multi-platform library for Python that gives you the tools you need to build a GUI for your Python projects. As well as the Pi, Kivy is available on Linux, Windows, macOS and even Android – so you could take the examples given here and easily reuse them in lots of other projects.
Along with dashboard displays, Kivy provides several different interface options, such as buttons, touch-screens and even multi-touch interfaces. By the end of this article, then, you should have the tools you need to add a really useful (and attractive) interface to your Raspberry Pi project.

Install Kivy
Installation is pretty easy if you are using Raspbian, or some other Debian variant, on your Raspberry Pi. You can install it with the following command:

sudo apt-get install python-kivy

If you've moved to writing code using Python 3, install the package python3-kivy instead. There is also a package called python-kivy-examples, which provides a number of sample pieces of code, unsurprisingly. These are handy to have locally on your development machine and provide code snippets without having to be online. They are located in the directory /usr/share/kivy-examples. Take a look for excellent examples of all of the things you could add to your own program.

Hello world
Since Kivy is a full GUI toolkit, it will involve a bit more boilerplate code wrapping the actual functional bits. For example, the code below is an example of a classic 'Hello World' program:

from kivy.app import App
from kivy.uix.label import Label
class MyApp(App):
    def build(self):
        return Label(text='Hello world')
if __name__ == '__main__':
    MyApp().run()

As you can see, the first step is to import the App class and implement your own subclass based on it. In this case, the subclass is called MyApp. The primary method that you must implement is the build() function. This method gets executed when your app starts and generates the actual windows and other display elements. In the above example, the code simply generates a Label object in order to show the text Hello world. Save the code to a Python script file – myapp.py for example – and
run it with python myapp.py. Moving beyond a simple
text display, what tools does Kivy provide for displaying
data? Since we’re developing graphical displays, you
probably want to start with being able to display images
in your program. This is handled through the Image class.
For example, you could add the following to the earlier
boilerplate to load an image:
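As a minimal sketch of that idea (assuming an image file called photo.png sits in the working directory; swap in any picture you have), only the build() method changes:

from kivy.app import App
from kivy.uix.image import Image

class MyApp(App):
    def build(self):
        # Display a picture instead of a text label
        return Image(source='photo.png')

if __name__ == '__main__':
    MyApp().run()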
haven’t examined how they’re actually laid out on the widgets in a given column need to be the same width and
screen. As with many other graphical toolkits, Kivy gives any widgets in a given row need to be the same height. If
you a set of classes, called layouts, that manage the you want to be more flexible in the sizing of your widgets,
physical organisation of the various elements. As a basic you may want to use the stack layout instead. In a stack
example, the following code draws two buttons, one layout, widgets in a given row or column no longer need to
above the other. be the same height or width.
You can move up another dimension by using the page
layout = BoxLayout(orientation='vertical') layout object. This creates a multi-page layout where
btn1 = Button(text='Hello') widgets can be added to each page, and you can flip
btn2 = Button(text='World')
layout.add_widget(btn1)
layout.add_widget(btn2) As with many other
The first line creates the layout object, orientated graphical toolkits, Kivy
vertically. Then two button objects are created. All of
the objects that generate graphical elements for your gives you a set of classes,
program subclass the basic class for widgets, so you can
use the add_widget() method of the layout object to called layouts, that manage
add them to the display. The boxlayout can be thought
of as a one-dimensional layout, where you can have your the physical organisation of
widgets grouped either vertically or horizontally. Moving
up one dimension, you use the grid layout object. various elements
Unlike many other graphical toolkits, you do not have
Below Kivy builds up direct control over where widgets are placed within the
from the low-level grid; they get placed simply in the order they’re added back and forth between them using the borders. A more
interface libraries to the layout object, in row-first order. You need to give structured layout object is the anchor layout. In this
to high-level objects the grid layout at least one cols or rows parameter, and case, you define an anchor_x and anchor_y parameter,
such as widgets the sizing of the widgets need to match. That is, all of the defined as top, centre or bottom. This essentially pins the
children of the anchor layout to one of the borders.
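As a quick sketch of the grid layout in action (four buttons filling two columns in the order they are added):

from kivy.app import App
from kivy.uix.gridlayout import GridLayout
from kivy.uix.button import Button

class GridApp(App):
    def build(self):
        # Two columns; widgets are placed left to right, row by row
        layout = GridLayout(cols=2)
        for label in ('One', 'Two', 'Three', 'Four'):
            layout.add_widget(Button(text=label))
        return layout

if __name__ == '__main__':
    GridApp().run()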
If you want to be more flexible in the sizing of your widgets, you may want to use the stack layout instead. In a stack layout, widgets in a given row or column no longer need to be the same height or width.
You can move up another dimension by using the page layout object. This creates a multi-page layout where widgets can be added to each page, and you can flip back and forth between them using the borders. A more structured layout object is the anchor layout. In this case, you define an anchor_x and anchor_y parameter, defined as top, centre or bottom. This essentially pins the children of the anchor layout to one of the borders.

Below Kivy builds up from the low-level interface libraries to high-level objects such as widgets

Interacting with the world
Many of your projects will also need to accept input from users. This could come in a variety of forms – we've already seen buttons in some of the earlier code examples. Moving beyond the button, Kivy provides a toggle button, which behaves in much the same way as
provides highlighting for the widget. Any of the languages
supported by pygments gets highlighted within the code
editor window. For example, the following code creates a
code input widget that highlights Cython code.
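A minimal sketch of that widget, using Kivy's CodeInput together with the Cython lexer that Pygments provides, looks something like this:

from kivy.app import App
from kivy.uix.codeinput import CodeInput
from pygments.lexers import CythonLexer

class CodeApp(App):
    def build(self):
        # CodeInput is a TextInput that highlights its contents
        # using whichever Pygments lexer it is given
        return CodeInput(lexer=CythonLexer(), text='cdef int counter = 0')

if __name__ == '__main__':
    CodeApp().run()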
Manage events
We’ve looked at all sorts of display options, as well as
ways that users can input their own data or interact
with your program. But how do you manage the actions triggered by these interactions? Whenever you actually do anything, such as clicking a button or typing text, events are triggered within Kivy. Widgets all implement the EventDispatcher to manage these events in a background thread of execution. As you've seen above, events are managed by callback functions that are bound to one or more of these actions. There are several available as part of the widget class. For example, the following code notifies you of any changes to the state of a button.

def callback(instance, value):
    print('My button <%s> state is <%s>' % (instance, value))
btn1 = Button(text='Hello world 1')
btn1.bind(state=callback)

This binds the callback function to the state property of the button, so that any time it changes, the callback function gets fired. You can even create your own with the register_event_type() method of the widget class.
Along with these class events, there are also input events; things like mouse clicks, touches and scroll-wheel events are all handled by the class MotionEvent. These are dispatched by the on_motion() method of the Window class. They then generate on_touch_down(), on_touch_move() and on_touch_up() events in the Widget class. You can override any of these event handlers; the code below is an example.

def on_touch_down(self, touch):
    if super(OurClassName, self).on_touch_down(touch):
        return True
    if not self.collide_point(touch.x, touch.y):
        return False
    print('You touched me!')
    return True

You should have noticed the references to super in the above example. This is because when you override one of these event handlers, you become responsible for all of the required functionality. For the sections that you are not interested in overriding, you can use super to use the code from the base class.
The last category of events are those generated by the Clock object. Because of the way Kivy's event loop manages event handlers, you can't have any kind of infinite loop in your code that executes some code repeatedly, because this will end up blocking the rest of the system.
But you may have a legitimate reason for wanting to repeat a chunk of code in your event handler. If so, the correct way to do this is with the Clock object. With Clock, you can schedule function calls for some point in the future, either once or repeatedly. For example, the following code fires your callback function every second.

def my_callback(dt):
    pass
Clock.schedule_interval(my_callback, 1.0)

In this example, the callback function doesn't actually do anything. The parameter dt is the delta time, and is always handed in to your callback function.

Above Kivy's examples provide a showcase application that lets you see almost all of the widgets available

Using env variables
You can easily control most of Kivy's functionality through the use of environment variables. For example, you could set KIVY_WINDOW to x11. This will force Kivy to use the X11 windowing environment. You also have control over audio, camera usage and the OpenGL implementation to use, among other elements. These options can be placed within a configuration file, too.

Non-widget classes
There are several other classes available that provide nice-to-have functionality for your code. The Animation class provides a set of pre-packaged transitions that smoothly move widgets from one state to another. For example, you could have the text for a Label object fade into view, without having to write and debug a bunch of your own code.
We didn't get a chance to talk about textures here, but there is an included Atlas class that enables you to bundle a number of textures together into a single image file. This helps simplify image downloading, for example, among other things.
The last extra class is UrlRequest. This provides a way to interact with things on the internet outside of the main loop. This way you can do something such as downloading a file from a website without blocking your entire interface while you are waiting for it to finish.
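As a rough sketch of how that looks from inside a running app (so that Kivy's event loop can deliver the callback), with a placeholder URL:

from kivy.network.urlrequest import UrlRequest

def got_result(request, result):
    # Called once the download has completed
    print('Fetched %d bytes' % len(result))

# The request runs in the background; the interface stays responsive
UrlRequest('https://kivy.org', on_success=got_result)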
Tutorial Programming: Go
part one
Resources
Go
Download through your package manager or from https://golang.org/dl
A web browser
Ideally Chrome 4.0+, Firefox 3.5+ or Opera 25+

Above Use Go to get a dynamic video-sharing server up and running
Until about ten years ago, nearly all of Google's server code was written in C++. While C++ programs run extremely quickly, they are also prone to a number of problems, such as the lack of built-in support for concurrency and slow build times caused in part by the way C++ handles dependencies between modules.
Google released the Go language in 2009. It was intended to resolve some of the problems it was having with C++ code, speeding up build times and leading to a more readable and testable code base. It has had a lot of success on this front: in 2012, Brad Fitzpatrick (the creator of LiveJournal and now a Google employee) decided to rewrite the dl.google.com server – from which users download Google Chrome – in Go, expecting to take a small performance hit in exchange for a more readable code base. In fact, not only was the resulting code much more readable, but when it was compiled and run, it ran just as fast as the C++ original, and actually used slightly less memory.
Go is not meant to incorporate hundreds of fashionable language features; it has been designed to be the best tool for creating large code-bases of the type generally used at Google.

About this project
Go is an open-source language, and it's very easy to get started with it. The Thanksgiving 2011 Google Doodle, which features a turkey with interchangeable feathers and which was written entirely in Go, was deployed in under 24 hours by a Google employee who was a total newcomer to the language.
One Google site that relies particularly heavily on Go is YouTube, so we thought it would be fun to create our own version: LU-Tube, a social video-sharing site for Linux Users. Over the course of the next three articles, we'll be developing a simple web server that will enable users to watch, search and upload videos, all of it written in Go. Each month, we'll release a working copy of code from
the previous issue, so that you can get stuck in even if you haven't followed the previous articles. Since this is the first issue in the series, we'll be starting from scratch. We won't be assuming any Go knowledge, but it would help to be familiar with programming in a language such as C or C++ and to know a bit about web applications.

Figure 1

func loadVideo(id string) (*Video, error) {
    filename := id + "/videodata.txt"
    videoData, err := ioutil.ReadFile(filename)
    if err != nil {
        return nil, err
    }
    title := string(videoData)
    ...

Above Error checking in Go does not involve any specialised language features beyond simple if-statements and function passing

Base types
To get started, we'll create a directory called lutube and move into it. This is where our server will live. The videos themselves will live inside a directory named videos.

$ mkdir lutube && cd $_
$ mkdir videos

Create a new file called lutube.go inside the lutube directory and open it inside a text editor. Add the following line at the top; this will tell Go that this file contains the main function that should be run at the start of the program.

package main

Before we write anything, let's talk a bit about how we're going to store and represent videos. A video on our site is more than the clip itself; it's the title, and later on the keywords, the user who uploaded it and so on. We'll give each video a unique ID, corresponding to a subdirectory of the videos directory. Inside that subdirectory will live the video file itself, plus other files storing any additional information. For now, let's create a Go struct that will encode a video. To begin with we'll stick to the video's ID and its title.

type Video struct {
    Id string
    Title string
}

You'll notice a few differences between this and the way that structs are declared in C. Perhaps the most important is that in Go we always put the type of a variable or field after the variable name. In some ways, this is similar to the syntax in languages such as Haskell, Ada or Rust, where we'd write Id: string; in Go, we drop the colon.
We'll store the title inside a file called videodata.txt, living inside the video's directory. Let's write a function that will take in an ID and return a Video struct, with the Title field filled in by looking at the videodata.txt file.

func loadVideo(id string) *Video {
    filename := "/videos/" + id + "/videodata.txt"
    videoData, _ := ioutil.ReadFile(filename)
    title := string(videoData)
    return &Video{Id: id, Title: title}
}

This is our first example of a function written in Go. You'll notice that Go uses the keyword func to declare a function, and that it puts the return type of the function (in this case, a pointer to a Video struct) at the very end of the line. We can see a few other examples of typical Go syntax; for example, we often use the symbol := to create and initialise a variable on the fly. The type of the variable is inferred from its value.
The third line is quite interesting, and demonstrates that Go functions are allowed to return multiple values. In this case, the ioutil.ReadFile function returns both the text of the file and a value that indicates whether or not the function has encountered an error.
We've used the symbol _ for this error value, signifying that we're not going to use it. That's not great programming practice, so let's modify our function so that it behaves appropriately in case there's an error.
First, we'll change the signature of the loadVideo function so that it too can return an error value:

func loadVideo(id string) (*Video, error) {

Then, we'll read in the error value; if there is an error, then we will return it:

videoData, err := ioutil.ReadFile(filename)
if err != nil {
    return nil, err
}

The value nil is used to show the absence of a value. Go has no special error-catching syntax; instead, errors are passed around as ordinary values. In this case, we are saying: if there is an error, then return nil for the Video struct, and return the error value as well.
We should also modify our existing return statement so that it returns a nil error value if everything has gone well, like this:
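That final return is a one-liner; assuming the struct literal used earlier, it becomes:

return &Video{Id: id, Title: title}, nil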
import "io/ioutil"
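At this stage the server amounts to a placeholder handler registered with net/http; a minimal sketch of that plumbing in lutube.go (not necessarily the article's exact listing) looks like this:

func watchHandler(writer http.ResponseWriter, request *http.Request) {
    // Placeholder page until template rendering is added later on
    fmt.Fprintf(writer, "Under construction.")
}

func main() {
    http.HandleFunc("/watch/", watchHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}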
Right It’s considered Figure 2 Before we compile and run the code, the last step
better practice is to add some extra imports to cover some of the
to group import import ( functionality we’ve used. We need to add imports
statements like this, "io/ioutil" for "net/http" to cover the networking code, and for
rather than write them "net/http" "log" and "fmt" to cover some of the other functions
out separately "fmt" we’ve used. We could add these as separate import
"log" statements, but Go allows us to combine them into one.
) Replace the line import "io/ioutil" at the top with the
compound statement shown in Figure 2.
Now we are ready to compile and run our code! Open a
command window and navigate to the lutube directory.
Importing packages Compile and run the program:
In most programming languages, it is very difficult
to keep track of which packages rely on which other $ go build lutube.go
ones. In C++, it is difficult to know when a particualr $ ./lutube
#include directive is necessary. We can delete
the line, of course, but if the code then compiles it Now open a web browser and point it to http://
doesn’t necessarily mean that the module wasn’t localhost:8080/watch. The text ‘Under construction.’
used: it might be #included by some other header appears on the page, as in Figure 3. When you get bored
file. Go forces developers to tackle these problems with this page, press Ctrl+C on the terminal to close the
early and often, so there is as little confusion as server so we can move on to the next step.
possible later on.
In Go, for example, an unused import is not just HTML templates
wasteful, but will lead to the compiler rejecting the This is all very exciting, but we still can’t watch any
program outright. In that way, we ensure that the videos. What we want to do is programatically construct
packages we import are precisely the ones we need an HTML page that will contain the video itself, plus its
to run our program, helping to keep binaries as small title. We could construct this in a number of different
as possible. ways, but probably the neatest way is to use Go’s HTML
Another important feature is that Go forbids two- template package.
way dependencies between packages, preventing In Go, an HTML template is a special HTML file where
loops that could lead to much larger executable sizes. some sections are left blank, to be filled in later. For
Lastly, the way that Go imports binaries is much more example, we can use the following template to display the
efficient than in C++: a dependency is only loaded title of our video.
once, even if it is pulled in by multiple packages.
<h1>{{.Title}}</h1>
Here, the <h1> tags are normal HTML code. The {{.Title}} in the middle is a placeholder for a field named Title. In order to render this template, we'll call the template.ParseFiles and Execute functions, passing in a pointer to a Video struct. These functions will read the Title field of that struct and insert it in place of the placeholder, appropriately enough.

Figure 5

func getAvailableVideos() ([]string, error) {
    videoDirectories, err := ioutil.ReadDir("videos")
    if err != nil { return nil, err }
    availableVideos := make([]string, 0)
    for _, f := range videoDirectories {
        availableVideos = append(availableVideos, f.Name())
    }
    return availableVideos, nil
}

Above Go arrays are similar to the arrays in C, but they are much easier to deal with thanks to a number of functions

Naming in Go code
Go is unusual in that the names of functions and variables have a distinct syntactic meaning. Specifically, if an identifier begins with a capital letter, it's visible to other packages that import the package it lives in. If it begins with a lower case letter, it's only visible in its own package. This is exactly the same as the public and private keywords in C++, except that now we can tell at a glance whether something is public or private without going to its definition.

Let's see how this works in practice. First, we'll create a directory that will simulate a video:

$ mkdir videos/zoo
$ echo "Trip to the zoo" > videos/zoo/videodata.txt
$ touch videos/zoo/video.mp4

We have used touch to create files that will stand in for actual video files. If you have a video on your computer in MP4 format, then copy it into the zoo directory and rename it to video.mp4. (If you're wondering why we chose that title, look up the history of YouTube.)
We can now call loadVideo("zoo") to load this into a Video struct that contains the ID zoo and the title "Trip to the zoo" that is contained in the /videodata.txt file.
Before we go back to programming in Go, let's create the HTML template for our video page. Create a file called watch.html in the lutube directory containing the following HTML code:

<h1>{{.Title}}</h1>
<video src="/videos/{{.Id}}/video.mp4" controls="controls">
    Your browser does not support the video tag.
</video>

In our example, once we serve the page the {{.Title}} placeholder will be replaced with Trip to the zoo and the {{.Id}} placeholder with zoo, giving the path zoo/video.mp4.
The leading slash in /videos/{{.Id}}/video.mp4 is important: it means that the browser fetches the video relative to the root directory, rather than relative to the current URL /watch/zoo (which doesn't correspond to an actual folder). All that remains is to write the code that will parse and display this code. Go back to lutube.go and replace the code inside the watchHandler function with the following four lines.

id := request.URL.Path[len("/watch/"):]
video, _ := loadVideo(id)
templ, _ := template.ParseFiles("watch.html")
templ.Execute(writer, video)

There are a few new things in this code. In the first line, we use a special Go feature that enables us to take an existing array and cut out a particular range of indices. The resulting object, called a 'slice', occupies the same area in storage as the original array elements, but may be used as though it were an array in its own right. For example, if a is an array, then a[1:3] cuts out the second and third elements of a (since indices start at 0 and the upper index is exclusive). In this case, we want to take the URL path (/watch/zoo/) and remove the /watch/ part at the beginning, so we slice out all indices starting from the length of the string "/watch/" up to the end of the URL.

Figure 4
Left We can use HTML templates to construct pages dynamically. The HTML5 <video> tag makes it very easy to embed videos

The rest of the code loads the video object, and uses it to render the HTML template. Since we pass video in to the Execute function at the end, this is the object whose Title and Id fields will be used to populate the placeholders in the template.
We'll be ready to go as soon as we've sorted out our import statement at the top. We need to add "html/template" in order to unlock the HTML template functionality. We also need to remove "fmt", because unused imports are a compiler error in Go. Once we've fixed that up, we can run go build lutube.go and ./
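For reference, with those four lines in place the whole handler reads something like this (a sketch; writer and request are the parameter names used throughout):

func watchHandler(writer http.ResponseWriter, request *http.Request) {
    id := request.URL.Path[len("/watch/"):]
    video, _ := loadVideo(id)
    templ, _ := template.ParseFiles("watch.html")
    templ.Execute(writer, video)
}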
Right HTML template iteration. Go's HTML template package provides a number of other useful features besides this one for programmatically building web pages in an elegant way

Figure 6

<h1>LU-Tube</h1>
<p>The video-sharing site for Linux Users</p>

<h2>Our videos</h2>
<ul>
{{range $a := .}}
    <li><a href="/watch/{{$a}}">{{$a}}</a></li>
{{end}}
</ul>

us to iterate over an array. This template expects to receive an array of strings, which it refers to as {{.}}. The block starting with {{range $a := .}} and ending with {{end}} will repeat once for every element of the array; inside the block, we can refer to that element as {{$a}}.
Now we're going to write a homeHandler function that will handle requests to the base URL. This function is so similar to the watchHandler that we're going to leave it up to you to write it (you can check your work against the version that we'll supply next issue).
Your implementation should call the getAvailableVideos() function to get a list of available videos, and should then parse and render the template home.html. You should pass the list returned by
Compile and run the server again, and then navigate to http://localhost:8080 in your browser. You should see something like Figure 8.

Function literals and closures in Go
Like other languages (even C), Go allows us to pass functions around as arguments to other functions. The way functions are declared means that the syntax is often easier to understand than it is in C, where such declarations are notoriously difficult to understand. For example, the equivalent of the C declaration int f(int (*g)(char c), char *p) in Go is func f(g func(c byte) int, p *byte) int. It's easier to see what's going on in the Go version, and it becomes easier for more complicated examples.
Go also enables us to use anonymous function literals, in much the same way as JavaScript. For example, we could call the above function f as f(func(c byte) int { return 6 }, nil).
Function-passing makes error-handling code less repetitive. For example, our handler functions involve much the same error-handling code, but it is difficult to separate out the error code from the code used for handling the requests. The solution is to use function passing: we write a separate function that takes in the request-handling code as a function parameter, and then calls it, performing the appropriate error-handling. We can then wrap our functions inside this new function, meaning that we only have to write the error-handling code once.

Handling errors
Go has no special error-handling functionality beyond the error type and the ability of functions to return multiple values, which means that they can return an error value alongside their usual return value(s). We've already seen a few examples of error handling, but we've also been ignoring a lot of the errors in our code. This can cause some confusing behaviour; it's always better to deal with errors early on so that you have a better idea what's happened when things go wrong.
As an example, we'll deal with the errors in the watchHandler function. Inside that function, we call the loadVideo function, but then throw away the error value:

video, _ := loadVideo(id)

The only reason why loadVideo might fail is if the particular video ID does not exist. You can see this in action by running the server and navigating to http://localhost:8080/watch/notthere: since we are ignoring the error, we get a blank page, which isn't particularly helpful. A better thing to do would be to redirect the user to the home page if they accidentally type in an incorrect link, by replacing the line above with the following lines.
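A minimal sketch of that replacement (check the error and, if the video can't be loaded, bounce the visitor back to the home page) might look like this:

video, err := loadVideo(id)
if err != nil {
    // Unknown video ID: send the visitor to the home page instead
    http.Redirect(writer, request, "/", http.StatusFound)
    return
}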
Feature KDE Plasma
Application overview hot corner: moving your pointer to the top left of the screen (and continuing to head in that direction) activates an overview of an activity's apps
Paul O'Brien explores the latest KDE Plasma desktop environment, which is more mature than ever before

KDE Plasma 5.13 is the 14th iteration (since the original version was 5.0) of the fifth generation of the desktop environment created by KDE. KDE Plasma sits alongside its fellow KDE Frameworks and KDE Applications projects and is subject to its own release schedule, which typically means main releases every three months, bug-fix releases in between and periodic LTS releases (most recently version 5.12). Initially released on 15 July 2014, KDE Plasma 5 is built using Qt5 and KDE Frameworks 5, using OpenGL to enable hardware acceleration for excellent graphical performance.
optimised to ensure they run smoothly

ensure Plasma stays at the leading edge of good-looking desktop environments, and remains more than capable of challenging the best Apple and Microsoft have to offer in terms of visuals.
A key new feature for version 5.13 is 'Plasma Browser Integration', a suite of new features which make Mozilla Firefox, Google Chrome and Chromium-based browsers work more closely with the Plasma desktop. The changes make using Plasma and the internet feel much more cohesive than ever before. Download progress is now displayed in the Plasma notification pop-up just like transferring files with Dolphin, and a native notification is displayed when the download completes. The new 'Clear History' button in the notification widget now becomes particularly useful.
The Media Controls Plasmoid widget, itself redesigned for the latest release, can now mute, pause and skip videos and music playing from within the browser. Other improvements include – courtesy of KDE Connect – the ability to send a link from your browser directly to your Android phone or tablet, and browser tabs can be opened directly from KRunner via the Alt+Space keyboard shortcut. Enabling Plasma Browser Integration is straightforward, though not configured by default: simply add the relevant plug-in from the add-on store of your preferred browser.

Above An extension needs to be installed from your browser's store to enable integration features

KDE Connect is a particularly useful and often overlooked KDE feature which becomes ever more useful with each release. Connect is a project to enable communication across all your devices. For example, with KDE Connect you can receive phone notifications on your desktop computer, transfer files between your desktop and your phone, quickly find your phone, control music playing on your desktop from your phone, use your phone as a remote control for your desktop, or even synchronise your device clipboards! This is achieved via a component installed on your desktop and a KDE Client app installed on your phone, with the two devices being connected to the same network. Only Android is currently supported.
The Settings pages have started to receive a long overdue redesign for the latest release. The KDE Visual Design Group has,
uh, retooled many of the tools in System Settings and their improvements – together with the use of KDE's Kirigami lightweight user-interface framework – give the pages a slick new look that adheres to the Kirigami Human Interface Guidelines. As a bonus they're also more touch-friendly than before, as well as revealing new functionality. The team has started a rolling programme of improvements to the theming tools section, which includes icons, desktop themes and cursor themes. The splash screen page can now download new splash screens from the KDE Store, and the fonts page can now display previews for sub-pixel anti-aliasing.
Additional visual improvements come courtesy of a new design for the lock, unlock and login screens, which now display the wallpaper of the current Plasma release by default. This makes the screens feel much more integrated with the rest of the system compared to previous iterations. The lock screen also includes a fade-to-blur transition to show the controls, allowing it to more easily be used like a screensaver.
Discover, the KDE Plasma software and add-on installer, receives additional features and design improvements for the 5.13 release. Once again, the Kirigami UI framework is put to good use and the team has improved the appearance of lists and category pages, which now use toolbars instead of big banner images. Lists can now be sorted and use the new Kirigami Cards widget. Star ratings are shown on lists and app pages to more easily identify high-quality items, and the overall store-browsing experience is considerably improved. All Upstream data (the cross-distro XML format to provide metadata for software components) is now shown on the application page, including all URL types, while app icons use the local icon theme to better match your desktop settings.
Under the covers, work has continued on bundled app formats. Enhanced Snap support now enables users to control app permissions and it's possible to install Snaps that use classic mode. Deep linking of Snap packages is now supported courtesy of the snap:// URL format. Flatpak hasn't been left behind, with an added ability to choose the preferred repository to install from when more than one is configured.

Above The settings screen gets a Kirigami-based make-over to improve its usability

quick tip
Dolphin add-ons
The available Dolphin add-ons enable a wide range of features to be added to the standard file manager setup, with nearly 500 options available in the store! Dropbox integration is a firm favourite for teams, as are a range of root utilities.

tips & tricks
Be a power user
How to make the most of your KDE Plasma experience

1 Install Plasma add-ons
The Software Centre has an entire section dedicated to Plasma add-ons, sorted by category, so make the most of it! Window Manager scripts provide useful inspiration for how to use Window Rules, and Plasma widgets are perfect for customising your various activities.

2 Make the most of KRunner
There's no need to manually dive through the application menu every time you want to launch an application. Provided the desktop has focus or after pressing the super key, just start typing the name of the app you want to open and hit Return when the list is suitably refined.

3 Use jump lists
Jump lists enable you to launch an application directly to a specific section or feature. You can see this in action by adding the Software Centre to a panel, then right-clicking the icon. You'll see an option to jump directly to updates. Application support varies of course, but it is useful.

4 Be a Dolphin power user
Dolphin is a fully featured and accomplished file manager, despite appearing simple at first glance. Check out the Preview option to get a look at your files, the Split feature to make file management operations far easier, and root around in the Control menu to find useful options to match your preferences.

5 Monitor services
KDE Plasma includes a useful utility for understanding which services are running in the background and which start with your computer. Available in the 'Start and Shutdown' section of System Settings, the module allows services to be manually stopped and started.
Above It's easier than before to find interesting and top-rated software in the KDE software store

quick tip
Terminal quick-access
When browsing a directory within the lightweight Dolphin file manager, a great shortcut to know is that you can instantly open a terminal window in the current directory, attached to the Dolphin window, by pressing F4.

well, with only a handful of those we tried not working and no general ill effects of having the proxy installed.
Plasma Vault has several new enhancements for Plasma 5.13. Plasma Vault enables a user to create encrypted folders to securely store files. As the developer of Plasma Vault, Ivan Cukic, notes, the utility covers a different use case to full-disk encryption or an encrypted home folder. "[They do] not cover the possibility that someone might access your system while it is running. Plasma Vaults fill this void by making the attack surface smaller – instead of having all data unlocked at once, you can do it piece by piece – it is more granular." The latest release introduces a new CryFS back-end, commands to remotely close open vaults from your Android smartphone or tablet with KDE Connect, offline vaults, a more polished interface and better error reporting.
A new feature that may sound trivial, but which will make a lot of people's Plasma lives a little more bearable, is the addition of the ability to fall back to software rendering if the OpenGL drivers unexpectedly fail. While in theory this is a scenario that shouldn't occur, Nvidia graphics users will know it happens more often than one might like – and the software rendering fall-back will at least make the unfortunate process slightly more graceful.

quick guide
Creating and using activities
By default, the first time you fire up KDE Plasma you'll be greeted with a single activity. The activity switcher won't be visible, as it only shows when you have more than one active, but if you don't make the most of Activities, you're missing out. To get started, press the toolkit button at the top right of the screen and select Activities. You'll be presented with your current activities (likely only 'default') and a 'Create activity' button to add more. These are groups of applications and a unique desktop where you can add widgets. You can control how an activity is displayed, with a text name or an icon, a keyboard shortcut can be configured, and activity tracking can be disabled if you don't want the apps to appear in your launcher history. Activities can even have unique settings for screen brightness, wireless settings and indeed anything else via custom scripts – ideal if you want to configure a 'presentation mode' or similar. Activities can also have their own desktop settings such as wallpaper, mouse actions, icon arrangement or even file filtering.

While these are the main improvements in Plasma 5.13, there are numerous other small tweaks, including a Plasma Calendar plug-in for astronomical events showing lunar phases and astronomical seasons (equinoxes and solstices), an improvement to the Digital Clock widget which enables copying the current date and time to the clipboard, additional KRunner plug-ins to provide easy access to Konsole profiles and the character picker – and the Mouse System Settings page has been rewritten for libinput support on X and Wayland.
Assuming you like what you see and want to try KDE Plasma 5.13 – and, unless you specifically want to stay on the 5.12 LTS release, why wouldn't you? – there are several ways to get it up and running. The most straightforward is to use a live image, a number of which are available. KDE's own Neon image builds the latest KDE Frameworks, Plasma and Applications courtesy of a continuous integration system on top of an Ubuntu LTS base. Several versions are available, including the User Edition which features the latest officially released KDE software on a stable base and is ideal for everyday users. The Developer Edition Git-Unstable version features
pre-release KDE software built the same day from new feature branches, while the Developer Edition Git-Stable version has pre-release KDE software built the same day from bug-fix branches. Outside of the official KDE images, the latest released Plasma version is also available in the openSUSE Tumbleweed-based Krypton stable image, both in 32-bit and 64-bit flavours (bearing in mind that Neon itself is only available in 64-bit form).
GeckoLinux, which focuses on out-of-the-box usability, is built directly from openSUSE packages and repositories, and offers the Plasma 5.13 desktop environment on top of the Leap 15.0 base system in its NEXT image. It also includes KDE Applications 18.04 and KDE Frameworks at version 5.47.0. The GeckoLinux ROLLING Plasma edition can also be updated to Plasma 5.13.
Manjaro KDE Edition is a rolling-release distribution, which provides the latest stable Plasma, KDE Applications, KDE Frameworks and Qt releases. Should Slackware be your preferred distro, Alien BOB provides an installable Live image for Plasma 5, based on the development tree 'slackware64-current'.
If you're a Kubuntu 18.04 LTS user, or using another Ubuntu distro with KDE installed, you can now install Plasma 5.13 by adding the Kubuntu Backports PPA to your software sources. Use the commands sudo add-apt-repository ppa:kubuntu-ppa/backports and sudo apt update && sudo apt full-upgrade to add the repo and install, but remember that packages in the backports PPA have a reduced level of support compared to the main repos and may introduce bugs.
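In other words, the whole upgrade on such a system comes down to three commands:

$ sudo add-apt-repository ppa:kubuntu-ppa/backports
$ sudo apt update
$ sudo apt full-upgrade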
The move towards Wayland from the X.Org server is now underway in a number of projects, and KDE Plasma is no exception. After Plasma 5.12 was released in February, the team got together to discuss improvements to Wayland support for the 5.13 release. Support in 5.13 is still considered very early, with improvements including the return of Window Rules, the use of high-priority EGL Contexts and initial support for screencasts and desktop sharing; but work on Wayland is expected to accelerate over the coming months. The KWin/X11 code is now considered feature-frozen, as confirmed by KWin maintainer Martin Flöser, which will enable the teams to focus on support. This doesn't mean that 5.14 will default to a Wayland-based session – that still feels some time off – but things are definitely moving in the right direction.

quick guide
Using Window Rules
One of the best things about KDE Plasma is that it's instantly accessible, yet has a huge number of powerful features. Window Rules are a great example of this: not something you'd immediately think of as being useful, but a must-have when you get used to having them! Window Rules, also known as Window-Specific Settings, can be accessed from the 'Window Management' section of the revamped System Settings application. They enable you to select a window using a whole host of options including application, role, type, title or even machine, then apply a vast number of custom attributes including size and position, activity, maximum or minimum size and priority above other windows. The title bar and frame can be disabled, the colour scheme changed, and opacity applied. Windows can be set as uncloseable, unfocusable and much more, providing ultimate versatility over how windows behave on your system. Like to watch video? You could remove the title bar and frame from the player, make it semi-opaque, pin it on top of every other window and carry on with your work.

quick tip
Add your online accounts
Online accounts can be added to Plasma to enable automatic integration with key applications. You may need to manually install the 'kaccounts-integration' package to get started, then select 'Online Accounts' in System Settings.
Feature DNA Computing
computing with
a Double Helix
The molecule of life is being pressed into service for computing with some
impressive and surprising results. Mike Bedford reveals how
at a glance
• DNA roots p64 – Early DNA research enabled massively parallel computation for niche applications.
• DNA logic p65 – Logic gates can be made from DNA, but they won't get any prizes for speed.
• Non-deterministic DNA p67 – A non-deterministic architecture offers the promise of universality and parallelism.
• DNA storage p68 – DNA-based storage could offer data retention that is measured in the hundreds of years.

The silicon era has had a good innings. Since the dawn of silicon chips, we've seen memory increase in capacity by a factor of 128 million and processors become several billion times faster. Yet the writing is on the wall as semiconductor manufacturers now struggle to produce the ever smaller feature sizes needed to give us the year-on-year improvements we've grown to expect. It might be a bit premature to sound the death knell for silicon anytime soon, and history suggests that most such problems have been solved sooner or later, but some scientists are taking no chances.
So while electronics engineers are continuing to extract yet more performance from conventional silicon circuits, other researchers have their sights set on a whole host of revolutionary and unexpected technologies. Here we check out one potential new technology: computing with DNA.
Although we've become accustomed to computing being first and foremost about electronics, in one sense we shouldn't be too surprised that DNA is a potential successor. After all, this double helix molecule contains all the data necessary to replicate a human being and, in that sense, is a form of data storage. When we bear in mind that the development of the electronic stored program computer relied on the development of electronic memory, we can speculate that this chemical memory might do the same for DNA computers. Knowing something about the structure of the DNA molecule and how it replicates is key to understanding how it can be used in computing.

DNA roots
Despite its undeniably futuristic nature, DNA computing can trace its ancestry to a 1994 paper by Leonard Adleman of the University of Southern California. The topic he chose was the Travelling Salesman Problem. The task involves deciding if it is possible for a salesman to visit a list of cities in any order, with fixed start and end locations, passing through each city only once, given a list of available direct flights between cities. The normal method of solving the problem is basically a trial and error approach, but the time taken increases dramatically with the number of cities. Not so with Adleman's computer-in-a-test-tube.
In this pioneering work, just seven cities interconnected by 14 flights were used, but the method has potential to be scaled up. Here, however, we've simplified things further by using just four cities and six flights. The four cities are Atlanta, Boston, Chicago and Detroit, and the available one-way flights are from Atlanta to Boston and Detroit, Boston to Atlanta, Chicago and Detroit, and Chicago to Detroit. The starting city is Atlanta and the final city is Detroit.
The method involves representing the cities and flights by 8-base DNA strands according to strict
rules. The sequence for the DNA strand representing a flight is the complement of the final four bases of the 'from' city, followed by the complement of the first four bases of the 'to' city. The top part of the diagram on p66 shows the strands for Atlanta and Boston bonded to the strand for the Atlanta-to-Boston flight, the only flight they will jointly bond with (with the double-strand helix straightened out for clarity).
Calculating the solution involves preparing samples of DNA for the cities and flights and mixing a pinch of each together so that chemical reactions can take place. Even such a small amount of each type of DNA contains about a hundred trillion molecules which, almost certainly, would result in all possible combinations of cities and flights being generated by the chemical reactions. While all those combinations will be legal in the sense that 'A's would only bond to 'T's and 'C's to 'G's, not all would be valid solutions to the problem. For example, some wouldn't start at Atlanta and end at Detroit and others would miss out a city.
The bottom part of the diagram shows the only valid solution and a couple of invalid ones. Although the correct answer was generated in a few seconds, it was then necessary to isolate it from all the invalid ones. This job took about a week of supplementary

Quick guide
DNA structure
Each strand of the double helix of a DNA molecule is formed of molecular groups called nucleotides. A nucleotide is a combination of three smaller types of group: a sugar, one or more phosphate groups and a nitrogenous base. There are four types of bases called Adenine (A), Thymine (T), Guanine (G) and Cytosine (C), and the nucleotides containing these bases can appear in any order along a strand of DNA. While the nucleotides are bonded together with strong chemical bonds in a strand, the bases can also bond to other bases via a weaker so-called hydrogen bond, linking the two strands. These wind around each other to form a double helix. A will only bond with its complementary base T, and C will only bond with G – so for every A in one strand there is an adjacent T in the other strand. Shown here is a segment of DNA, straightened for clarity.
Above DNA computation has been used to solve the computationally intensive Travelling Salesman Problem using a massively parallel architecture

QUICK TIP
Ultra-low power
Supercomputers are hugely power hungry but one based on DNA, like the University of Manchester's proposed machine, could be much more frugal. Researchers say it could perform 2 x 10^19 operations per joule – that's a billion times more than an electronic equivalent.

displacing the original shorter part of the double strand in so doing. We can now build on this basic mechanism to see how this reaction could be used to implement an OR gate, one of the logic gates that form universal electronic computers. This isn't necessarily a practical OR gate and, at the very least, it is a gross over-simplification, but we trust that it provides an inkling of how DNA can be used to implement logic circuitry. In the following description, we are referring to the bottom part of the same diagram.
You'll notice that the single DNA strands are called inputs and outputs, while the double strands are the gates. These are not directly equivalent to logic gates, though; in fact, three such gates are needed to construct a single OR gate. The other thing to point out is that, unlike the case with electronic circuits where 0s and 1s are represented by different voltages, 0s and 1s are represented by the absence or presence of a particular single strand of DNA. In part D we can see the OR gate, comprising three DNA gates; with no inputs, no reactions take place so there is no output. Inputs of 0 and 0, therefore, give an output of 0, which is what would be expected of an OR gate. In part E we introduce an input and it's clear that this will react with Gate 1 to generate an output. This is what has happened in part F, and we can also see that the output from Gate 1 will react with Gate 3. This has happened in part G, the single strand being the output of the OR gate.
Inputs of 1 and 0, therefore, have generated an output of 1 and, again, this is what is expected of an OR gate. It is fairly clear that a different input (such as anything:brown:orange) would react with Gate 2 to produce an output, that would in turn react with Gate 3 to generate the same output – and both inputs together would also generate an output.
The above description raises a couple of questions. First, how is the output from one gate routed to the input of another and, second, why is the green:brown:black strand considered an output of the OR gate as a whole, while the intermediate purple:brown:green strand is not? In fact, the answers to both questions are related. DNA gates aren't hard-wired to each other as the gates on a silicon chip are. Instead, the gates and the input and output strands are free to move in solution and will do so by diffusion. A single input strand will, therefore, come into contact with lots of gates, but will only react with the ones with a complementary strand. In our example, therefore, Input 1 could well come into contact with Gate 3, but it won't react since it was designed to react only with Gate 1. In other words, different gates have different input and output strands, even though they all represent a binary 1. This also explains why the presence of the purple:brown:green strand isn't considered to be an output of the complete OR gate. In any practical circuit, the output of the OR gate will be routed to the input of another logic gate, but that logic gate will be
designed to respond to the green:brown:black strand course of action from the alternatives provided in the
and not purple:brown:green. program. The University of Manchester’s novel DNA-
Caltech researchers Erik Winfree and Lulu Qian have already created circuits made from multiple DNA logic gates to perform arithmetic calculations. Their circuit used 130 strands of DNA and was able to calculate the square root of a 4-bit binary number in 6 to 10 hours. More recently, researchers at the University of Washington have used so-called DNA origami to act as scaffolding to anchor the various DNA strands in close proximity to the strands they are designed to react with. This very much reduces the time taken for reacting strands to diffuse towards each other, and has reduced times for simple computations from hours to minutes.

Below Toehold-moderated strand displacement enables logic gates to be constructed from DNA

The fact is, though, that these chemical reactions are never going to come close to reaching the speed of electronic circuits, and fast computation is only going to be achieved by offsetting slow chemical reactions with a massively parallel architecture. Certainly logic circuits can be built this way, but a conventional program still follows a single course of action from the alternatives provided in the program. The University of Manchester's novel DNA-based architecture changes all that.
The university's Professor Ross King likes to give the example of solving a maze to show how this might work. Each time a junction is reached, rather than following one of the two paths at a time as a conventional program would, the code specifies that either path could be taken – and the inherently parallel nature of the architecture enables both to be taken simultaneously. Once these two paths reach further junctions, the computer would find itself following four paths concurrently – and so on, with the number of parallel paths increasing exponentially.
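In conventional software the closest analogue is to advance every open path in lockstep rather than backtracking down one route at a time. The sketch below does exactly that for a small made-up maze (the maze layout is ours, purely for illustration); printing the number of live paths at each step shows the exponential growth King describes.

# Toy illustration of the 'take every branch at once' idea, using an
# invented maze given as junction -> reachable junctions. A real NUTM
# would do this with replicating DNA, not a Python list.
MAZE = {
    "start": ["a", "b"],
    "a": ["c", "d"],
    "b": ["e", "exit"],
    "c": [], "d": [], "e": [],
    "exit": [],
}

paths = [["start"]]                       # one path to begin with
while paths:
    print(len(paths), "concurrent path(s)")
    finished = [p for p in paths if p[-1] == "exit"]
    if finished:
        print("solved:", " -> ".join(finished[0]))
        break
    # every live path forks into one new path per branch at its junction
    paths = [p + [step] for p in paths for step in MAZE[p[-1]]]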
QUICK TIP: Time vs. mass
DNA technology could reduce processing time overheads, but at a different sort of cost. Tiny as DNA molecules are, repeatedly doubling their number, as is required for some tasks, means you'd eventually need more DNA than the mass of the Earth.

Formerly a pipe dream, DNA makes this feasible, as King explained. "To achieve this we utilise DNA's ability to replicate. At any choice point in a computation, our NUTM [Non-deterministic Universal Turing Machine] replicates itself and takes all possible choices."
This DNA research uses Thue strings, which offer a quite different model of computation to the one we're familiar with, as Professor King elaborates. "The Thue string model is a rewriting system. For example, if you see the string 'ac' you can rewrite it as 'ca', and vice versa. The non-determinism comes from the fact that there may be multiple choices of possible rewrites in a string."
Unfortunately, it appears that programming it is by no means a trivial task. "We know from the theory of computer science that the Thue system is universal – that is, it can be used to compute anything any other computer can do. However, that does not mean that programming it is easy; it is even more difficult than a classical Turing Machine. Indeed, neither I nor my colleagues have coded anything significant. It would be an interesting challenge to your readers to emulate the Thue system and to use it to, say, encode the addition of two numbers."
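King's challenge is easy to have a go at in ordinary Python. The sketch below applies rewrite rules at every possible position and explores the resulting strings breadth-first, which is one way to mimic the non-determinism. The unary-addition rule set is our own toy encoding, and we only apply rules left-to-right, which is a simplification of a true, symmetric Thue system; none of this is taken from the Manchester work itself.

# A toy string-rewriting emulator in the spirit of King's challenge.
# Numbers are in unary, so "11+111" (2 + 3) should reduce to "11111".
from collections import deque

RULES = [
    ("1+", "+1"),   # a '1' may hop across the '+', creating genuine choice points
    ("+1", "1"),    # absorbing the '+' performs the addition
]

def rewrites(s):
    """Yield every string reachable from s by one rule application."""
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            yield s[:i] + rhs + s[i + len(lhs):]
            i = s.find(lhs, i + 1)

def run(start, is_answer):
    """Breadth-first search over all non-deterministic rewrite choices."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if is_answer(s):
            return s
        for t in rewrites(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

# The answer is any string with no '+' left in it.
print(run("11+111", lambda s: "+" not in s))   # prints 11111

Run it and it prints 11111, the unary answer to 2 + 3; trying to encode anything more ambitious quickly shows why King says programming such a system is hard.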
So what of the future for NUTMs? Will we ever see them on the desktop? The fact is that DNA reactions will never be fast, but this can be more than offset by the massive parallelism on offer. So, for general-purpose computing, with minimal scope for high levels of parallelism, simple problems would actually take longer. In the elite world of high-performance computing, though, things couldn't be more different. We could envisage a scenario in which DNA-based NUTM co-processors work alongside conventional silicon processors in supercomputers to achieve a massive increase in performance and, at the same time, a huge reduction in energy consumption.

DNA storage
While DNA computers are likely to be niche products, there is evidence that DNA as a storage medium may soon be much more mainstream. In nature, the sequence of nucleotides in each strand of DNA represents the data that describes the very nature of the species and the traits of the individual. But outside a living being, the sequence of DNA could, in theory, represent anything. If we bear in mind that there are four different bases, each base could potentially represent two bits of information. Things aren't quite that simple because DNA reactions aren't 100 per cent repeatable and this would lead to corrupted data; they are slow, so reading and writing would take place at a snail's pace; and currently costs would be astronomical. But methods have been devised to counter the effects of data loss by using redundancy, prices will certainly reduce if and when the technology is commercialised, and a slow access speed isn't a problem for its intended application.
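The raw 'two bits per base' arithmetic is easy to demonstrate in software. The mapping below is a deliberately naive illustration of the idea only; real DNA storage pipelines, including the Microsoft and University of Washington work mentioned below, add error-correcting redundancy and avoid awkward sequences such as long runs of a single base.

# Two bits per base: a minimal (and deliberately naive) mapping from
# bytes to a DNA-style sequence of A, C, G and T.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b"Linux")
print(dna)                      # 20 bases for 5 bytes: 4 bases per byte
assert decode(dna) == b"Linux"  # the mapping is lossless in both directions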
That application is data archiving, and here the benefits of DNA are its longevity and data density. When we recall that DNA testing was a key element in identifying the 'skeleton in a car park' as the remains of Richard III a few years ago, the fact that DNA can survive for hundreds of years, even in non-ideal conditions, is evident. In fact, DNA has been extracted and sequenced from Egyptian mummies and from extinct animals such as mammoths and sabre-toothed cats, which makes the 10-100 year lifetime of optical discs, magnetic storage and flash memory look rather puny by comparison.
Meanwhile, data density is absolutely staggering. Researchers at Microsoft and the University of Washington have encoded 215 petabytes (215,000 terabytes) into a gram of DNA; at this density, all the world's data could be stored in 10 tonnes of DNA, which could fit into a trailer. By way of comparison, it's been estimated that the same data on a stack of CDs would reach beyond the moon.
Doug Carmean of Microsoft Research is widely reported as saying that they aim to have a "proto-commercial system in three years storing some amount of data on DNA in one of our data centers, for at least a boutique application". Indeed, speculation is rife that Microsoft plans to add DNA storage to its cloud-based services in the not too distant future.
So will we shortly have DNA in our desktop PCs for day-to-day work? Probably not, thanks to the speed deficit, but the molecule of life might soon be immortalising our cherished photos and irreplaceable data instead. And let's face it, if we ever do have DNA processors injected into our bloodstream, that will surely be the ultimate in personal computing.

Below DNA storage is already invading popular culture, as demonstrated by Netflix's sci-fi TV show 3%, which showcased protein bonds, similar to DNA, being used to create a vast database on the world's population.
Pi Project
Pi Laser Tag
Pew pew pew! A Utah State student creates a Pi-powered shooting game with potential for future enhancements

Terran Gerratt
Terran is an electrical engineering student at Utah State University and is employed at WiTricity. He likes wakeboarding with his wife.

Terran's first publicly released project uses a server – essentially any old desktop you might have spare – and Raspberry Pi-powered guns to create a fun infrared laser tag game for three players. Terran says the project relies heavily on Wi-Fi, which is why he decided to use the Raspberry Pi Zero W for the 3D-printed guns.

What inspired you to make your own laser tag game?
During the past year or two, I have been more and more interested in home-automation projects. I first created a simple infrared-relay module using an Arduino to control a fan in my house. I then began realising how easy it was to send information through infrared and wanted to take it to the next level. If I could turn on a fan using a dinky remote, then surely I could shoot it from a 3D-printed gun?

What made you decide to use MQTT? Were there other options you considered?
I decided to use MQTT as it was simple to set up and embed in the code already running on each of the devices. It was also a great place to start, as there are numerous resources and examples online that helped me mould the messaging service into a network for keeping track of game statistics. Looking back on the project and realising how I ended up using the messages, using ROS would be an alternative to MQTT as it has a lot of the needed features prepackaged and ready to go.

Can you give us an overview of how the server and guns communicate with each other?
The project actually used two methods of communication. The first was through infrared codes, as the guns needed to 'shoot' each other, while all of the actual game functions and replies were handled using MQTT. At the beginning of each game, a few game preferences could be chosen on the server and then sent to each player.
Once all of the players were logged into the player lobby, the server started the game. Each Pi was assigned a unique infrared signal so the receiver would be able to distinguish which player had sent the infrared code. The tagged player would then send a reply to the tagger to notify that he was tagged. A copy of every message sent between the players was sent to the server, which compiled all of the messages into a report that was displayed at the end of each game.

You mention using Pygubu for creating the UI?
Yes. As this was my first attempt at creating a Python GUI, there was definitely a learning curve that had to be overcome. Using Pygubu to aid in that process was one of the best project decisions I made. The starting GUI was created purely from code, while the game report GUI was created using Pygubu. I finished it in a fraction of the time, with many additional features I would not have known about had I attempted to write it from scratch. The best aspect of Pygubu is the ability to see the adjustments made to a GUI in real time, without having to save, exit and run the code each time.

What was the most challenging aspect of the project?
Configuring and tuning the infrared signals was definitely the most difficult part of the project. I used the Linux Infrared Remote Control (LIRC) library, which took the analogue and hardware side of transmitting signals and put it into software. This was good because I wanted to keep the amount of physical components to a minimum. On the other hand, the library brought a great deal of struggles with it. There were many things the LIRC library provided, but a number of features I had to forego unless I added additional hardware to each Pi.

Are there any things you'd change now, and are you planning to add any more features?
Because of the software limitations of the libraries I used, I would love to create an analogue version of the game. Instead of sending the infrared signals using the LIRC library, a more robust system could be made by adding a few components and sending the signals through hardware.
I also plan to implement additional game modes to add uniqueness and branch out from traditional laser-tag games. It was my intention from the beginning to have multiple game modes. All of the programming was done so that new game modes could easily be added to the existing code.

Like it?
To dive into more detail of Terran's project, head to http://bit.ly/PiLaserTag.

Further reading
This is Terran's first public project; follow him on Instructables or LinkedIn (http://bit.ly/TerranGerratt). Terran says his next project will be to create "a distributed system robot to accomplish mundane tasks around the house, such as automated blinds or light switches."
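Terran's own code isn't printed here, but as a rough idea of what the MQTT side of such a game can look like, the sketch below publishes a 'tagged' event with the paho-mqtt client. The broker address, topic layout and payload format are our own assumptions for illustration, not details taken from the project.

# Minimal sketch of how a gun might report a tag over MQTT (paho-mqtt).
# The broker IP, topic names and JSON payload are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"      # the spare desktop running the Mosquitto broker (assumed address)
PLAYER_ID = "player-1"

client = mqtt.Client(PLAYER_ID)
client.connect(BROKER, 1883)
client.loop_start()          # handle network traffic in a background thread

def report_tag(tagger_id):
    """Publish a 'tagged' event so the tagger and the server both see it."""
    payload = json.dumps({"tagged": PLAYER_ID, "by": tagger_id})
    client.publish("lasertag/events/" + PLAYER_ID, payload, qos=1)

report_tag("player-2")
client.loop_stop()
client.disconnect()

The broker itself can simply be a stock Mosquitto install on a spare machine, which is the role the old desktop plays in Terran's setup.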
Make your own: Terran has included STL files on Instructables so you can 3D-print your own guns, along with a wiring diagram for the various components such as the LEDs, trigger and transmitter/receiver.
Keeping score: Terran installed the Mosquitto MQTT Broker Service on an old desktop to act as a framework between the laser-tag game's server and the Pi-powered guns.
Tutorial Pi gaming controller
Resources
Raspberry Pi
Breadboard
Jumper wires
Tactile push buttons
Pinout chart: https://pinout.xyz

The Raspberry Pi is such a versatile little computer. Having access to the GPIO pins enables us to interface directly with any number of hardware peripherals, and in this case, create our own. Using some basic bits of tech, including a breadboard, some jumper wires and tactile push buttons, we're going to build our own prototype video games controller that'll work with third-party Pi games, but could also come in handy if you're coding games of your own. Our prototype won't be pretty, but there's nothing to say you couldn't 3D-print a case and turn the controller into a legitimate-looking gadget.
The beauty of a project like this is that you don't need to be an electrical engineer; in fact, you don't need to know much more than the basics of an electronic circuit, because all the thinking is done with code in Python. So grab a few wires, some buttons and a breadboard, boot up the Pi and let's jump into Python IDLE.

01 Position buttons
You'll need to decide how many buttons your controller is going to have. We're starting with five – up, down, left and right, plus an OK button.

02 Connect the buttons
The next part can get a little messy. We need to connect the left side of each button to a GPIO pin on the Raspberry Pi (use the pinout chart to help with this). The GPIO pin will provide power and a method of control for our button using Python. The right side of each button needs to be connected to GROUND to complete the circuit. We could plug each cable into a different GROUND pin on the Pi, but in order to keep things tidy and minimise the number of wires between the Pi and the breadboard, we've linked multiple buttons to a row on the breadboard and then wired that to a GROUND pin on the Pi. We've done this once for the top and again for the bottom of the board, so we're only using two GROUND pins in total.
In the image you can see the black and white GROUND cables at the top of the breadboard are connected to the Pi via a blue wire. It'd make sense to colour-coordinate and do the same thing at the bottom.
We've gone with BCM 2, 3, 4, 5 and 6 for the sake of simplicity for this project.

from gpiozero import Button

leftButton = Button(2)

while True:
    leftButton.wait_for_press()
    print("Left button pressed.")

05 Troubleshooting
The most common error we encountered while coding was something along the lines of 'gpiozero.exc.GPIOPinInUse'. This means that we've accidentally used the same GPIO pin number more than once, and Python can't connect to it. Like most errors, it's an easy fix once you know where to look: double-check the BCM numbers in the brackets of each Button.
It's also worth making sure everything is connected on the hardware side. Those jumper wires are only designed for creating breadboard prototypes, so they do tend to pop out on occasion.

07 A functioning controller, part 2
Next, we'll need to add code to recognise the pressing of each button and pass a string to our newly created buttonFun function to let Python know which button we've pressed and print the appropriate text.

08 Keyboard keys
At the moment our buttons function merely to print which button was pressed, but you might want to use this controller with third-party games or software. We can do that by coding each button to represent a letter or function of a regular keyboard. We'd need to install pyautogui (pip install pyautogui) and import it:

from pyautogui import press

09 WASD
Now change each print command to press and assign the appropriate keyboard keys and we have a fully functioning game controller. Changing up, left, down, right to WASD and OK to SPACE creates a controller that would work in most games. Give it a go in Minecraft Pi Edition. Of course the real fun is creating your own game, so grab pygame (pip install pygame) and build something.

def buttonFun(direction):
    if direction == "left": press('a')
    elif direction == "down": press('s')
    elif direction == "up": press('w')
    elif direction == "right": press('d')
    elif direction == "select":
        press('space')

Once you've done this, your setup is complete!
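Not every intermediate step is shown on these pages, so as a rough guide the sketch below is one way the finished controller could be wired up in code, assuming five buttons on BCM pins 2-6; the pin-to-direction mapping is our own choice, not taken from the tutorial.

# One possible way to tie the steps together: five buttons, each mapped
# to a keyboard key via pyautogui (which needs a running desktop session).
from signal import pause
from gpiozero import Button
from pyautogui import press

KEYMAP = {"up": "w", "left": "a", "down": "s", "right": "d", "select": "space"}
PINS = {"up": 2, "left": 3, "down": 4, "right": 5, "select": 6}   # assumed layout

def make_handler(direction):
    def handler():
        press(KEYMAP[direction])       # send the mapped keypress
    return handler

buttons = {}
for direction, pin in PINS.items():
    btn = Button(pin)                  # gpiozero enables the internal pull-up for us
    btn.when_pressed = make_handler(direction)
    buttons[direction] = btn           # keep a reference so the Button object stays alive

pause()                                # wait for button presses until interrupted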
Tutorial Raspberry Pi: Inky pHAT
Resources
Pimoroni Inky pHAT: www.pimoroni.com

01 Set up the hardware
As with all Pimoroni products, setting up your Inky pHAT is easy. Simply remove the product from the packing. Note that it ships with a pre-soldered header so it's ready to go, straight out of the box. Ensure that your Raspberry Pi is turned off, then place the Inky pHAT onto the GPIO pins, with the display in line with the USB ports. Gently push down to secure it into place. The pHAT is designed perfectly to fit the Pi Zero model, creating a slick and slimline hardware setup.

To run this, type python3 hello.py "your name", replacing "your name" with a name of your choice, of course. Try your own to start with! Press Return and the Inky pHAT will begin to flicker as it loads the badge graphic and then adds your name to the display.

04 Display a calendar
One nice example program included in the examples folder is the graphical calendar. This program uses the code now = datetime.datetime.now() to read the current date and time from your OS and store the result.
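The drawing side of the calendar example isn't reproduced here, but the date handling it builds on is just the standard library. As a minimal illustration, the text-mode month grid below is only a stand-in for what the example actually renders to the ePaper display.

# The calendar example starts from the current date; this just shows the
# datetime call it relies on and a few of the fields it can use.
import calendar
import datetime

now = datetime.datetime.now()                 # read the current date and time from the OS
print(now.year, now.month, now.day)
print(calendar.month(now.year, now.month))    # text month grid, a rough stand-in for the ePaper output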
How E-ink works
E-Ink actually uses real ink to display text and images. The display consists of a capsule film which holds a clear fluid containing tiny particles. Electrodes are used to apply a positive or negative electric field, and colour particles with the corresponding charge move to the top of the capsule, making the surface of the 'paper' appear a particular colour.

07 Cleaning the screen
Over time the Inky pHAT will encounter screen burn – a discolouration of areas on the e-paper display. This occurs due to non-uniform use of the pixels. Fortunately, it can be resolved using the cleaner program in the examples folder. This basically displays solid blocks of red, black and white to 'clean' the Inky pHAT's display. In the LXTerminal window type sudo clean.py and press Return. The program takes a little while to complete, but do leave it to finish.

10 Create your own image: part 3
In GIMP, return to the Image menu, then Mode > Indexed. Select the option for a 'custom palette' and click the Palette selection dialogue button at the bottom-right of the window. When the palette window loads, right-click and select the Import Palette option. Now navigate and select the Inky-pHAT.gpl palette file from the download in Step 9. You can export the image as a PNG file. Click 'File', scroll down the menu and select the 'Export as' option. Save and name your file into your folder. A dialogue box will appear with a number of saving options; ensure that you tick 'Save background color', then click 'Export'.
The Text display program starts with the following imports and then prepares the font and the text to show:

import sys
from PIL import ImageFont
import inkyphat
import textwrap

font = ImageFont.truetype(inkyphat.fonts.FredokaOne, 14)
message = "This is an example of text"
txt = textwrap.fill(message, 31)

The display should be able to show around 250 characters before the text becomes unreadable. The last line sets the spacing and is relative to the font size; a value of 30 or 31 renders the text with accuracy, so if you change the font size, remember to adjust this value too. If the text isn't displayed properly, you can adjust the font size and the textwrap.fill() value until it looks correct. You can also change the ink colour to RED.

# x and y, the position of the text block, are set earlier in the full program (step not shown here)
inkyphat._draw.multiline_text((x, y), txt, inkyphat.BLACK, font)
inkyphat.show()

16 Run the program
Once you have completed the Text display program, add a suitable line of text to line six of the code.
Group test
Recovery distributions
Linux is very resilient, but when things go wrong you can use one of these
specialised distributions to repair and rescue a broken installation
ALT Linux Rescue
ALT Linux Rescue has a boot option that'll bring up networking and start the SSH server for remote access to the machine.
Overall: 4
ALT Linux's Rescue edition is designed for experienced sysadmins who need an all-in-one Live environment to fix issues. You won't get much mileage if you aren't familiar with the tools.

MorpheusArch Linux
Besides MorpheusArch, you can use the LinDiag utility on a regular Arch installation as well as on Fedora and Debian.
Overall: 5
MorpheusArch Linux is designed for Arch users who are familiar with the CLI-based data-recovery tools that it bundles. Its assistive LinDiag script isn't very useful unless it's bundled along with the distribution.

SystemRescueCd
A good option if you can use the tools or have the patience to read the docs
SystemRescueCd ships with a minimal Xfce desktop, along with some essential apps such as Firefox, KeePassXC and others.
Overall: 8
A well-rounded recovery solution that'll help resolve most common issues. It has enough docs to guide inexperienced users, while advanced ones will appreciate its customisation options.

Ultimate Boot CD
A CLI-only system that's surprisingly easy to navigate and use
UBCD's menu divides the packaged utilities into categories based on the system area they influence, such as BIOS, HDD and memory.
Overall: 8
Lives up to its name with its ultimate collection of tools and utilities to troubleshoot problems in every component of the system. The lack of a graphical desktop is no impediment.
Review Acer Chromebook Spin 11
Hardware
Above The inclusion of two USB-C ports for data transfer and charging is a welcome touch
It's durable, has decent enough performance, comes with an included stylus and carrying case

much lighter without giving up some of the durability. Ports are also a huge win here – you have two USB-C ports, which handle data transfer and charging. In a Chromebook at this price range, we would have been happy with just one. You've also got two USB 3.0 ports, a microSD card slot and a headphone jack.
The screen also reflects (if you'll excuse the non-pun) the durable nature of the Chromebook, as it's covered by Corning Gorilla Glass, so it should be pretty resistant to cracking or shattering. The touchpad is a sour point, however. The surface is competent, but it's extremely sensitive, not to mention that the plastic finish and tough centre-click was a turn-off. However, it does support gestures, so it's not all bad.
Things start to take a turn when you look at the display. It features a 1,366x768 IPS display, and while viewing angles are quite good, it's not very bright. At this price and form factor an HD display isn't exactly out of the ordinary, but we would have appreciated a brighter panel, if not Full HD resolution.
The speakers, also, are a bit lacklustre; while they do manage to fill the room, there's not all that much in the way of detail in the sound. So the Spin 11 is fine for watching some YouTube videos; just don't expect a pleasant experience while listening to music or watching 'real' films. The Acer isn't a powerhouse, and we weren't expecting it to be.
Still, even with its modest components, Chrome OS continues to prove itself competent for its intended use cases: generally web browsing and word processing.
While it won't be able to keep up with something like the Google Pixelbook, rocking an Intel Celeron processor as it does, it was able to keep up with a decent workload: typing this review with 10 tabs open in Chrome and a music player running in the background. Quite honestly, that's exactly what we wanted to see from this device.
However, one of the most important aspects of a Chromebook is its battery life, and unfortunately the Spin 11 disappointed here. While in our casual use we didn't really need to stress about charging the device, it didn't exactly perform admirably when we looped a local video at 1080p running in VLC: the device died after just 7 hours and 34 minutes.
For a Chromebook, this score is well below the category average. For instance, the similarly configured (and older) Dell Chromebook 13 scored almost double when running the same test, at around 14 hours.
Overall, then, the Acer generally falls in line with similar devices in the US. But when you take the included stylus and carrying case into consideration, it actually starts to look like a bargain.

Pros
An affordable price for a Chromebook which comes with a stylus and carry case. It also boasts a durable frame design that'll last for years.

Cons
The IPS screen isn't as bright as we'd like (and you can forget about Full HD resolution), its battery life is less than stellar, and performance is merely adequate if actually a little lacklustre.

Summary
If you're looking for a cheap laptop for education or for general browsing, and you don't need frivolous features and a high-resolution display, the Acer is a great device. Just bear in mind the lower-than-average battery life.
Overall: 8

Bill Thomas
Review Linux Mint 19 Cinnamon
Distro
Above Taking snapshots of your installation is easy with the built-in Timeshift tool
In addition to a custom beginner-friendly backup utility, the latest release of Linux Mint also features Timeshift. The tool, which was originally introduced to the Mint community with Mint 18.3, is designed to help you create snapshots of your working installation, using either rsync or btrfs. Should you accidentally break the system, such as when you're testing experimental software, you can revert the system to a previous snapshot state. Best of all, you can even automate snapshots on a weekly, monthly or even hourly basis depending on your needs. You can also choose to make only incremental snapshots with rsync, which helps save disk space.
Several essential components such as the Nemo file manager and the Update Manager have undergone improvements. The latter now advises users to deploy all updates as they become available, since system stability and longevity are guaranteed by Timeshift snapshots. Should your installation grow to include packages from different third-party repositories and PPAs, the Update Manager now identifies the source of each of the available updates.
If you want to spruce up the look of your installation, the Extensions utility enables you to add a large number of effects such as wobbly windows, a workspace scroller, a Compiz-like 3D cube and so on to the desktop.
While the distribution still includes Synaptic, Software Manager is the primary application for handling software. Unlike its peers on other distributions, Mint's Software Manager can easily be navigated using just the keyboard, and now supports searching within categories.
With its focus on performance, and the inclusion of a backup and snapshot tool out of the box, Mint 19 is a polished release and should appeal to novices and experienced users alike. Although APT remains the underlying package-management system, the distribution continues to improve its support for Flatpak with the latest release, now featuring single-click install of packages.
Thanks to a long history of stable releases under its belt, Mint has created a distinct identity for itself, which is rare for derivative distributions.

Pros
A strong user community, dedicated devs and plenty of custom apps make this ideal for everyone.

Cons
The distribution is missing several useful applications – such as an IM client – out of the box.

Summary
Linux Mint has a history of producing well-rounded editions. The latest LTS release will be supported until 2023 and features a number of custom tools such as the Software Manager, and useful new additions like the Timeshift snapshot tool for troubleshooting.
Overall: 9

Shashank Sharma
Review Fresh free & open source software
Desktop Shell
Cinnamon 3.8.4
The visual component of the desktop gets an update
Cinnamon is the shell or user interface
of the popular Cinnamon desktop.
This component is responsible for the
various elements you see in the desktop
environment, including panels, menus, hot corners
and such. Cinnamon’s user interface is written in
JavaScript and its core libraries are written in C.
The Cinnamon project is headed by Linux Mint,
but the desktop is developed separately from
the distribution. One of the main reasons for the
desktop’s popularity is that it adheres to the
traditional desktop metaphor with a single panel
located at the bottom of the screen, a design
reminiscent of GNOME 2. The desktop environment
is made of several components such as the Muffin
window manager, the Cinnamon Control Center and
the Cinnamon shell.
Most of the work in this minor new release is behind the scenes, though the effects of some of the changes are visible on the desktop. For example, the network applet no longer displays unmanaged networks, and system settings now sports a keyboard backlight section. The update should have made its way into the repositories of your distribution, so you should switch to it as soon as you get the notification.

Above Don't confuse the Cinnamon Shell with its namesake, the Cinnamon Desktop

Pros
Offers a traditional desktop layout without skipping various modern functionalities.

Cons
Although it's not as demanding as other desktops, you can't run it on low-spec machines.

Great for...
The best of both worlds: traditional layout and modern features.
http://bit.ly/lud_cinnamon
System Optimiser
Stacer 1.0.9
A graphical system monitor and resource optimiser
System monitoring is an essential
task for any user. Although you’ll find
several system monitors in distribution
repositories, Stacer offers the added
benefit of an attractive user interface that’s
intuitive to navigate. Besides monitoring, you can
also use the app to optimise and fine-tune several
aspects of your installation – much like Bleachbit,
but with a much nicer interface.
Stacer has a PPA repository for Ubuntu and also
hosts precompiled RPM and DEB files in addition
to an AppImage for other distributions. The app’s
dashboard displays information about the system
and has dials to depict the activity levels of the
processor, memory and disk, along with bars for
download and upload speeds. You can also configure
the app to alert you in case the readings on any of the dials exceed a defined level.
Use the icons on the left to switch to the different components/areas of the app. You can add and disable startup apps and control other installed services. The app also helps you free up disk space by removing unwanted files from particular locations, such as the Trash and package caches. In the same vein, it can also display a list of all installed apps and enables you to uninstall any of them.

Above You can use Stacer to modify some of GNOME 3's quirks, like missing desktop icons

Pros
An attractive-looking, intuitive app to monitor and tune several aspects of your installation.

Cons
Limited set of monitoring capabilities that don't go much beyond the very basic options.

Great for...
Clearing package caches and controlling services with the minimum of fuss.
http://bit.ly/lud_stacer
Get your listing in our directory
Web Hosting: to advertise here, contact Chris ([email protected] | +44 01225 68 7832, ext. 7832)

Hosting listings (recommended)

Featured host: Netcetera
www.netcetera.co.uk | 03330 439780
Netcetera is one of Europe's leading web hosting service providers, with customers in over 75 countries worldwide.

About us
Formed in 1996, Netcetera is one of Europe's leading web hosting service providers, with customers in over 75 countries worldwide. It is a leading IT infrastructure provider offering co-location, dedicated servers and managed infrastructure services to businesses worldwide.

What we offer
• Managed Hosting: a full range of solutions for a cost-effective, reliable, secure host
• Dedicated Servers: single server through to full racks, with FREE setup and a generous bandwidth allowance
• Cloud Hosting: Linux, Windows, hybrid and private cloud solutions with support and scalability features
• Datacentre co-location: from quad-core up to smart servers, with quick setup and full customisation
Supreme hosting: CWCS Managed Hosting
www.cwcs.co.uk | 0800 1 777 000
CWCS Managed Hosting is the UK's leading hosting specialist. It offers a fully comprehensive range of hosting products, services and support. Its highly trained staff are not only hosting experts; the company is also committed to delivering a great customer experience and is passionate about what it does.
• Colocation hosting • VPS • 100% Network uptime

SSD web hosting: Bargain Host
www.bargainhost.co.uk | 0843 289 2681
Since 2001, Bargain Host has campaigned to offer the lowest-priced possible hosting in the UK. It has achieved this goal successfully and built up a large client database which includes many repeat customers. It has also won several awards for providing an outstanding hosting service.
• Shared hosting • Cloud servers • Domain names
Free Resources
Welcome to FileSilo!
Download the best distros, essential FOSS and all our tutorial project files from your FileSilo account

what is it?
Every time you see this symbol in the magazine, there is free online content that's waiting to be unlocked on FileSilo.

why register?
• Secure and safe online access, from anywhere
• Free access for every reader, print and digital
• Download only the files you want, when you want

1. unlock your content
Go to www.filesilo.co.uk/linuxuser and follow the instructions on screen to create an account with our secure FileSilo system. When your issue arrives or you download your digital edition, log into your account and unlock individual issues by answering a simple question based on the pages of the magazine for instant access to the extras. Simple!

2. enjoy the resources
You can access FileSilo on any computer, tablet or smartphone device using any popular browser. However, we recommend that you use a computer to download content, as you may not be able to download files to other devices. If you have any problems with accessing content on FileSilo, take a look at the FAQs online or email our team at [email protected].
Log in to www.filesilo.co.uk/linuxuser
tutorial code
All the code for the Azure PowerShell
tutorial, plus computer security.
Short story Stephen Oram
Follow us: facebook.com/LinuxUserUK | Twitter: @linuxusermag
Near-future fiction
Obnoxious
About Stephen Oram
Stephen writes near-future science fiction. He's been a hippie-punk, religious-squatter and a bureaucrat-anarchist; he thrives on contradictions. He has published two novels, Quantum Confessions and Fluence, and is in several anthologies. His recent collection, Eating Robots and Other Stories, was described by the Morning Star as one of the top radical works of fiction in 2017. http://stephenoram.net

'It's embarrassing,' said Nialle as he put his empty coffee cup down.
She picked up his cup and put it in the sink. 'Yeah, but it's a seriously cool jacket,' she said.
He shrugged and lifted the wearable tech jacket from its hook. He turned to kiss her goodbye, brushing the hair from in front of her face and kissing her hand by mistake as she also tried to brush it away.
She chuckled. 'You'll be glad of it once you're on that crowded train.'
She was right, as always, and he knew it. But that didn't make him cringe any less at the thought of using it. He shouted as he closed the front door, 'See you tonight.'
She shouted something back, but he didn't catch it.
At the station, as she'd predicted, the platform was crammed full of commuters standing patiently. A halo of irritation and early morning grumpiness hung silently above them.
He stepped into a tiny space between two headphone-wearing men. They shuffled a little further forward and, although it was less than half a step, they still felt the need to scowl and make their annoyance known.
The train was due, and slowly but surely they were nudged closer and closer to each other as more commuters arrived. He'd become expert over the past few months at judging the number of people on the platform and how many of them could fit onto a train. He looked around; he reckoned he would just about get on.
A man to his left took a step backwards as the train appeared from around the corner. Immediately the crowd pushed him back into his place; there was no room for slacking at this stage of the game.
The doors beeped and opened. The crowd moved forward gently and politely, but the undercurrent of competition was palpable. He edged forward in unison with his fellow sufferers. He could feel the tension rising around him as the train seemed to be filling up quicker than they were moving forward. The man in front of him stepped on and paused in the doorway. Nialle could see some space in front of the stalled man so he grabbed the handrail and pulled himself up, one foot in and one foot out.
He touched the man's back and he moved just enough for Nialle to get both feet onto the train. The doors beeped and closed behind him. It was a crush, but he knew that the seasoned travellers would have made themselves slightly larger when everyone was trying to board so they could now relax and create themselves some precious space.
This was his cue.
He pinched the corner of his jacket collar.
The crowd near him shuffled a little and started to move away. Hostile looks were thrown his way, and there was much tutting and some occasional under-the-breath swearing.
The wearable technology was emitting a strong scent of stale body odour, creating enough space around him to make the embarrassment worth it.
He smiled; once again, his jacket had done its job.

NEXT ISSUE ON SALE 23 August
Open Source Storage | Purism's mobile OSes | Firewalls