Back To Basics
Hype-free Principles for Software Developers
Jason Gorman
ABOUT THE AUTHOR
ABOUT CODEMANSHIP
You can find out more about Codemanship training and coaching at
www.codemanship.com
INTRODUCTION
SUMMARY OF PRINCIPLES
SOFTWARE SHOULD HAVE TESTABLE GOALS
CLOSE CUSTOMER INVOLVEMENT IS KEY
SOFTWARE DEVELOPMENT IS A LEARNING PROCESS
DO THE IMPORTANT STUFF FIRST
COMMUNICATING IS THE PRINCIPAL ACTIVITY
PREVENTION IS (USUALLY) CHEAPER THAN CURE
SOFTWARE THAT CAN’T BE PUT TO USE HAS NO VALUE
INTERFACES ARE FOR COMMUNICATING
AUTOMATE THE DONKEY WORK
GROW COMPLEX SOFTWARE USING THE SIMPLEST PARTS
TO LEARN, WE MUST BE OPEN TO CHANGE
ABOUT CODEMANSHIP
While the media often refers to the start of the “digital age” when multimedia applications and hypertext took off in the late 1980s, it’s worth remembering that people have been programming digital electronic computers since they were invented during the Second World War.
Software development’s not quite the lawless and anarchic wild frontier
people make it out to be. Developers today have seven decades of
practical experience of writing software under commercial pressures to
draw on, and there are many insights that have been built up over that time
that an aspiring young programmer needs to know.
Changes in our economy, big rises in university fees and high youth
unemployment are leading more and more of us to the conclusion that
apprenticeships may be the best route into software development careers.
I had to read dozens and dozens of books and try a whole range of
approaches to see through all that smoke and begin to make out the shapes
of real insights into software development.
That was a journey of over a decade. The mist really didn’t clear until
about 2002 for me. I felt I wasted a lot of time wading through what
turned out to be marketing hype – just a whole sea of buzzwords and
brand names – to get at those few important insights and tie it all together
in my own head into a coherent whole.
The last decade has been a process of reapplying those insights and
projecting them on to the hype, so that when I coach someone in what
we’re calling “Test-driven Development” these days, for example, I know
what it is we’re really doing. More importantly, I know why.
Don’t get me wrong; there’s nothing inherently bad about DSDM, XP,
Scrum, Lean, Kanban, RUP, Cleanroom or many of the other
methodologies on offer. The problem is that when we master development
in such specific terms, we can miss the underlying principles that all
software development is built on. And when our favourite method falls out
of favour with employers, we risk becoming obsolete along with it if we
can’t adapt these foundational principles to a new fashionable
methodology.
We can also very easily end up missing the point. The goal is not to be
“Agile”, the goal is to be open and responsive to change. The goal is not to
be “test-driven”, but to drive out a simple, decoupled design and to be able
to re-test our software quickly and cheaply.
Why?
Most teams don't know why they're building the software they're building.
Most customers don't know why they're asking them to either.
By all means, if it's your time and your money at stake, play to your heart's
content. Go on, fill your boots.
Failing to understand the problem we're trying to solve is the number one
failure in software development. It stands to reason: how can we hope to
succeed if we don't even know what the aim of the game is?
It will always be the first thing I test when I'm asked to help a team. What
are your goals, and how will you know when you've achieved them (or are
getting closer to achieving them)? How will you know you're heading in
the right direction? How can one measure progress on a journey to
"wherever"?
Teams should not only know what the goals of their software are, but
those goals need to be articulated in a way that makes it possible to know
unambiguously if those goals are being achieved.
We should start with the why, and figure out what features, properties or qualities our software will need to achieve those goals.
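To make that concrete, here's a minimal sketch - in Java, with JUnit - of what a testable goal might look like. The MembershipStats class and the target of 200 active members are entirely hypothetical; the point is that a vague ambition like "grow the membership" becomes unambiguous once it's expressed as a number a test can check against real usage data.

import static org.junit.Assert.*;
import org.junit.Test;

public class MembershipGoalTest {

    // Hypothetical stand-in for whatever usage data the real system collects.
    static class MembershipStats {
        int activeMembers() {
            return 250; // in real life, this would query live data
        }
    }

    // The vague goal "grow the membership" restated as a measurable
    // target: at least 200 active members.
    @Test
    public void membershipTargetHasBeenReached() {
        assertTrue(new MembershipStats().activeMembers() >= 200);
    }
}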
Not having the goals clearly articulated has a knock-on effect. Many other
ills reported in failed projects seem to stem from the lack of testable goals;
most notably, poor reporting of progress.
But also, when the goals are not really understood, people can have
unrealistic expectations about what the software will do for them. Or
rather, what they'll be able to do with the software.
There's also the key problem of knowing when we're "done". I absolutely
insist that teams measure progress against tested outcomes. If it doesn't
pass the tests, it's 0% done. Measuring progress against tasks or effort
leads to the Hell of 90% Done, where developers take a year to deliver
90% of the product, and then another 2 years to deliver the remaining
90%. We've all been there.
But even enlightened teams, who measure progress entirely against tested
deliverables, are failing to take into account that their testable outcomes
are not the actual end goals of the software. We may have delivered 90%
of the community video library's features, but will the community who use it actually be better off?
It's all too easy for us to get wrapped up in delivering a solution and lose
sight of the original problem. Information systems have a life outside of
the software we shoehorn into them, and it's a life we need to really get to
grips with if we're to have a hope of creating software that "delights".
In the case of our community video library, if there's a worry that users
could be unwilling to donate popular titles, we could perhaps redesign the
system to allow users to lend their favourite titles for a fixed period, and
offer a guarantee that if it's damaged, we'll buy them a new copy. We
could also offer them inducements, like priority reservations for new titles.
All of this might mean our software will have to work differently.
The customer is someone who holds the power - the ultimate power - to
decide what the money gets spent on, and whether or not they're happy
with what the money got spent on.
They shouldn't need to check with a higher authority. If they do, then
they're not the customer.
We need to distinguish between customers and people who can often get
confused with customers.
As we'll discuss in the next post, feedback cycles are critical to software
development. Arguably the most important feedback cycle is the one that
exists between you - the developers - and the customer.
1. CHAOS report - http://www.projectsmart.co.uk/docs/chaos-report.pdf
What can often happen is that the real customer's far too busy and
important to spend any time with you, so they delegate that responsibility
to someone who works for them. This is often signified by months or
years of things appearing to go very well, as their proxy gives you
immediate feedback throughout the process.
It usually hits the rocks when the time finally comes for your software to
be put in front of the real customer. That's when we find out that deciding what the software should be can't be left to someone else. It's like paying
someone to tell you if you like your new hair style.
Teams can waste a lot of time and a lot of money chasing the pot of gold
at the end of the wrong rainbow.
You need to find out who your customer really is, and then strap them to a
chair and don't let them go until they've given you meaningful answers.
Here's the thing. Most customers are actually paying for the software with
other people's money - shareholders, the public, a community etc. They
have a professional responsibility to be a good customer. A good customer
takes an active interest in how the money's spent and what they get for that
money.
It doesn't mean you need to take creative control away from writers,
directors, actors, designers and others involved in making the movie. It
just means that you are aware of how it's going, aware of what the money's
being spent on, and in an informed position to take the ultimate decisions
about how the money's spent.
Everything in life is, to some degree, an experiment.
With each attempt, we learn and we're able to do it better the next time.
This is a fundamental component of our intelligence. We're able to train
ourselves to do extraordinary - sometimes seemingly impossible things -
by doing them over and over and feeding back in the lessons from each
attempt so we can improve on it in the next.
When we create and innovate, we're doing things we haven't done before.
And when we're doing something for the first time, we're not likely to do it as well as we would on the second attempt, or the third, fourth or fifth.
Now, I don't know about you, but personally I've got a bit more pride than
to make my customer pay for a crappy omelette.
And we don't really start from scratch with every iteration, like we would
with omelettes. Typically, we take the program code from one iteration
and make the necessary changes and tweaks to produce a new, improved
version. Unless the first attempt was so off-the-mark that we'd be better
off throwing it away and starting again. Which does happen, and why it's
highly advisable not to use all your eggs in that first omelette (so to
speak).
The one thing we should never do is assume we can get it right first time.
We won't. Nobody ever does.
An important thing to remember is that the shorter the feedback cycles are
when we iterate our designs, the faster we tend to learn and the sooner we
converge on a good solution.
So we learn faster and we learn better when we rapidly iterate and change fewer things in each iteration.
Fire!
Not really. But if there actually was a fire, and you had just
seconds to grab a handful of items from your home before
running into the winter night in your underpants, would you just
grab things randomly?
There's a risk you'll end up in the freezing night air having managed to
save only the TV remote and a box of Sugar Puffs. Not quite as useful in
that situation as, say, your wallet and your car keys. You'd feel pretty
stupid.
So just think how stupid you'd feel if you only had 3 months to create a
piece of working software and, when the time and the money ran out, you
hadn't got around to incorporating the handful of key features that would
make the whole effort worthwhile.
Some features will be of more use and more value to your customer and
the end users than others.
Studies like a recent one on menu item usage in Mozilla's Firefox web
browser 2 show that some software features are used much more than
others. We see this kind of distribution of feature usage in many different kinds of application.
If I was leading a team building a web browser like Firefox, I would want
to have "Bookmark this page" working before I worried about "Firefox
Help".
When you have a close working relationship with the customer, and
unfettered access to representatively typical end users to ask these kinds of
questions, it becomes possible to more effectively prioritise and tackle the
more important problems sooner and leave the less important problems for
later.
2. http://blog.mozilla.org/metrics/2010/04/23/menu-item-usage-study-the-80-20-rule/
Hello there.
But when you write a program, the compiler isn't the only target of your
communication. Programs have to be read and understood by
programmers, too. So, as well as learning how to write programs that
make sense to machines, we have to write them so that they make sense to
humans, too.
Various studies, like one conducted by Bell Labs 3, estimate that developers spend anywhere between 50% and 80% of their time reading and understanding code. Understanding code is actually where we spend most of our time.
We need to understand code so that we can change it, and change it will. When code fails to communicate clearly to programmers, that code becomes harder - and therefore more expensive - to change.

3. http://onlinelibrary.wiley.com/doi/10.1002/bltj.2221/abstract
This can lead to evils such as unnecessary duplication and software having
multiple design styles, making it harder to understand. It's also entirely
possible - and I've been unlucky enough to witness this a few times - to
find out, when all the pieces are wired together, that the thing, as a whole,
doesn't work. (Yet another example of how the high "interconnectedness"
of software can bite us if we're not careful.)
It's therefore very important to try to ensure this doesn't happen. Over the
years, we've found that various things help in this respect.
It's also a very good idea for teams to integrate their various different
pieces very frequently, so we can catch misunderstandings much earlier
when they're easier to fix.
I might give you a vague statement like "I want a tall horse", and to firm
up our shared understanding of what we mean by "tall", you could show
me some horses and ask "Is this tall enough? Too tall? Too short?"
It's also often the case that our customer doesn't know precisely what they
mean, either, and so examples can be a powerful tool in their learning
process. They might not know exactly what they want, but they might
know it when they see it.
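Those examples needn't stay as conversation; they can be pinned down as executable tests. Here's a minimal sketch in Java with JUnit of the "tall horse" examples captured that way - the Horse class and the 16-hands threshold are invented purely for illustration:

import static org.junit.Assert.*;
import org.junit.Test;

public class TallHorseTest {

    // Hypothetical domain class, defined inline so the sketch stands alone.
    static class Horse {
        private final double heightInHands;
        Horse(double heightInHands) { this.heightInHands = heightInHands; }
        // The shared understanding we arrived at: "tall" means 16 hands or more.
        boolean isTall() { return heightInHands >= 16.0; }
    }

    @Test
    public void seventeenHandsIsTallEnough() {
        assertTrue(new Horse(17.0).isTall());
    }

    @Test
    public void fourteenHandsIsTooShort() {
        assertFalse(new Horse(14.0).isTall());
    }
}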
Life is full of examples of how it can pay dividends later when we take more care up front to avoid problems.
Taking a moment to check we've got our keys tends to work out
cheaper than paying a locksmith to let us back into our house. Taking a
moment to taste the soup before we send it out of the kitchen tends to
work out cheaper than refunding the customer who complained about it.
Taking a moment to check the tyres on our car before we set off tends to
work out much, much cheaper than the potential consequences of not
doing so.
We can grossly underestimate how much time - and therefore, how much
of the customer's money - we spend fixing avoidable problems.
Decades of studies 4 have shown beyond any reasonable doubt that the
effort required to fix errors in software grows exponentially larger the
longer they go undiscovered.
A basic programming error might cost 10-20 times as much to fix after the
software's released as it would had the programmer caught it as soon as
they'd written that code.
4. http://www.research.ibm.com/haifa/projects/verification/gtcb/gtcb_v3_0_2_presentation/index4.htm
Or, to put it more bluntly, teams that take more care tend to go faster.
This is one of life's rare instances of allowing us to have our cake and eat
it. We can produce more reliable software (to a point), and it can actually
cost us less to produce it.
The trick to this is to look for mistakes sooner. Indeed, to look for them as
soon as we possibly can after making the mistakes.
Those examples can become tests for the software that we can use as we're
writing it to make sure it conforms to what the customer expects.
5. http://www.stevemcconnell.com/articles/art04.htm
In order for software to be correct, not only have all the individual pieces
got to be correct by themselves, but all those pieces need to work together
correctly.
This is why good teams tend to integrate their individual pieces very
frequently, so any slip-ups on the way different pieces interact with each
other are caught as early as possible.
If we test as we code, and break our code down into the smallest, simplest
pieces, it becomes possible to work in very short code-and-test cycles of
just a few minutes each. This level of focus on the reliability of potentially
every individual function (or even every line of code) can lead to software
that has very few errors in it by the time it's ready to be put into use.
And studies have shown that such a focus throughout development can
allow us to create better software at no extra cost. Hurray for our side!
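To give a flavour of what one of those few-minute code-and-test cycles might produce, here's a minimal sketch in Java with JUnit. The LoanPeriod class and its 14-day rule are hypothetical; what matters is that each tiny piece of behaviour gets a test the moment it's written:

import static org.junit.Assert.*;
import org.junit.Test;

public class LoanPeriodTest {

    // Hypothetical class for a community video library's lending rules.
    static class LoanPeriod {
        private final int lengthInDays;
        LoanPeriod(int lengthInDays) { this.lengthInDays = lengthInDays; }
        boolean isOverdue(int daysOnLoan) { return daysOnLoan > lengthInDays; }
    }

    @Test
    public void loanIsNotOverdueOnTheLastDayOfThePeriod() {
        assertFalse(new LoanPeriod(14).isOverdue(14));
    }

    @Test
    public void loanIsOverdueTheDayAfterThePeriodEnds() {
        assertTrue(new LoanPeriod(14).isOverdue(15));
    }
}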
Finally, to the "(usually)" in the title: we find there are limits to this
principle. Yes, it pays to check the tyres on your car before you set off. It
can significantly reduce the risk of an accident. But it cannot remove the risk entirely.
But if the dangers presented by potential tyre failures are much higher -
for example, if that car is about to race in the Grand Prix - you may need
to go further in ensuring the safety of those tyres. And in going further, the
costs may start to escalate rapidly.
But here's a warning. Too many teams misjudge where they lie in the
grand scheme of care vs. cost. Too many teams mistakenly believe that if
they took more care, the cost would go up. (Too many teams think they're
McLaren, when, in reality, they are Skoda.)
It's very unlikely that your team is already doing enough to catch mistakes
earlier, and that you wouldn't benefit from taking a bit more care. Most of
us would benefit from doing more.
Here's a little ditty that tends to get left out when we discuss the principles of software development.
You cannot eat a meal that's still in the restaurant's kitchen in pots and
pans. You cannot listen to a pop song that's sitting on the producer's hard
drive in a hodge-podge of audio files and MIDI. You cannot drive a car
that's still on the assembly line.
And you cannot use software that's still just a bunch of source code files
sitting on different people's computers.
Software, like food, music and cars, costs money to make. In the majority
of cases where teams are doing it commercially, that's someone else's
money. If our ultimate goal is to give them their money's-worth when we
write software, we really need to bear in mind that the value locked in the
code we've written cannot be realised until we can give them a working
piece of software that they can then put to good use.
Typically, the sooner the customer can get their hands on working
software that solves at least part of their problem, the better for them and
therefore for us, since we all share that goal.
Teams that have put insufficient care into checking that the software
works - that is, it does what the customer needs it to do, and does it
reliably enough - will as likely as not find that before their software can be
released, they have to go through a time-consuming process of testing it
and fixing any problems that stand in the way of making it fit for its
purpose.
It's the customer's problem we're trying to solve, and the customer's money
we're spending to solve it. Common sense dictates that the customer
should decide when to release the software to the end users.
Basic Principle #5 states that the principal activity in software development is communicating.
Most computer users are familiar with Graphical User Interfaces. These
present users with friendly and easy-to-understand visual representations
of concepts embodied by the software (like "file", "document", "friend"
and so on) and ways to perform actions on these objects that have a well-
defined meaning (like "file... open", "document... save" and "friend... send
message").
Interface design is a wide topic, but let's just cover a few key examples to
help illustrate the point.
Ideally, it's the latter, since the whole point of our software is to enable the
user to communicate with the computer. So an interface needs to make
sense to the user. We need to strive to understand the user's way of
looking at the problem and, wherever possible, reflect that understanding
back in the design of our interface.
Interfaces that users find easy to understand and use are said to be
intuitive.
In reality, some compromise is needed, because it's not really possible yet
to construct computer interfaces that behave exactly like the real world.
But we can get close enough, usually, and seek to minimise the amount of
learning the end users have to do.
Another basic rule is that interfaces need to make it clear what effect a
user's actions have had. Expressed in terms of effective communication,
interfaces should give the user meaningful feedback on their actions.
It really bugs me, as someone who runs a small business, when I have to
deal with people who give misleading feedback or who give no feedback
at all when we communicate. I might send someone an important
document, and it would be very useful to know that the document's been
received and that they're acting on it. Silence is not helpful to me in
planning what I should do next. Even less helpful is misleading feedback,
like being told "I'll get right on it" when they are, in fact, about to go on
holiday for two weeks.
If I delete a file, I like to see that it's been deleted and is no longer in that
folder. If I add a friend on a social network, I like to see that they're now
in my friends list and that we can see each other's posts and images and
wotnot and send private messages. When I don't get this feedback, I
worry. I worry my action may not have worked. I worry that the effect it had wasn't the one I intended.
Which leads me on to a third basic rule for interface design. Because it's
not always possible to make interfaces completely intuitive, and because
the effect of an action is not always clear up front, users are likely to make
the occasional boo-boo and do something to their data that they didn't
mean to do.
Oops. Important file gone. In the days before the Recycle Bin, too. The
one button they didn't have was the one I really, really needed at that point
- Undo!
Interfaces that allow users to undo mistakes are said to be forgiving, and
making them so can be of enormous benefit to users.
When actions can't be undone, the kindest thing we can do is warn users
before they commit to them.
Another way we can protect users is by presenting them only with valid
choices. How annoying is it when an ATM prompts you to withdraw £10,
£30, and £50, and when you select one of those options you get a message
saying "Only multiples of £20 available". Like it's your fault, somehow!
Interface design should clearly communicate what users can do, and
whenever possible should not give them the opportunity to try to do things they can't.
Similarly, when users input data, we should protect them from inputting
data that would cause problems in the software. If the candidate's email
address in a job application is going to be used throughout the application
process, it had better be a valid email address. If you let them enter
"wibble" in that text box, the process is going to fall over at some point.
If, according to the rules of our software, there's no way of knowing that
the user didn't intend to do that, then we need to be forgiving. If the rules
say "that's not allowed in these circumstances", then we should be strict.
One final example, going back to this amazingly well-designed GUI with
the rabbit Delete button. On the main toolbar, it was a rabbit. But there
was also a Delete button on the individual File dialogue, which sported a
picture of an exclamation mark. So having figured out once that "rabbit =
delete", I had to figure it out again for "exclamation mark = delete". Boo!
Hiss! Bad interface doggy - in your basket!
I don't know about you, but I'm not a big fan of mindless, repetitive tasks.

Take testing, for example. An averagely complicated piece of software can take days to test thoroughly by hand.
Chances are, though, that it won't be the only time we need to perform
those tests. If we make any changes to the software, there's a real chance
that features that we tested once and found to be working might have been
broken. So when we make changes after the software's been tested once, it
will need testing again. Now we're breaking a real sweat!
If time's tight, you may choose to write more automated tests for parts of
the software that present the greatest risk, or have the greatest value to the
customer.
Automating tests can require a big investment, but can pay significant
dividends throughout the lifetime of the software. Testing that might take
days by hand might only take a few minutes if done by a computer
program. You could go from testing once every few weeks to testing
several times an hour. This can be immensely valuable in a learning
process that aims to catch mistakes as early as possible.
Basic Principle #7 states that software that can't be put to use has no value.
Here's another obvious truism for you: while software's being tested, we
can't be confident that it's fit for use.
Or, to use more colourful language, anyone who releases software before
it's been adequately tested is bats**t crazy.
If it takes a long time to test your software, then there'll be long periods
when you don't know if the software can be put to use, and if your
customer asked you to release it, you'd either have to tell them to wait or
you'd release it under protest. (Or just don't tell them it might not work and
brace yourself for the fireworks - yep, it happens.)
If we want to put the customer in the driving seat on decisions about when
to release the software - and we should - then we need to be able to test the
software quickly and cheaply so we can do it very frequently.
Imagine, say, a Java web application. To put it into use, we might have to
compile a bunch of Java program source files, package up the executable
files created by compilation into an archive (like a ZIP file) for deploying
to a Java-enabled web server like the Apache Foundation's Tomcat. Along
with the machine-ready (well, Java Virtual Machine-ready) executable
files, a bunch of other source files need to be deployed, such as HTML
templates for web pages, and files that contain important configuration
information that the web application needs. It's quite likely that the
application will store data in some kind of structured database, too.
Making our application ready for use might involve running scripts to set
up this database, and if necessary to migrate old data to a new database
structure.
This typical set-up would involve a whole sequence of steps when doing it
by hand. We'd need to get the latest tested (i.e. working) version of the
source files from the team's source code repository. We'd need to compile
the code. Then package up all the executable and supporting files and
copy them across to the web server (which we might need to stop and
restart afterwards.) Then run the database scripts. And then, just to be sure,
run some smoke tests - a handful of simple tests just to "kick the tyres", so
to speak - to make sure that what we've just deployed actually works.
And if it doesn't work, we need to be able to put everything back just the
way it was (and smoke test again to be certain) as quickly as possible.
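As a taste of how that last step might be automated, here's a minimal smoke-test sketch in Java using only the standard library. The URL is hypothetical - substitute your own deployment address - and a real suite would kick more tyres than just the home page:

import java.net.HttpURLConnection;
import java.net.URL;

public class SmokeTest {

    public static void main(String[] args) throws Exception {
        // Hypothetical address of the freshly deployed web application.
        URL home = new URL("http://localhost:8080/ourapp/");
        HttpURLConnection connection = (HttpURLConnection) home.openConnection();
        connection.setConnectTimeout(5000);

        int status = connection.getResponseCode();
        if (status != 200) {
            System.err.println("Smoke test FAILED: HTTP " + status);
            System.exit(1); // non-zero exit lets a deployment script trigger a rollback
        }
        System.out.println("Smoke test passed: application is up.");
    }
}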
There are many other examples of boring, repetitive and potentially time-
consuming tasks that developers should think about automating - like
writing programs that automatically generate the repetitive "plumbing"
code that we often have to write in many kinds of applications these days
(for example, code that reads and writes data to databases can often end up
looking pretty similar, and can usually be inferred automatically from the
data structures involved).
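Here's a minimal sketch of that idea in Java, using reflection to infer an SQL INSERT statement from an object's fields. The Member class is hypothetical, and the naive quoting would never survive real data (no escaping, no type handling) - it's purely to show the donkey work being inferred rather than hand-written:

import java.lang.reflect.Field;

public class InsertGenerator {

    // Hypothetical domain class standing in for any data structure we persist.
    static class Member {
        String name = "Alice";
        String email = "alice@example.com";
    }

    // Builds an INSERT statement from an object's declared fields.
    static String insertFor(Object obj) throws IllegalAccessException {
        Class<?> type = obj.getClass();
        StringBuilder columns = new StringBuilder();
        StringBuilder values = new StringBuilder();
        for (Field field : type.getDeclaredFields()) {
            field.setAccessible(true);
            if (columns.length() > 0) {
                columns.append(", ");
                values.append(", ");
            }
            columns.append(field.getName());
            values.append("'").append(field.get(obj)).append("'");
        }
        return "INSERT INTO " + type.getSimpleName()
                + " (" + columns + ") VALUES (" + values + ");";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(insertFor(new Member()));
        // INSERT INTO Member (name, email) VALUES ('Alice', 'alice@example.com');
    }
}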
One thing I learned years ago is that when life is simpler, I tend to
get more done. Other people make themselves busy, filling up
their diaries, filling out forms, taking on more and more
responsibilities and generally cluttering up their days.
When life gets more complicated, we not only open ourselves up to a lot
of unnecessary effort, but we also end up in a situation where there are a
lot more things that can go wrong.
Although I'm lazy, I actually have to work quite hard to keep my life
simple. But it's worth it. By keeping things simple and uncluttered, it
leaves much potential to actually do things. In particular, it leaves time to
seize opportunities and deal with problems that suddenly come up.
And waddayaknow? It turns out that much of the joy and fulfillment that
life has to offer comes through learning and adapting, not through
doggedly sticking to plans.
While we must strive for the simplest software, many problems are just
darn complicated. There's no avoiding it.
Cities are also necessarily very complicated. But my house isn't. I don't
need to understand how a city works to deal with living in my house. I just
need to know how my house works and how it interacts with the parts of
the city it's connected to (through the street, through the sewers, through
the fibre-optic cable that brings the Interweb and TV and telephone, etc.)
The overall design of a city emerges through the interactions of all the
different parts. We cannot hope to plan how a city grows in detail at the
level of the entire city. It simply won't fit inside our heads.
And then we can constrain the overall design by applying the customer's
tests from the outside. So, regardless of what internal design emerges, as a
whole it must do what the customer requires it to do, while remaining
simple enough in its component parts to accommodate change.
If there's one thing we can be certain of in this crazy, mixed up world,
it's that we can be certain of nothing.
This final post - putting aside my feeble joke - seeks to elevate change to a first-order concern of successful software development. It deserves its own principle.

(Photo caption: Lily Strugnell (108), World's Oldest Facebook User)
If we're not able to accommodate change, then we're unable to learn, and
therefore less likely to succeed at solving the customer's problems.
When a business can't change the way they work, they struggle to adapt to
changing circumstances, and become less competitive.
Meanwhile, competitors can take those same lessons and come up with a
much better product. There are thousands of new businesses out there
ready, willing and able to learn from your mistakes.
If we automate our repeated tests, this can also make a big difference. One
of the risks of making a change to a piece of working software is that we
might break it. The earlier we can find out if we've broken the software,
the cheaper it might be to fix it.
As well as making our code simple and easy to understand, we also need
to be vigilant for duplication and dependencies in our code.
Too many professional software developers have a fear of change, and too
many teams organise themselves around the principle of avoiding it if they
can.