A Practical Introduction to Security and Risk Management
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying, recording, or by any information storage and retrieval
system, without permission in writing from the publisher.
Newsome, Bruce.
A practical introduction to security and risk management : from the global to the personal / Bruce
Newsome.
Detailed Contents
Chapter 3. Risk
Defining a Risk
Describing a Risk
Categorizing Risks
Negative and Positive Risks
Pure and Speculative Risks
Standard and Nonstandard Risks
Organizational Categories
Levels
Higher Functional Types
Simple Statistical Calculations and Parameters
Formulae for Risk
Predictable Return or Event
Expected Return
Program Evaluation and Review Technique (PERT) Expected Return
Range of Contingencies
Range of Returns
Risk Efficiency
Analyzing and Assessing Risks
Importance
Placing Risk Analysis and Risk Assessment
Risk Analysis
Risk Assessment
Sources of Risk Assessments
Summary
Questions and Exercises
References
Index
About the Author
Part I
Analyzing and Assessing Security and Risks
The chapters in this part of the book will help you understand, analyze, and
assess security and capacity (Chapter 2); risk (Chapter 3); hazards,
threats (the sources of negative risks), and contributors (the sources of
positive risks) (Chapter 4); target vulnerability and exposure (Chapter 5);
probability and uncertainty (Chapter 6); and events and returns (Chapter 7).
In the process, readers will learn how different advocates and authorities
contest the definitions, analysis, and assessments, how different activities,
operations, and missions face imperfect trade-offs that are exacerbated by
poor analysis and assessment, and how some simple rules and practical
techniques dramatically improve understanding and functionality.
CHAPTER 1
One private study found that Fortune 1000 companies with more advanced
management of their property risks produced earnings that were 40% less
volatile, while the other companies were 29 times more likely to lose
property and suffered losses from natural causes that were 7 times costlier
(FM Global, 2010, p. 3).
This Book
The aim of this book is to impart practical knowledge of, and practical skills
in, the management of security and risk. The managerial skill set combines
analytical skills for understanding the sources of risk, basic mathematical
methods for calculating risk in different ways, and more artistic skills for
making judgments and decisions about which risks to control and how to
control them.
This book introduces the skills and gives the reader different tools and
methods to choose from. These skills are multilevel and interdisciplinary.
Readers are shown not only, for instance, how to defend themselves against
terrorists but also how to identify strategies for terminating the causes of
terrorism or for co-opting political spoilers as allies.
In this book, readers will learn how to analyze, assess, and manage security and risks.
Part 1 describes how to analyze and assess security and risks, tracing them
back to their causes and sources through the relevant concepts: security and
capacity (Chapter 2), risk (Chapter 3), hazards, threats, and contributors
(Chapter 4), target exposure and vulnerability (Chapter 5), uncertainty and
probability (Chapter 6), and returns (Chapter 7).
Part 2 describes how to manage security and risk, including the
development of or choice between different cultures, structures, and
processes (Chapter 8), establishing tolerability and sensitivity levels
(Chapter 9), controlling and strategically responding to risks (Chapter 10),
and recording, communicating, reviewing, assuring, and auditing risk and
security (Chapter 11).
The third and final part introduces ways to improve security in the main
domains: operational and logistical security (Chapter 12); physical (site)
security (Chapter 13); information, communications, and cyber security
(Chapter 14); transport security (Chapter 15); and personal security (Chapter
16).
CHAPTER 2
Security
This section concerns security. The subsections below first define security,
explain how security is described at different levels, describe the main
academic and official domains of security (including national security,
homeland security, international security, and human security), and describe
how to assess security.
Defining Security
Security is the absence of risks. Thus, security can be conceptualized as the
inverse of risk and any of its sources or associated causes, including threats,
hazards, exposure, or vulnerability (all of which are defined in subsequent
chapters).
Security as a term is often used in combination or interchangeably with
safety, defense, protection, invulnerability, or capacity, but each is a separate
concept, even though each has implications for the other. For instance, safety
implies temporary sanctuary rather than real security, while defense implies
resistance but does not guarantee security.
Unofficial
United States
Canada
Britain
The UK MOD (November 2009, p. 6) uses the term security “to describe
the combination of human and national security.” “Defense and security
are linked, but different, concepts. Defense primarily refers to states and
alliances resisting physical attack by a third party. Defense is about the
survival of the state and is not a discretionary activity. Security is a
contested concept that can never be absolute. It is, therefore, to some
extent, discretionary. It implies freedom from threats to core values both
for individuals and groups. The decline in the incidence of inter-state war
and the emergence of transnational threats, especially in the developed
world, has resulted in greater political emphasis being placed on security
rather than defense. Moreover, security has gradually evolved from the
concepts of national and international security to the idea of human
security” (UK MOD, January 2010, p. 76).
Levels
Security has many levels: a useful scale would run through the personal,
local, provincial, national (sovereign government), trans-national (across
sovereign states without involving sovereign governments), international
(between sovereign governments), and supra-national (higher authority than
sovereign governments).
Within the state: Britain recognizes national, regional, and local levels; the
United States recognizes federal, state, city, and local levels of government;
and Canada recognizes federal, provincial, and local government.
An academic analysis of crime used three levels (state, institution, and
individual), although presumably we could add international crime (Kennedy
& Van Brunschot, 2009, p. 8).
Increasingly, the authors recognize that the levels and domains are so
interconnected or codependent that security managers need broad knowledge
and awareness:
Domains
Security crosses many domains. A student is most likely to study security in
disciplines like criminology and policing (in courses or fields entitled crime
and justice, transnational crime, public safety, and public security), health
and medicine (public health and health security), economics, political
economy, or development studies (economic security), political science and
international studies (national security, international security, peace and
conflict, war studies, and peace studies), and military or defense studies
(strategic studies, security studies, security management, defense
management, and military science). Some courses (counter-terrorism and
homeland security) are so truly interdisciplinary that they could be taught in
any of these disciplines.
Consequently, security is studied, at least implicitly, by a mix of
disciplines, fields, and subfields, some of them ambiguous or contested.
Many people fret about insecurity but have disciplinary biases or formative
experiences that constrain their study of security, while security crosses the
domains that academic disciplines and professional careers have tended to
separate. Van Brunschot and Kennedy note this crossover of domains:
National Security
United States
The United States has institutionalized national security more than any other
state, particularly in 1947 with the National Security Act, which established
a National Security Council and National Security Adviser to the executive.
For the DOD (2012b, p. 216), national security encompasses “both national
defense and foreign relations of the United States” and is “provided by a
military or defense advantage over any foreign nation or group of nations, a
favorable foreign relations position, or a defense posture capable of
successfully resisting hostile or destructive action from within or without,
overt or covert.”
Many internationalists and foreigners consider national security an
inaccurate and possibly xenophobic concept, especially given increasingly
international and transnational threats. In practice, most Americans use
national security and international security interchangeably or to describe
the same domains whenever politically convenient, while the newer term
homeland security has supplanted national security.
Canada
In 2003, the Canadian government established a Minister and a Department
of Public Safety and Emergency Preparedness (Public Safety Canada).
Public Safety Canada, as legislated in 2005, is not defined by national or
homeland security, but is responsible only for domestic civilian authorities:
the correctional service, parole board, firearms center, border services, the
federal police, and the security intelligence service.
In April 2004, the Canadian government released its first National
Security Policy (Securing an Open Society), which addressed six key
strategic areas of national security:
1. intelligence,
2. emergency planning and management,
3. public health emergencies,
4. transportation security,
5. border security, and
6. international security (Public Safety Canada, 2013).
Britain
In 2008, the British government published its first National Security
Strategy. In May 2010, a new political administration, on its first day,
established a National Security Council (a committee of the Cabinet) and
appointed a National Security Adviser. The Cabinet Office (2013) defines
the National Security Council as “a coordinating body, chaired by the Prime
Minister, to integrate the work of the foreign, defense, home, energy, and
international development departments, and all other arms of government
contributing to national security.” Unfortunately, the Cabinet Office does not
define national security. The UK MOD (2009, p. 6) defines national security
as “the traditional understanding of security as encompassing ‘the safety of a
state or organization and its protection from both external and internal
threats’.”
Homeland Security
United States
Justified mostly as a response to the international terrorist attacks of
September 11, 2001 (9/11), on September 20, 2001, President George W.
Bush announced that by Executive Order he would establish an Office of
Homeland Security within the White House. The actual order (13228) was
issued on October 8. The Office was established under the direction of Tom
Ridge, formerly Governor of Pennsylvania, with few staff and no budget for
distribution. The Homeland Security Act of November 25, 2002, established,
effective January 2003, the Department of Homeland Security (DHS), which
absorbed 22 prior agencies—the largest reorganization of U.S. government
since the establishment of the Department of Defense in 1949.
In the first executive order, in many executive justifications, and in popular
understanding, homeland security was equated with counter-terrorism, but
counter-terrorism was always a minor part of departmentalized homeland
security. Before 9/11, 46 federal agencies had some counter-terrorist
responsibilities, according to the Congressional Research Service at the time
(Perl, 2002, p. 9). The DHS absorbed few of them. Then, as now, U.S.
counter-terrorism is conducted mostly by the intelligence agencies, the
military services, the Federal Bureau of Investigation, and state and local
law enforcement agencies, all of which lie outside of the DHS, although they
coordinate. Most of the DHS’ subordinate departments and activities manage
border security, immigration, tariffs and customs, security within American
waters, the security of infrastructure, natural risks, and emergencies in
general.
The Executive Order of October 8, 2001, also created the Homeland
Security Council (HSC) “to develop and coordinate the implementation of a
comprehensive national strategy to secure the United States from terrorist
threats or attacks” (Gaines & Kappeler, 2012). The HSC was almost the same
as the NSC, so the change attracted criticism. “The creation of the HSC
essentially bifurcated the homeland security process: there were now two
agencies reporting to the President that had policy authority over national
security issues” (Gaines & Kappeler, 2012, p. 209). In May 2009, President
Barack Obama merged the staff of the NSC and HSC, although their separate
statutes remain.
On March 21, 2002, Bush signed an Executive Order (13260) that
established the President’s Homeland Security Advisory Council with a
membership of no more than 21 people, selected from the private sector,
academia, officials, and nongovernmental organizations. The Order also
established four Senior Advisory Committees for Homeland Security: State
and Local Officials; Academia and Policy Research; Private Sector; and
Emergency Services, Law Enforcement, and Public Health and Hospitals.
Bush sought stronger executive powers, partly in the name of homeland
security. On September 24, he announced the Uniting and Strengthening
America by Providing Appropriate Tools Required to Intercept and Obstruct
Terrorism Bill. This became the USA PATRIOT Act, which Congress
approved with little deliberation (normally a bill of such length and
consequence would be debated for years). The President signed it into law
on October 26. The Act was long and conflated many issues but primarily
increased the government’s surveillance and investigative powers in order to
“deter and punish terrorist acts in the United States and around the world”
(Public Law 107–56, 2001).
On October 29, Bush issued the first Homeland Security Presidential
Directive (HSPD). He would issue a total of 25 before leaving office in
January 2009; during his two terms, he issued 66 National Security
Presidential Directives, almost all of them after 9/11.
From 2001 to 2008, according to Google Ngram
(http://books.google.com/ngrams), use of the term homeland security rose
from relative obscurity to surpass use of national security.
Canada
Public Safety Canada is formally defined by public safety and emergency
preparedness (since 2003) and national security (since 2006) rather than
homeland security, but its responsibilities include the national agencies for
emergency management and border security, which in the United States fall
under DHS. Public Safety Canada is responsible for criminal justice and
intelligence, too, which in the United States fall outside of the DHS.
Britain
The British government has considered a department of homeland security
but continues to departmentalize home, foreign, intelligence, and military
policies separately. The Home Office is closest to a department of homeland
security; it is officially described as “the lead government department for
immigration and passports, drugs policy, counter-terrorism and policing”
(Cabinet Office, 2013).
International Security
Most American political scientists acknowledge a field called international
relations; some recognize a subfield called international security. The
American Political Science Association recognizes international security
and arms control as a “section.” However, for ethical and practical reasons,
the study of international security is not universally acknowledged. This is
why Richard Betts advocated a subfield of international relations called
international politico-military studies, which implies parity with other
subfields such as international political economy (Betts, 1997). Some
advocates of international security use it to encompass military, economic,
social, and environmental hazards (Buzan, 1991).
In the 1980s and 1990s, increased recognition of globalization and
transnationalism helped to drive attention toward international security, but
use of the term international security has declined steadily since its peak in
1987, despite a small hump from 1999 to 2001, while use of homeland
security has increased commensurately (http://books.google.com/ngrams).
Human Security
The United Nations and many governments and nongovernmental
organizations recognize human security (freedom from fear or want). In 1994,
the UN Development Program published its annual report (the Human
Development Report) with a reconceptualization of human security as freedom from
fear or want across seven domains:
1. Economic security;
2. Food security;
3. Health security;
4. Environmental security;
5. Personal security;
6. Community security;
7. Political security (human rights).
Assessing Security
In assessing security, we could measure security as the inverse of the risks
(Chapter 3), the hazards and threats (Chapter 4), or our exposure and
vulnerability (Chapter 5). The choice between these options will depend on
our available time and interest in a deeper assessment and the availability
and our confidence in the data.
Subjectively, we could ask stakeholders how secure they feel or ask
experts how secure something is. For instance, an important measure of the
effectiveness of the criminal justice system is to ask the public how secure
they feel. Ultimately, if the public does not feel more secure, the criminal
justice authorities are failing, either to reduce crime or to persuade the public
that security really has improved. Having said that, we must also realize
that some adjustments might need to be made for inaccurate cultural or social
sensitivities (see Chapter 9), such as anti-police biases.
If the effectiveness of criminal justice were to be measured as simply the
rate of crime, the authorities would be incentivized to report fewer crimes,
or categorize crimes as less serious than they really were, whatever the real
rate of crime. Moreover, crime rates can fall when the public adjusts to
crime, such as by refusing to leave home, which would reduce public
exposure to crime but would not be evidence for success in fighting crime.
Consequently, we should measure security by both crime rate and public
perceptions of security and not assume that the crime rate is the superior
measure just because events are more tangible than perceptions.
Capacity
This section defines capacity, explains the varying fungibility of different
capacities, how capacity is traded off against security, and how capacity
(and thence security) tends to be distributed inequitably.
Defining Capacity
Capacity is the potential to achieve something. Different capacities include
the potentials to, for instance, acquire capabilities or deliver performance.
When we identify capacity, we should also define what the capacity might be
converted into, because capacities might be useful for one thing but not
another. For instance, one organization’s capacity for self-defense is different
from another organization’s capacity for investigation of crime.
Unofficial
UN
Britain
SUMMARY

This chapter has

• defined security,
• explored the analytical levels of security,
• reviewed the academic fields and functional domains of security,
including national security, homeland security, international security,
and human security,
• introduced the general principles of assessing security,
• defined capacity,
• explained how some types of capacity are more fungible,
• explained how capacity and security are traded off, and
• described how capacity distributes inequitably, usually for socio-
political reasons.
QUESTIONS AND EXERCISES
CHAPTER 3
Risk
Defining a Risk
At its most basic, a risk is the potential returns from an event, where the
returns are any changes, effects, consequences, and so on, of the event (see
Chapter 7 for more on returns and events). As some potential event becomes
more likely, or as the returns of that event become more consequential, the
risk becomes higher. Treating risk as the resultant of these two vectors makes
it more challenging to assess but also more useful than either vector
considered alone.
Taken alone, either the likelihood or the return would be an unreliable
indicator of the risk. For instance, in Table 3.1 the most likely scenario is the
least risky (scenario 3); the scenario offering the highest return (scenario 1)
is not the riskiest; the riskiest scenario (scenario 2) has neither the highest
probability nor the highest return.
Risk is often conflated with other things, but is conceptually separate from
the event, the threat that causes the event, and any other cause or source. Such
conceptual distinctions help the analyst to clarify the risk, principally by
tracing its sources through the process to potential returns. You could be
certain about the threat or certain about what the returns would be if a threat
were to cause a certain event but uncertain about the probability of the threat
acting to cause that event. Similarly, I could be certain that a terrorist group
means to harm my organization, but I would remain uncertain about the
outcomes as long as I am uncertain about how the terrorist group would
behave, how my defenders would behave, how effective the group’s
offensive capabilities would be, how effective my defensive capabilities are,
and so on.
Another common semantic error is the use of the phrase “the risk of” to
mean “the chance of” something, but the chance of something is not itself the risk.
To speak of the risk of death does not make much sense except as the chance
of death. When we speak of the risk of something, literally we mean the
potential returns associated with that thing. For instance, literally the risk of
terrorism is the potential returns from terrorism, not the chance of any
particular terrorist threat or event.
Table 3.1 The Likelihood, Return, and Expected Return of Three Notional
Scenarios
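As a minimal sketch in Python (the book prescribes no software, and these values are hypothetical, not the book's), the following numbers satisfy the relationships that Table 3.1 illustrates:

    # Expected return = likelihood x return (hypothetical values).
    scenarios = {
        1: (0.2, 100),  # highest return, but not the riskiest
        2: (0.5, 60),   # the riskiest: highest expected return (30)
        3: (0.9, 10),   # the most likely, and the least risky (9)
    }
    for number, (likelihood, ret) in scenarios.items():
        print(f"Scenario {number}: expected return = {likelihood * ret:.0f}")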
Unofficial
United Nations
Risk is the “expected losses (of lives, persons injured, property damaged,
and economic activity disrupted) due to a particular hazard for a given
area and reference period. Based on mathematical calculations, risk is the
product of hazard and vulnerability” (UN DHA, 1992). Risk is “the
combination of the probability of an event and its negative consequences”
(UN ISDR, 2009, p. 11).
Britain
Canada
United States
Before the institutionalization of homeland security, the Federal
Emergency Management Agency (FEMA) was the U.S. Government’s
effective authority on risk: “Risk means the potential losses associated
with a hazard, defined in terms of expected probability and frequency,
exposure, and consequences” (FEMA, 1997, p. xxv). The Department of
Homeland Security, which took charge of FEMA in January 2003, defines
risk as “the potential for an unwanted outcome resulting from an incident,
event, or occurrence, as determined by its likelihood and the associated
consequence” (DHS, 2009, p. 111), although in the context of cyber risks
a subordinate authority defined risk as “the combination of threat,
vulnerability, and mission impact” (Cappelli, Moore, Trzeciak, &
Shimeall, 2009, p. 32).
The Government Accountability Office (December 2005, p. 110)
defines risk as “an event that has a potentially negative impact and the
possibility that such an event will occur and adversely affect an entity’s
assets, activities, and operations.”
The Department of Defense (2010, p. 269) defines risk as the
“probability and severity of loss linked to hazards.”
Describing a Risk
A good description of risk helps analysis, recording, shared understanding,
and communication. A good qualitative description of a particular risk
should include the following:
This is a notional example of a good description: “Within the next year and
the capital city, terrorists likely would attack a public building, causing
damage whose costs of repair would range from $1 million to $2 million.”
Categorizing Risks
Risks are routinely categorized by type. Categories help to communicate the
scope of the risk, to assign the responsible authority to handle the risk, to
understand the causes of the risk, and to suggest strategies for controlling the
risk.
The subsections below describe categorizations as negative and positive
risks, pure and speculative risks, standard and nonstandard risks,
organizational categories, external levels, and higher functional categories.
Pedagogy Box 3.3 Prescriptions for Risk
Categories
The reader might think that no responsible official could forget positive
risks, but consider that the United Nations International Strategy for
Disaster Reduction (2009, p. 11), the Humanitarian Practice Network
(2010, p. 28), and the U.S. Government’s highest civilian and military
authorities on security and risk each define risk as negative (GAO, 2005,
p. 110; DOD, December 2012, p. 267). In 2009, the ISO added “potential
opportunities” after admitting that its previous guides had addressed only
negative risks (p. vii). The word opportunity has been used by others to
mean anything from positive hazard to positive risk, while the word
threat has been used to mean everything from negative hazard to negative
risk. For instance, the British Treasury and National Accounting Office
(NAO, 2000, p. 1) have defined risk in a way that “includes risk as an
opportunity as well as a threat” and the MOD (2010, p. 6) has defined
“risk and benefit” together.
Pure and Speculative Risks
Another binary categorization of risk with implications for our analysis of the
causes and strategies is to distinguish between pure (or absolute) and
speculative risks. Pure risks are always negative (they offer no benefits) and
often unavoidable, such as natural risks and terrorism. Speculative risks
include both positive and negative risks and are voluntary or avoidable, such
as financial investments. This distinction is useful strategically because the
dominant responses to pure risks are to avoid them or to insure against them,
while speculative risks should be either pursued if positive or avoided if
negative. (Strategic responses are discussed more in Chapter 10.)
Organizational Categories
Organizations conventionally categorize risks by the organizational level that
is subject to them or should be responsible for them. Although many different
types of organizations can be identified, they generally recognize at least
three levels, even though they use terms differently and sometimes
incompatibly (see Table 3.3). Thus, all stakeholders should declare their
understanding of categories, if not agree upon a common set.
We could differentiate risks within an organization by level (as in Table
3.3) or by the assets or systems affected by those risks. Table 3.4 shows how
different authorities have categorized these risks; I have attempted to align
similar categories.
These formulae are justifiable wherever the event or returns are
predictable given coincidence between a particular hazard and one or all of
vulnerability, exposure, or incapacity. None of hazard, vulnerability, or
incapacity necessarily includes probability, although uncertainty may be
implicit in the assessment of a particular hazard or vulnerability (a higher
rating of the hazard suggests more likely coincidence with the vulnerability; a
higher rating of the vulnerability suggests a more likely failure of defenses).
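The formulae at issue are simple composite products. As a hedged reconstruction (the representative forms below follow the UN DHA definition quoted earlier in this chapter, in which risk is the product of hazard and vulnerability):

Risk = Hazard × Vulnerability

with variants that multiply in exposure or incapacity, such as:

Risk = Hazard × Vulnerability × Exposure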
Expected Return
Risk, in its simplest mathematical form, is the product of probability and
return. If only one return were possible, this formula would be the same as
the formula for the expected return. When we have many possible returns,
the expected return is the sum of the products of each return and its
associated probability. In statistical language, the expected return is a
calculation of the relative balance of best and worst outcomes, weighted by
their chances of occurring (or the weighted average most likely outcome).
The mathematical formula is:

ER = (P1 × R1) + (P2 × R2) + . . . + (PN × RN)

where:
ER = expected return
N = total number of outcomes
Pi = probability of individual outcome i
Ri = return from individual outcome i
For instance, if we estimate only two possible returns (either a gain of $20
million with a probability of 80% or a loss of $30 million with a probability
of 20%) the expected return is 80% of $20 million less 20% of $30 million,
or $16 million less $6 million, or $10 million.
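As a minimal sketch in Python (restating the example above; the book prescribes no software):

    # Expected return for the two-outcome example above.
    outcomes = [
        (0.80, 20_000_000),   # 80% probability of a $20 million gain
        (0.20, -30_000_000),  # 20% probability of a $30 million loss
    ]
    expected_return = sum(p * r for p, r in outcomes)
    print(f"Expected return: ${expected_return:,.0f}")  # $10,000,000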
Note that the expected return is not necessarily a possible return. In the
case above, the expected return ($10 million) is not the same as either of the
possible returns (+$20 million or –$30 million). The expected return is still
useful, even when it is an impossible return, because it expresses as one
number a weighted average of the possible returns. This bears remembering
and communicating, because consumers could mistakenly assume that you are
forecasting the expected return as a possible return or even the predicted
return.
The expected return does not tell us the range of returns. The expected
return might be a very large positive value that we desire, but the range of
returns might include very large potential negative returns. Imagine that we
expect a higher profit from option A than option B, but the range of returns for
option A extends to possible huge losses, while the range of returns for
option B includes no losses. Risk averse audiences would prefer option B,
but the audience would be ignorant of option B’s advantages if it received
only the expected return. Hence, we should always report the expected return
and the range of returns together.
Having said that, we need to understand that best or worst outcomes may
be very unlikely, so the range of returns can be misleading too. Ideally, we
should report the probabilities of the worst and best outcomes, so that the
audience can appreciate whether the probabilities of the worst or best
outcomes are really sufficient to worry about. We could even present as a
graph the entire distribution of returns by their probabilities. Much of this
ideal is captured by risk efficiency, as shown below.
Program Evaluation and Review Technique
(PERT) Expected Return
If we identify many potential outcomes, then a calculation of the expected
return might seem too burdensome, at least without a lot of data entry and a
statistical software program. In that case, we could choose a similar but
simpler calculation prescribed by the Program Evaluation and Review
Technique (PERT), a project management technique originating from the U.S.
Navy. In this formula, we include only the worst, best, and most likely
outcomes, and we weight the most likely outcome by a factor of 4:

ER = (O + 4M + P) ÷ 6

where:
ER = expected return
O = the most optimistic return
M = the most likely return
P = the most pessimistic return
The main problems with the PERT expected return are that the calculation
excludes all possible returns except the worst, best, and most likely, and
that it ignores the actual probabilities of each outcome.
The PERT formula may be preferable to the typical formula of expected
return if consumers want to acknowledge or even overstate the most extreme
possible outcomes.
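As a minimal sketch in Python (with hypothetical returns, not from the book):

    # PERT expected return: weight the most likely outcome by a factor of 4.
    def pert_expected_return(optimistic, most_likely, pessimistic):
        return (optimistic + 4 * most_likely + pessimistic) / 6

    # Hypothetical returns in $ millions:
    print(pert_expected_return(optimistic=20.0, most_likely=5.0, pessimistic=-30.0))
    # 1.666... (about $1.67 million)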
Range of Contingencies
Many planners are interested in estimating the range of potential
contingencies, where a contingency (also a scenario) is some potential event.
Normally, planners are interested in describing each contingency with an
actor, action, object, returns, space, and time. Such contingencies are not
necessarily statistically described (they could be qualitatively described) but
at least imply estimates of potential returns and can be used to calculate risk
statistically.
Range of Returns
The range of returns is the maximum and minimum returns (or the best and
worst returns). Sometimes the difference between them is expressed, too, but
the difference is not the same as the range. For instance, the range of returns
from a project might be assessed from a profit of $2 million to a loss of $1
million—a difference of $3 million.
The range of returns is useful for decision makers who want to know the
best and worst possible returns before they accept a risk and is useful for
planners who must plan for the outcomes.
The difference (between the maximum and minimum or best and worst
outcomes) is often used as an indicator of uncertainty, where a narrower
difference is easier for planning. The difference is used as an indicator of
exposure, too (in the financial sense, exposure to the range of returns). The
statistical variance and standard deviation of all estimated returns could be
used as additional indicators.
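As a minimal sketch in Python (with hypothetical estimated returns, not from the book):

    # Indicators derived from a set of estimated returns, in $ millions.
    import statistics

    returns = [-1.0, 0.5, 1.0, 2.0]
    best, worst = max(returns), min(returns)
    difference = best - worst                # often used as an indicator of uncertainty
    variance = statistics.variance(returns)  # additional dispersion indicators
    std_dev = statistics.stdev(returns)
    print(f"Range: {worst} to {best} (difference {difference})")
    print(f"Variance: {variance:.2f}; standard deviation: {std_dev:.2f}")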
However, uncertainty is not measured directly by either the range of
returns or the difference; also, the maximum and minimum returns may be
very unlikely. Thus, the maximum and minimum returns should be reported
together with the probabilities of each, along with the most likely
return. Indeed, PERT advocates reporting the most likely return as well
as the worst (or most pessimistic) return and the best (or most optimistic)
return.
Risk Efficiency
Some people have criticized the expected return for oversimplifying risk
assessment and potentially misleading decision makers: “The common
definition of risk as probability multiplied by impact precludes
consideration of risk efficiency altogether, because it means risk and
expected value are formally defined as equivalent” (Chapman & Ward,
2003). These critics prescribed measurement of the “adverse variability
relative to expected outcomes, assessed for each performance attribute using
comparative cumulative probability distributions when measurement is
appropriate” (Chapman & Ward, 2003, p. 48).
This sounds like a complicated prescription, but the two criteria for risk
efficiency are simple enough: the expected return should be preferable
(either a smaller negative return or a larger positive return); and the range of
returns should be narrower (sometimes we settle for a smaller maximum
negative return or a larger minimum positive return).
By these criteria, an option would be considered preferable if its
maximum negative return is lowest, the range of returns is narrowest, and the
expected return is more positive. For instance, imagine that our first option
offers a range of returns from a loss of $1 million to a gain of $1 million with
an expected return of $0.5 million, while the second option offers a range of
returns from a loss of $2 million to a gain of $20 million with an expected
return of $0.25 million. The first option is more risk efficient, even though
the second option offers a higher maximum positive return.
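As a minimal sketch in Python (restating the example above; values in $ millions):

    # Comparing two options on the two risk-efficiency criteria.
    options = {
        "first":  {"worst": -1.0, "best": 1.0,  "expected": 0.50},
        "second": {"worst": -2.0, "best": 20.0, "expected": 0.25},
    }
    for name, o in options.items():
        spread = o["best"] - o["worst"]
        print(f"{name}: expected {o['expected']}, worst {o['worst']}, range {spread}")
    # The first option has the higher expected return, the smaller maximum
    # negative return, and the narrower range, so it is more risk efficient.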
Analyzing and Assessing Risks
This section discusses how you can analyze and assess risks. The subsections
below discuss the importance of risk analysis and assessment, distinguish
risk analysis from risk assessment, describe risk analysis, describe risk
assessment, and introduce the different available external sources of risk
assessments.
Importance
Risk analysis and assessment are important because, if we identify the
various things that contribute to the risks, then we can control each of these
things and raise our security. As one author has advised businesses in
response to terrorism, “risk assessment and risk analysis are not optional
luxuries” (Suder, 2004, p. 223). Another author has advised project
managers to be intellectually aggressive toward the analysis of security and
risk.
This prescription is more literal and simpler than most other prescriptions
(see below).
Risk Analysis
Risk analysis helps to identify and understand the risks ahead of risk
assessment (appreciating and calculating the risk), which in turn is a
practical step toward choosing which negative risks should be controlled and
which positive risks should be pursued and how to communicate our risk
management.
Analyzing the risk involves identifying the risk and disaggregating the risk
from its source to its potential returns. (Diagrammatic help for analyzing risk
is shown in Figure 4.3.) A proper analysis allows us to assess the likelihood
of each part of the chain; if we had not assessed the risks, we could hardly
imagine either controlling all the actual risks or efficiently choosing the most
urgent risks to control. Poor analysis and assessment of risk leads to
mismanagement of risks by, for instance, justifying the allocation of resources
to controls on misidentified sources of risk or on minor risks. Better analysis
of risk would not prevent political perversities, but would counter poor
analysis. Consider current counter-terrorism strategy, which involves
terminating the causes of terrorism. Tracing the causes means careful analysis
of the risk through its precursors to its sources. If the analysis is poor,
the government could end up terminating something that is not actually a source
or cause of terrorism.
Unfortunately, most authorities do not use the term risk analysis literally.
Almost all refer to risk assessment but few refer to risk analysis; their
references to risk analysis tend to mean risk assessment, while they use
risk identification to mean literal risk analysis. In some standards, most
importantly the Australian/New Zealand and ISO standard (2009, p. 6)
and its many partial adherents (such as Public Safety Canada, 2012, and
AIRMIC, ALARM, and IRM, 2002), risk analysis is an explicit step in
the recommended process for managing risks. The Australian/New
Zealand standard, the ISO, and the Canadian government each promise
that risk analysis “provides the basis for risk evaluation and decisions
about risk treatment.” They define risk analysis as the “process to
comprehend the nature of risk and to determine the level of risk,” but this
is risk assessment. Their risk identification (“the process of finding,
recognizing, and recording risks”) sounds more like risk analysis. The
Canadian government refers to hazard identification as “identifying,
characterizing, and validating hazards,” which again sounds like analysis,
and describes “identification” as one part of “assessment” (Public Safety
Canada, 2012, p. 49). Some project managers refer to risk identification
when they clearly mean risk analysis—they properly schedule it before
risk assessment, but add risk analysis as a third step, ranking risks by our
“concern” about them (Heerkens, 2002, p. 143).
Similarly, both the British Treasury and the Ministry of Defense have
defined risk analysis as “the process by which risks are measured and
prioritized,” but this is another definition that sounds more like risk
assessment. The British Civil Contingencies Secretariat ignores risk
analysis and provides a more operational definition of assessment, where
analysis is probably captured under “identifying” risks.
Risk Assessment
According to the semantic analysis above, all a risk assessment needs to do,
after a risk analysis, is
United Nations
United States
Canadian
The Canadian government accepts the ISO’s definition and process of risk
management, but from October 2009 to October 2011 the Canadian
government, with advice from the U.S., British, and Dutch governments,
developed a federal method (the All Hazards Risk
Assessment process; AHRA) based on the first steps of the ISO process:
(The published AHRA actually described the first 5 steps of the ISO’s
7-step risk management process, but the fifth is the treatment or control of
the risks and is clearly not part of risk assessment.)
British
The glossaries issued by the Treasury and MOD each defined risk
assessment as “the overall process of risk analysis and risk evaluation,”
where risk analysis is “the process by which risks are measured and
prioritized” and risk evaluation is “the process used to determine risk
management priorities by comparing the level of risk against
predetermined standards, target risk levels[,] or other criteria.” Each
department subsequently developed risk assessment as, respectively, “the
evaluation of risk with regard to the impact if the risk is realized and the
likelihood of the risk being realized” (Treasury, 2004, p. 49) or the
“overall process of identifying, analyzing[,] and evaluating risks to the
organization. The assessment should also look at ways of reducing risks
and their potential impacts” (MOD, November 2011).
The Civil Contingencies Secretariat defined risk assessment as “a
structured and auditable process of identifying potentially significant
events, assessing their likelihood and impacts, and then combining these
to provide an overall assessment of risk, as a basis for further decisions
and action” (Cabinet Office, February 2013).
The second step of the IRGC’s 5-step process of managing risk is risk
appraisal, which starts with “a scientific risk assessment—a
conventional assessment of the risk’s factual, physical, and measurable
characteristics including the probability of it happening.” The main
questions that the assessor should ask are below:
Unofficial
Some British police forces adopted the same model, particularly after
the HSE prosecuted the Metropolitan Police for breaches of health and
safety. However, police increasingly complained that Dynamic Risk
Assessment was impractical, while some blamed health and safety rules
for their reluctance to take risks in order to protect the public (such as
when community police officers watched a civilian drown while they
awaited rescue equipment). The HSE (2005) subsequently “recognized
that the nature of policing necessitates police officers to respond to the
demands of unpredictable and rapidly changing situations and reliance
solely on systematic risk assessment and set procedures is unrealistic.”
Structured Judgments
We can structure the survey in more functional ways, as described in
subsections below: Delphi survey; ordinal ranking; and plots of likelihood
and returns.
Delphi Survey
The survey itself can be structured in more reliable ways. The least reliable
surveys are informal discussions, particularly those between a small number
of people under the leadership of one person, such as those commonly known
as focus groups. Informal discussions tend to be misled by those most
powerful in the perceptions of group members and by mutually reactionary,
extreme positions.
Delphi surveys encourage respondents away from narrow subjectivity by
asking them to reforecast a few times, each time after a revelation of the
previous round’s forecasts (traditionally only the median and interquartile
range are revealed, thereby ignoring outliers). Interpersonal influences are
eliminated by keeping each respondent anonymous to any other. This method
helps respondents to consider wider empirical knowledge while discounting
extreme judgments and to converge on a more realistic forecast. It has been
criticized for being nontheoretical and tending toward an artificial consensus,
so my own Delphi surveys have allowed respondents to submit a written
justification with their forecasts that would be released to all respondents
before the next forecast.
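As a minimal sketch in Python (with hypothetical forecasts, not from the book):

    # Aggregating one Delphi round: reveal only the median and interquartile
    # range to respondents, thereby ignoring outliers.
    import statistics

    forecasts = [3.0, 4.0, 4.5, 5.0, 9.0]  # anonymous forecasts from one round
    q1, median, q3 = statistics.quantiles(forecasts, n=4)
    print(f"Median: {median}; interquartile range: {q1} to {q3}")
    # Respondents see only these summaries (and, in the author's variant, any
    # written justifications) before reforecasting in the next round.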
Ordinal Ranking
The respondent’s task can be made easier by asking the respondent to rank
the risk on an ordinal scale, rather than to assess the risk abstractly. The
Canadian government refers to risk prioritization as “the ranking of risks in
terms of their combined likelihood and impact estimates” (Public Safety
Canada, 2012, p. 84). Essentially a risk ranking is a judgment of one risk’s
scale relative to another. Fewer ranks or levels (points on an ordinal scale)
are easier for the respondent to understand and to design with mutually
exclusive coding rules for each level. Three-point or 5-point scales are
typical because they have clear middle, top, and bottom levels. More levels
would give a false sense of increased granularity as the boundaries between
levels become fuzzy.
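As a minimal sketch in Python (a hypothetical 5-point scale, not from the book):

    # A 5-point ordinal likelihood scale with mutually exclusive coding rules.
    LIKELIHOOD_SCALE = {
        1: "Rare: not expected within 10 years",
        2: "Unlikely: expected within 10 years but not 5",
        3: "Possible: expected within 5 years but not 2",
        4: "Likely: expected within 2 years but not 1",
        5: "Almost certain: expected within 1 year",
    }

    def describe_level(level):
        """Return the coding rule for an ordinal likelihood level (1-5)."""
        return LIKELIHOOD_SCALE[level]

    print(describe_level(4))  # Likely: expected within 2 years but not 1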
Systematic Forecasts
In the 1990s, governments and supranational institutions and thence think
tanks took more interest in producing their own forecasts of future trends and
events. Initially, they consulted internal staff or external “futurists” and others
whose opinions tended to be highly parochial. In search of more control,
some have systematized forecasts that they might release publicly. These
forecasts are based largely on expert judgments, but are distinguished by
some attempt to combine theoretical or empirical review, however
imperfectly.
Figure 3.3 A Risk Plot: Different Scenarios Plotted by Impact and
Likelihood
For instance, since 1997 the U.S. National Intelligence Council has
published occasional reports (around every four years) on global trends with
long horizons (inconsistent horizons of 13 to 18 years). In 1998, the British
government’s Strategic Defense Review recommended similar forecasts, so
the Ministry of Defense established what is now the Development, Concepts
and Doctrine Center, which since 2001 has published occasional reports on
global strategic trends with a time horizon of 30 years (for the MOD,
January 2010, p. 6, a trend is “a discernible pattern of change”). Annually
since 2004, the British executive has produced a National Risk Assessment
with a time horizon of five years—the published versions (since 2008) are
known as “National Risk Registers of Civil Emergencies.” In 2010, it
produced a National Security Risk Assessment by asking experts to identify
risks with time horizons of 5 and 20 years—this remains classified, but is
summarized in the National Security Strategy (Cabinet Office, 2010, p. 29,
37; Cabinet Office, July 2013, pp. 2–4). Since 2011, the Canadian
government has prescribed annual forecasts of “plausible” risks within the
next five years (short-term) and 5 to 25 years in the future (emerging) (Public
Safety Canada and Defense Research and Development Canada, February
2013, p. 11). Since the start of 2006, the World Economic Forum has
published annual forecasts of global risks (not just economic risks) with a
horizon of 10 years.
Some think tanks are involved in official forecasts as contributors or
respondents or produce independent forecasts with mostly one-year horizons.
For instance, since 2008 the Strategic Foresight Initiative at the Atlantic
Council has been working with the U.S. National Intelligence Council on
global trends. Since March 2009, around every two months, the Center for
Preventive Action at the Council on Foreign Relations has organized
discussions on plausible short- to medium-term contingencies that could
seriously threaten U.S. interests; since December 2011, annually, it has
published forecasts with a time horizon through the following calendar year.
Toward the end of 2012, the Carnegie Endowment for International Peace
published its estimates of the ten greatest international “challenges and
opportunities for the [U.S.] President in 2013.”
Official estimates are not necessarily useful outside of government:
officials prefer longer term planning that is beyond the needs of most private
actors; they also use intelligence that falls short of evidence; the typical
published forecast is based on mostly informal discussions with experts.
Some experts and forecasts refer to frequency or trend analysis or theory, but
too many do not justify their judgments. Both the U.S. and British
governments admit to consulting officials, journalists, academics,
commentators, and business persons, but otherwise have not described their
processes for selecting experts or surveying them. The Canadian government
has been more transparent:
Generally, the further into the future forecasts go, the more data
deprived we are. To compensate for the lack of data, foresight
practitioners and/or futurists resort to looking at trends, indicators etc.
and use various techniques: Technology Mapping; Technology Road-
Mapping; Expert Technical Panels, etc. These are alternate techniques
that attempt to compensate for the uncertainty of the future and most
often alternate futures will be explored. Federal risk experts can get
emerging and future insights and trend indicators through community of
practice networks such as the Policy Horizons Canada
(http://www.horizons.gc.ca) environmental scanning practice group.
(Public Safety Canada and Defense Research and Development Canada,
February 2013, p. 11)
SUMMARY

This chapter has

• defined risk,
• explained how qualitatively to describe a risk more precisely and
usefully,
• shown you different ways to categorize risks, including by
QUESTIONS AND EXERCISES
CHAPTER 4

This chapter explains what is meant by the sources of risk, clarifies the
differences between a source and its causes, defines hazards in general,
and describes how threats differ from hazards. The following section
describes how hazards are activated, controlled, and assessed. Sections then
explain how to assess hazards and threats, the sources of positive risks
(primarily contributors), and categorize hazards, threats, and contributors.
Causes
A cause is the reason for some change. The words source and cause are
often conflated, but they are separate concepts. The source of the risk is the
hazard or threat: The cause of the threat is whatever activated the hazard into
the threat. For instance, the river is a source for a flood. The threat (flood) is
activated by unusual causes (such as unusual rainfall, high tides, saturated
ground, poor drainage, a broken levee, etc.). The causes of the threat are
separate from the normal sources of the river (such as rainfall, springs,
tributaries, runoff) that do not normally cause a flood.
A good analyst pursues root causes. If we were to blame the flood on a
broken levee, we should look for the causes of the broken levee (such as
poor maintenance of the levee), which itself has causes (poor equipment,
maintainers, leaders, or managers), and so forth until we reach the root
causes.
Separation of the sources and causes is useful strategically because the
causes could be terminated—often the most efficient way to terminate a risk.
The advantages of such an analysis are obvious in the pursuit of health
security, where the risks have multiple sources and causes.
Figure 4.2 Analysis of the Root Causes of Crime in Haiti and the
Solutions to Those Root Causes
Defining Hazard
A hazard is a potential, dormant, absent, or contained threat. Hazard and
threat are different states of the same thing: The hazard is in a harmless state,
the threat in a harmful state. The hazard would do no harm unless it were to
change into a threat. For instance, a river is a hazard as long as it does not
threaten us, but the river becomes a threat to us when it floods our property,
we fall into it, or we drop property into it. As long as we or our property are
not in the water, the river remains a hazard to us and not a threat. Hazards
become threats when we are coincident with, enable, or activate the harmful
state.
The hazard is the source of the event and of the associated risk, but it is not
the same as an event or a risk. The risk is the potential returns if the hazard
were to be activated as a threat. For instance, the risks associated with the
flood include potential drownings, potential water damage, and potential
waterborne diseases—these are all things separate from the threat (the flood).
A good analysis of the sources of a risk would be represented
schematically as in Figure 4.3.
UN
United States
British
Canadian
Threat
This section defines threat and introduces ways to assess threats.
Defining Threat
A threat is an actor or agent in a harmful state. A threat is any actor or agent
whose capacity to harm you is not currently avoided, contained, inactive, or
deactivated.
Many risk managers and risk management standards effectively treat
hazards and threats as the same or use the words hazard and threat
interchangeably. Some analysts simply use one or the other word, whichever
they favor, whenever they identify a source of risk. Some of this favoritism is
disciplinary: Natural risks tend to be traced to hazards but not threats, while
human risks tend to be traced to threats but not hazards. Effectively, such
analysis ignores the transition between hazard and threat. The hazard is a
harmless state with potential to change into a threat; the threat is the same
source, except in a harmful state. A threat could be changed into a hazard if it
could be manipulated into a harmless state.
The routine conflation of hazard and threat is not satisfactory because it
prevents our full differentiation of the harmless and harmful states and
confuses our pursuit of the harmless state. Some authorities effectively admit
as much by creating a third state in between. For instance, the Canadian
government admits an emerging threat—“a credible hazard that has recently
been identified as posing an imminent threat to the safety and security of a
community or region” (Public Safety Canada, 2012, p. 35). Many
commentators within the defense and security communities routinely describe
“threats” as other countries or groups that are really hazards (because they
have not made up their minds to harm or acquired sufficient capabilities to
harm), so they use the term imminent threat or clear and present danger or
even enemy to mean a literal threat. For instance, U.S. DOD (2012a, p. 47)
refers to a specific threat as a “known or postulated aggressor activity
focused on targeting a particular asset.”
Uses and definitions of threat are even more problematic when they treat
threats and negative risks as the same (see Pedagogy Box 4.4), just as some
people treat opportunities and positive risks as the same.
Unofficial
“Threats are the negative—or ‘downside’—effects of risk. Threats are
specific events that drive your project in the direction of outcomes
viewed as unfavorable” (Heerkens, 2002, p. 143).
“A threat is a specific danger which can be precisely identified and
measured on the basis of the capabilities an enemy has to realize a hostile
intent” (Rasmussen, 2006, p. 1).
“Terrorist threats exist when a group or individual has both the
capability and intent to attack a target” (Greenberg, Chalk, Willis, Khilko,
& Ortiz, 2006, p. 143).
“We define threats, on the other hand, as warnings that something
unpleasant such as danger or harm may occur . . . Hazards are associated
with the present, possibly producing harm or danger right now. In
contrast, threats signal or foreshadow future harm or danger, or the
intention to cause harm or danger: harm has not yet been actualized and is
merely a possibility” (Van Brunschot & Kennedy, 2008, p. 5). “[W]e
identify the nature of the hazard under consideration and whether it
constitutes a threat (suggesting impending or potential harm) or an actual
danger today” (Kennedy & Van Brunschot, 2009, p. 8).
“A threat is anything that can cause harm or loss” (Humanitarian
Practice Network, 2010, p. 28).
United States
The U.S. GAO (2005c) defines threat as “an indication of the likelihood
that a specific type of attack will be initiated against a specific target or
class of targets. It may include any indication, circumstance, or event with
the potential to cause the loss of or damage to an asset. It can also be
defined as an adversary’s intention and capability to undertake actions
that would be detrimental to a valued asset . . . Threats may be present at
the global, national, or local level” (p. 110).
The U.S. FEMA (2005) defines threat as “any indication, circumstance,
or event with the potential to cause loss of, or damage to an asset. Within
the military services, the intelligence community, and law enforcement,
the term ‘threat’ is typically used to describe the design criteria for
terrorism or manmade disasters” (pp. 1–2).
The U.S. Department of Homeland Security (DHS) (2009) defines
threat as “a natural or manmade occurrence, individual, entity, or action
that has or indicates the potential to harm life, information, operations, the
environment, and/or property” (p. 111). The DHS assesses threat as the
probability that a specific type of attack will be initiated against a
particular target or class of target. The DHS estimates (categorically) the
threat (mostly international terrorist threats) to urban areas; the results
ultimately determine the relative threat by each state and urban area. To
get there, it surveys senior intelligence experts on the intelligence in four
categories (detainee interrogations; ongoing plot lines; credible open
source information; and relevant investigations), then tasks its own
analysts to judge the number of threat levels and the placement of target
areas within one of these levels (U.S. GAO, 2008a, pp. 20–22).
The DOD Dictionary (2012b) does not define threat, but its definitions
of threat assessment specify a threat as terrorist.
Researchers of nuclear and cyber security at Sandia National
Laboratories define threat as a “person or organization that intends to
cause harm” (Mateski et al., 2012, p. 7).
British
Canadian
A threat is “the presence of a hazard and an exposure pathway” (Public
Safety Canada, 2012, p. 94).
Activity Related
Activation strictly means that some activity has changed the hazard into a
threat. One useful analytical step is to list operational activities and to
consider whether they could activate any of the available hazards (the
resulting threats are best categorized as activity related). For instance, the
foreign coalition in Afghanistan has activated hostility in conservative local
groups by bringing education to girls. This was unavoidable given the
coalition’s objectives but still was subject to diplomacy and negotiation.
Coincident
Sometimes, it might be best to consider coincidence between target and
hazard as sufficient for the hazard to become a threat, particularly for natural,
inanimate, or passive hazards. For instance, any object on the ground could
be considered a hazard that could trip us. The victim or target would activate
the hazard by tripping over it; as long as the potential victim or target avoids
the hazard, it remains a hazard. A hurricane remains a hazard until it veers
toward our location.
Enabled
Sometimes activation is more akin to enablement. For instance, if an outside
actor supplies a malicious but unarmed organization with arms, then that
actor has enabled a threat.
Released
Sometimes a hazard becomes a threat only when its container is broken.
For instance, if acid is contained in a sealed bottle, it is a hazard until we
break the bottle. Similarly, if an imprisoned criminal is released or escapes
and commits another crime, the criminal has transitioned from hazard to
threat. A nation-state is sometimes described as contained when its neighbors
are ready to stop its aggression.
Increasingly, security and risk managers must think more proactively and
creatively about potential activations and their responses. For instance,
Theo van Gogh was shot to death by a fellow Dutch citizen, who claimed
vengeance for a short film (Submission) that criticized Islam’s posture
toward women. The actual threat was difficult to predict, given the
millions of peaceful Dutch. The target too was uncertain: van Gogh had
directed the film, but it had been written by a former Dutch legislator
(Ayaan Hirsi Ali), and dozens of others were involved with production.
The timing was unpredictable: The film was screened at different
festivals in 2004; the Dutch public broadcaster showed the film on August
29; the threat acted on November 2. Of these unknowns, the potential targets
were the most predictable and perhaps could have been protected with better
security, yet individuals must trade off security against freedom: van Gogh
was killed while pedaling his bicycle on a public street in Amsterdam.
Similarly, in 2010, a pastor (Terry Jones) at a small church (Christian
Dove World Outreach Center) in Gainesville, Florida, published a book
titled Islam Is of the Devil. In July, he declared the ninth anniversary of
the terrorist attacks of September 11, 2001, as “International Burn a
Koran Day” and promised to burn copies of the Quran on that day. On
September 9, he canceled his plan, under some official and clerical
pressure, but on September 10, protests against his plan grew violent,
particularly in Afghanistan and Pakistan, where clerics condemned the
plan. On March 20, 2011, Jones “prosecuted” and burnt a Quran. On
March 24, Afghan President Hamid Karzai condemned the act, but in the
process publicized it. On April 1, protesters in Afghanistan killed at least
30 people, including seven employees of the United Nations Assistance
Mission in Mazar-i-Sharif. On April 28, 2012, Jones and his followers
burnt more Qurans, although this activation was moderated by more
effective official condemnation and overshadowed by other Muslim
grievances. In fact, much political violence is activated only proximately
by such remote events, obscuring longer-term and less tangible
grievances. Terry Jones’ actions were outside the control of the victims,
but officials in the United States learnt better how to articulate their
disapproval of his actions and tolerance of his right to free speech.
Some activations are within the control of the personnel that would be
threatened. For instance, on February 22, 2012, U.S. military personnel
incinerated a collection of documents removed from the library at Parwan
Detention Facility in Afghanistan. After news leaked of some religious
texts within the collection, 5 days of protests ended with at least 30
killed, including four U.S. soldiers. This particular activation must have
seemed remote and unfair to the personnel who thought they were
stopping prisoners from using the documents for illicit communications,
but it was controllable. A later military investigation reported that Afghan
soldiers had warned against the action and that the U.S. soldiers lacked
cultural and religious awareness, since their cultural training amounted to
a one-hour presentation on Islam (Source: Craig Whitlock, “US Troops
Tried to Burn 500 Copies of Koran, Investigation Says,” Washington
Post, August 27, 2012).
Controlling Hazards
Analysis in terms of hazard, activation, and threat helps to clarify how the
hazard can be contained or avoided. A hazard is not forgettable: it
could become a threat, so the risk manager must often choose a strategy for
keeping the source in its hazardous state. Hazards remain hazards to us so
long as we avoid activating the hazard, avoid exposing ourselves to the
hazard, control enablers, or contain the hazard.
Avoidance of Activation
The hazard can be controlled by simply not activating it, which is usually
less burdensome than terminating the hazard. For instance, if an armed group
has no intent to harm us, our interests, or our allies, and the situation is
unlikely to change, then we would be unwise to pick a fight with it. We may
have other reasons to confront armed actors, but we would be wrong to
describe a hazard as a threat in order to justify confronting it. This
illustrates the perversity of some claims during the 2000s that intervening
against foreign hazards would reduce terrorist threats at home.
Avoidance of Exposure
We can avoid hazards by simply avoiding coincidence. Imagine a malicious
actor with the intent and capability to harm me (I am vulnerable) but located
in some remote location from where it cannot threaten me (I am not exposed).
The actor in that situation is a hazard but could become a threat by travelling
to me; alternatively, I could coincide with it by travelling to meet it. For
instance, criminals are hazards as long as we do not frequent the same area,
but if we visit their active area, we would expose ourselves to criminal
threats.
Controlling Enablers
Hazards sometimes are unable to harm us, but other actors could enable the
hazard by providing new intents or capabilities. We would be able to prevent
the threat by stopping the enablement. For instance, some states and
international institutions seek to prevent proliferation of weapons, because
those weapons could enable a malicious actor to threaten others.
Containment
Where we are uncertain of the hazards and threats, a simple starting step in
identifying hazards and threats is to list the agents or actors in some situation
and then ask whether their capacity to threaten is contained or not.
Hazards can be contained in physical ways. Familiar examples are acids
or pathogens contained in some sealed container. Similarly, human actors can
be described as contained if they are detained or deterred. Hazards can be
contained in behavioral ways too. For instance, we could, in theory,
comfortably work alongside an explosive device as long as we were
confident that we would not activate its detonator. Containment can be
extended with further precautions. For instance, an item that is suspected of
being explosive could be fenced off or placed in a special container
designed to withstand blast until the item could be dismantled.
The subsections below discuss how hazards and threats can be identified,
estimated on binary or continuous scales, and assessed by their likelihood of
activation.
The U.S. government produces many human threat assessments and natural
hazard assessments, but it does not define a generic hazard assessment.
Similarly, Public Safety Canada (2012, pp. 49, 95) does not define hazard
assessment but defines hazard identification (“the process of identifying,
characterizing, and validating hazards”) and threat assessment (“a
process consisting of the identification, analysis, and evaluation of
threats”). Even more narrowly, the British Civil Contingencies Secretariat
defines a hazard assessment as “a component of the civil protection risk
assessment process in which identified hazards are assessed for risk
treatment” and defines hazard identification as “a component of the civil
protection risk assessment process in which identified hazards are
identified” (U.K. Cabinet Office, 2013).
Opportunity
In general use, opportunity is defined as “a favorable juncture of
circumstances” (Merriam-Webster’s Dictionary), “a chance to do something
or an occasion when it is easy for you to do something” (Longman), “a
chance to do something, or a situation in which it is easy for you to do
something” (Macmillan), “a time or set of circumstances that makes it
possible to do something” (Oxford), or “an occasion or situation which
makes it possible to do something that you want to do or have to do, or the
possibility of doing something” (Cambridge). FrameNet defines opportunity
as “a situation not completely under the agent’s control and usually of a
limited duration” that gives the agent “a choice of whether or not to
participate in some desirable event.”
In any general sense, an opportunity is a positive situation or good fortune
(an antonym of threatening situation). Some risk analysts and managers use
the word in this sense, such as where the International Organization for
Standardization (2009a, p. vii) reminds us that “potential opportunities”
should be considered and where many authorities suggest “take the
opportunity” as a strategy.
However, some authorities define an opportunity as both a positive risk
and the source (akin to a positive hazard) of a positive risk. Such authorities
include managers of project and financial risks (Heerkens, 2002, p. 143;
Hillson, 2003; O’Reilly, 1998) and the British government’s Treasury, NAO,
MOD, and project management standard (PRINCE2) (NAO, 2000, p. 1; U.K.
MOD, 2011c, p. 7; U.K. Office of Government Commerce, 2009; U.K.
Treasury, 2004). This is an unhelpful conflation, similar to the many uses of
the word threat to mean both a negative hazard and a negative risk. Defining
a positive risk as an opportunity is unnecessary because the term positive
risk needs no synonym.
Positive Hazard
Hazard has no clear antonym: candidates such as a sympathetic neutral, absent
friend, well-wisher, hero, or angel are all imperfect. Positive hazard is the
clearest antonym of negative hazard, just as we differentiate positive and
negative risks.
(Opportunity is sometimes used as the source of a positive risk, but we
should not want to lose the unique meaning of opportunity as a favorable
situation.)
Although hazard has an overwhelmingly negative connotation, the same
source, depending on the situation, could become a threat or an ally, so
analytically we would be justified to consider a hazard with both negative
and positive potential. For instance, we might upset somebody enough that
they want to harm us, or we could persuade them to help. Similarly, at the
time of a foreign intervention, an armed militia might remain neutral (a
hazardous condition) or side with local insurgents against the foreigners (a
threatening condition), perhaps because the insurgents activate the hazard
with bribes or because the foreigners activate the hazard by some negative
interaction (as simple as accidental collateral harm to the militia). The
neutral militia’s potential as a contributor would be easy to forget. At the
time of intervention, the foreigners could have engaged positively enough that
the militia would have chosen to ally with the foreigners. The foreigners still
should have rejected the offer if such an alliance were to offer intolerable
negative risks, such as probable betrayal or defection after accepting arms,
as sometimes occurred after foreign coalitions in Iraq and Afghanistan had
allied with local militia.
Contributor
In my definition, a contributor is an actor or agent in a positive state—an
antonym of threat. Other antonyms of threat include allies, defenders, guards,
supporters, carers, helpers, donors, grantors, and aiders, but each has
situational rather than generalizable meanings.
The only available official antonym of threat is contributor. The Canadian
government refers to a contributor effectively as anything that contributes
positively to the management of emergencies. It refers to contributors as “all
levels of government, first receivers, healthcare and public health
professionals, hospitals, coroners, the intelligence community, specialized
resources, including scientific and urban search and rescue (USAR)
resources, the military, law-enforcement agencies, non-government
organizations, private sector contributors, and the academic community”
(Public Safety Canada, 2012, p. 19).
Partner
The Canadian government differentiates a contributor from a partner (“an
individual, group, or organization that might be affected by, or perceive itself
to be affected by, an emergency”) (Public Safety Canada, 2012, pp. 19, 71).
The word partner is often used in the sense of ally, but in risk management, a
partner could be sharing negative risks but not necessarily contributing
positive risks, so the word partner does not mean the same as contributor.
Stakeholder
Practically every other actor is a hazard or an opportunity. This insight helps
explain the common advice to consider stakeholders during practically any
activity related to security or risk management. The British government has
listed the following stakeholders in any business: own organization; donors
and sponsors; suppliers and customers; design authorities; neighboring
organizations; utility companies; insurers; emergency services; and local
authorities (U.K. Business Continuity Institute, National Counterterrorism
Security Office, & London First, 2003, p. 21).
A stakeholder such as a donor might seem like a certain contributor, but
remember that an actor’s state can change: A stakeholder could withdraw its
support and declare neutrality (thereby becoming a hazard) or criticize the
recipient’s handling of its receipts (thereby becoming a threat).
Natural
A natural hazard is any natural phenomenon of potential harm. For instance, a
flood is a natural hazard as long as it could harm us and has natural causes
(such as rainfall). Note that a flood could have unnatural causes too, such as
malicious sabotage of the levee. Some natural hazards can be activated by
human behavior. For instance, some humans have turned harmless biological
organisms into biological weapons; others have triggered earthquakes by
aggressively fracturing the earth in pursuit of natural resources. These
hazards are truly human-made but share other characteristics with natural
hazards.
Natural sources of risk (such as a river) traditionally have been described
as hazards and not threats, even when they are clearly causing harm and are
properly categorized as threats (such as when a river is flooding a town).
This reflects a linguistic bias toward natural hazards and human threats
rather than natural threats and human hazards, but it is analytically
misleading: natural and human sources each can be hazards or threats.
Proper analysis should recognize natural threats whenever we coincide
with a natural hazard in a harmful way. Clarification should emerge when we
ask whether we are exposed and vulnerable (see Chapter 5) to the source:
For instance, a person with a communicable disease (the hazard) would
become a threat if we coincided with the person and we lacked immunity to
the disease. Similarly, a drought would become a threat if we lacked
reserves to cover the shortfall in the harvest (Wisner, Blaikie, Cannon, &
Davis, 2004, pp. 49, 337).
One explanation for the favoritism toward the term natural hazard over
natural threat is that natural agents have no human-like conscious intent.
Like all hazards, natural hazards remain hazardous to us as long as we are
not vulnerable and exposed, even though the same agents would be threats to
anyone who is vulnerable and exposed. For instance, a flood or toxic
contamination is a threat to anyone in the affected area, even while it remains
a hazard to us until we travel into the affected area or it spreads into our
area.
I suggest the following categories of natural sources:
Material
Material hazards (hazardous materials; HAZMATs) include entirely natural
materials, such as fossil fuels, and human-made materials from natural
sources, such as distilled fuels, that are somehow potentially harmful, such as
by toxicity.
Inanimate material threats are easier than human or organized threats to
measure objectively, in part because material threats do not have any
conscious intent. For instance, biological and chemical threats (and hazards)
are assessed often by their “toxicity” and measured by some quantity,
perhaps as a concentration or rate in relation to some medium (such as
milliliters of hazard per liter of water) or to the human body (such as grams
of hazard per pound of body weight).
The activation of a threat from a hazardous material is usually human-
caused (for instance, humans drill for oil and sometimes spill oil), so we
should be careful to distinguish the human-caused threat (such as an oil spill)
from the material hazard (the oil).
Some hazardous materials overlap human-made hazards (described later),
such as explosive weapons (which are hazardous in themselves and contain
hazardous materials).
• Terrorists
• Extremists
• Individual criminals
• Organized criminals
• Corporate insider saboteur
• Corporate spy
• State-sponsored terrorism
• Espionage
• War (Public Safety Canada, 2013, p. 65).
The following sections explain more about assessing intent and capability
respectively.
Pedagogy Box 4.13 Official Assessments of
Intent and Capability
The U.S. DHS (2010) notes that “[a]dversary intent is one of two
elements, along with adversary capability, that is commonly considered
when estimating the likelihood of terrorist attacks.” Similarly, the
Canadian government prescribes an assessment of the likelihood of a
deliberate attack in terms of the actor’s “intent” and the attack’s
“technical feasibility” (a combination of the actor’s capability and the
target’s exposure and vulnerability) (Public Safety Canada, 2013, pp. 45–
46).
The Humanitarian Practice Network (2010, p. 40) prescribes
assessment of the threat’s frequency, intent, and capability:
“When assessing a potential threat from a human source (adversary),
determine whether they display the following three key characteristics:
Intent
The subsections below respectively define intent and describe alternative
ways to assess a threat’s intent.
Defining Intent
A good assessment of a human actor assesses the actor’s current intent and
the possibility of external or accidental activations of a threat from a hazard.
For instance, most persons mean no harm to most other people most of the
time, but they could be upset by some grievance or misled by some malicious
entrepreneur, sufficient to become malicious toward us.
In practice, most assessments of intent are judgmental, but still we should
prefer rigor and transparency over personal intuition or assumption, so we
should consider and declare our coding rules. At simplest, we could judge
whether the actor is sufficiently malicious, malignant, or misguided enough to
harm us (the code would be binary: yes or no). This is a simple coding rule
that could be declared to other stakeholders and be used to survey experts.
We could collect finer codes by asking for a judgment on a 5-point scale—
such as a whole number from 0 (no intent) to 4 (highest intent).
For instance, the U.S. Army provides a method for categorizing “enemies”
on two dimensions: hostility and legal recognition as enemies (see Figure
4.4).
Capability
The subsections below respectively define a threat’s capability and assess
that capability.
Defining Capability
Intent to harm is necessary but not sufficient for a threat: The threat must have
the capability to harm us. We could imagine another actor with malicious
intent but no useful weapons (a hazardous condition). If other actors supply
weapons or fail to monopolize weapons (as actually happened when a
multinational coalition conquered Iraq in 2003 without securing former
regime stocks) then the hazard could acquire capabilities sufficient to be
considered a threat.
Assessing Capability
Human-Caused or -Made
Some hazards are best categorized as human-caused or -made, even if they
contain or are derived from natural materials (for instance, biological
weapons). Some of these hazards (such as landmines) are manufactured by
humans deliberately, some are deliberately activated by humans (such as
floods caused by terrorist sabotage), some are accidental (such as unintended
pollution). Four clear categories (accidental, human activated, human
manufactured, and technological) are described in the subsections below.
Accidental
Accidents are unintended events. Deliberate human-made hazards and
accidents are usually conflated, but this is analytically imperfect because
accidental and deliberate activities have very different legal and practical
implications.
Human Activated
A human-activated threat has changed state from hazard to threat because of
some human activity; note that this outcome could have been unintended
(accidental). A human being could activate a threat by accidentally causing a
grievance; another human being could engage in some activity (such as
educating girls in a conservative part of Afghanistan) knowing that the
activity could activate a grievance—in each case, the outcome is the same,
but in one case, the outcome was accidental, while in the other, the outcome
was an accepted risk. Similarly, one person could accidentally release a
chemical threat from its container into a river, while another could knowingly
change that same chemical into a chemical weapon. (Man-caused is a
common adjective for hazards, but apart from being prejudicially gendered,
it is applied inconsistently to contradictory behaviors, from terrorism to
accidents, even though man-caused literally includes both deliberate and
accidental activities.)
Technological
Technological risks are risks arising from technological failure, where the
technology itself is the hazard (such as a part of an aircraft that falls onto a
population below) or is the activator of a hazard (such as a drilling machine
that activates an earthquake). In detail, a technology risk has been
subcategorized as
• spill,
• fire,
• explosion,
• structural collapse,
• system error yielding failure.
SUMMARY
QUESTIONS AND EXERCISES
Target
This section defines target and introduces some simple ways to identify
potential targets.
Defining Target
A target is the object of a risk or plan. The threat is the subject whose actions
cause harm to the object. In the semantic frame of risk, two types of object are
routine: a human victim (“the individual who stands to suffer if the harm
occurs”) or “a valued possession of the victim” (Fillmore & Atkins, 1992, p.
82).
Pedagogy Box 5.1 Official Definitions of
Target
Identifying Targets
Identifying targets is a key step in assessing the risks associated with certain
threats. Analytically, no threat should be identified except in relation to a
particular target. (The proper analysis of all these concepts as they contribute
to a risk is represented in Figure 4.3.) Targets can be identified by their
attractiveness to threats and by their risk factors or indicators.
For instance, one scheme identifies eight attributes of attractive targets,
summarized by the acronym EVIL DONE:
1. Exposed
2. Vital
3. Iconic
4. Legitimate
5. Destructible
6. Occupied
7. Near
8. Easy (Clarke & Newman, 2006).
Given the resources, we should research the particular threat’s intent and
capability toward particular targets. Consider the simple, notional scenario
below, which I will present as a series of logical steps in analysis:
Defining Vulnerability
Vulnerability essentially means that the target can be harmed by the threat.
For instance, I would be vulnerable to a threat armed with a weapon that
could harm me. The threat would cease if I were to acquire some armor that
perfectly protects me from the weapon or some other means of perfectly
countering the weapon.
Vulnerability is routinely conflated with exposure. In general use,
vulnerability is “the state of being exposed to or likely to suffer harm”
(FrameNet). The Humanitarian Practice Network (2010) defined
vulnerabilities as “factors that increase an organization’s exposure to threats,
or make severe outcomes more likely” (p. 42). However, vulnerability and
exposure are usefully differentiated (see the following section): essentially,
vulnerability means that we are undefended against the threat, while exposure
means we are subject to the threat.
Some definitions of vulnerability conflate it with risk. For instance, David
Alexander (2000, p. 13) defined vulnerability as the “potential for casualty,
destruction, damage, disruption or other form of loss in a particular element,”
but this sounds the same as risk. Similarly, the Humanitarian Practice
Network (2010, p. 28) defined vulnerability as “the likelihood or probability
of being confronted with a threat, and the consequences or impact if and
when that happens,” but this also sounds the same as risk.
UN
Vulnerability is the “degree of loss (from 0% to 100%) resulting from a
potentially damaging phenomenon” (United Nations Department of
Humanitarian Affairs [UN DHA], 1992) or “the characteristics and
circumstances of a community, system, or asset that make it susceptible to
the damaging effects of a hazard” (United Nations Office for International
Strategy for Disaster Reduction [UN ISDR], 2009, p. 12).
United States
Canadian
Assessing Vulnerability
This section introduces different schemes for assessing vulnerability by the
target’s defenses; the gap between the target’s defenses and a particular
threat; and the gap between the target’s defenses and standard defenses.
Defenses
As a correlate of invulnerability, we could measure the target’s defenses. For
instance, the Humanitarian Practice Network (2010) prescribes measuring
our “strengths” as “the flip-side of vulnerabilities” (p. 42).
We could measure our defensive inputs by scale or value. For instance, the
defensive inputs at a site could be measured by spending on defenses, total
concrete poured as fortifications, or total length of barriers at the site.
We should preferably measure the outputs (the actual defensive
capabilities and performance), although this is usually more difficult than
measuring inputs. The ideal measures of outputs are practical tests of
defensive function. For instance, official authorities often send agents to
attempt to pass mock weapons through baggage screening systems in airports
as tests of airport security.
“The gap between the current ability to provide a response and the actual
response assessed to be required for a given threat or hazard. Plans
should be made to reduce or eliminate this gap, if the risk justifies it”
(U.K. Cabinet Office, 2013).
The U.S. Department of State effectively measures the gap between the
security standards and the actual security of overseas missions. The
standards for overseas security are set by the Overseas Security Policy
Board, chaired by the Assistant Secretary for Diplomatic Security, with
representatives from U.S. government agencies that have a presence
overseas. Each year, the Bureau of Diplomatic Security creates the
Diplomatic Security Vulnerability List, which ranks sites according to
their vulnerability. The Security Environment Threat ratings are
determined by the Bureau of Diplomatic Security’s Threat Investigations
& Analysis Directorate (established May 2008) (GAO, 2009, pp. 5–7).
Exposure
This section defines exposure and suggests ways to assess exposure.
Table 5.2 Canada’s Official Coding Rules for the Technical Feasibility of
an Attack
SOURCE: All Hazards Risk Assessment Methodology Guidelines 2012–2013,
http://www.publicsafety.gc.ca/prg/em/emp/2013-ahra/index-eng.aspx, Public Safety Canada, 2013.
Reproduced with the permission of Public Works and Government Services Canada, 2013.
Defining Exposure
Exposure often is treated as synonymous with vulnerability (and even risk),
but exposure implies that we are subject to the threat, while vulnerability
implies our lack of defenses against the threat. We are subject to the threat if
we coincide with or are discovered by the threat in ways that allow the threat
to target us.
FrameNet defines the verb to expose as to “reveal the true, objectionable
nature of” something. In general use, risk is sometimes used as a verb to
imply our voluntary exposure to some threat (as in: “I risked discovery by the
enemy”). Risk is also used as a verb in front of some valued object
representing what the actor has to lose (as in: “I risked my savings on that
bet”) (Fillmore & Atkins, 1992, pp. 97, 100–101). This use of the verb risk
helps explain the dual meanings of exposure in the frame of risk: exposure to
the threat and what we have to lose. In military contexts, exposure implies
that we are under observation by a threat; but in financial contexts, exposure
implies the things that could be lost. Thus, unfortunately, in security and risk
analysis one word is used routinely with two different meanings, which I
will term threat exposure (the more military context) and loss exposure (the
more financial context).
Here I will focus on threat exposure (the target’s revelation to the potential
threat). If the hazard could not reach us or find us, we would not be exposed,
whatever our vulnerability. This is a profound insight, because if we were
confident that we are not exposed to a certain hazard, we would not need any
defenses against that hazard (although we would need to be ready to acquire
defenses if the hazard were about to discover or reach us). For instance, if
we were confident that a communicable disease had been quarantined
perfectly, we would not be justified in ordering mass immunization against
that disease. We would not be exposed, so we would not need defenses.
Essentially any control of access is an attempt to control the potential
target’s exposure. Access controls include any attempt to manage the entry of
actors or agents into some domain, for instance, by demanding identification
of permitted persons before entry into a building or a password before
access to some digital environment. As long as the access controls work
perfectly, everything inside the perimeter remains not exposed to external
threats, even though the perimeter would remain exposed. For instance, after
9/11, under new regulations, airline cockpit doors were supposed to be
locked during flight, stopping the pilot’s exposure to any threat in the
passenger cabin during flight. The pilot remains vulnerable to the sorts of
weapons (small knives, fists, and tear gas) used on 9/11, unless the pilot
were equipped with a firearm, body armor, and a gas mask (some cockpit
crew are certified to carry firearms), but the pilot is not exposed to any of
these things as long as he or she remains on the other side of the door.
Passengers in the passenger cabin would remain exposed to any threat in the
passenger cabin, but their vulnerability would be reduced if joined by an
armed Air Marshal prepared to defend them.
Assessing Exposure
For assessing exposure, we should measure exposure by area coincident with
a particular hazard or threat, exposure by time, or some combination.
Exposure by Area
Someone’s exposure to a threat could be defined by the space known to be
coincident with that threat. For instance, crime tends to concentrate in certain
areas (sometimes known as hot spots), perhaps because of local de-policing
or a self-segregated community or some path-dependent accident of history.
Someone’s exposure to crime in that area increases as he or she spends more
time travelling through, working in, or living in that same area. This repeat
exposure helps to explain repeat victimization. Someone who becomes a
victim of crime in a high-crime area and who does not move or change his or
her behavior is just as likely to be a victim in the future (and past
victimization may harm a person’s capacity to avoid future crime). One study
suggests that 10% of the crime map experiences 60% of crime, 10% of
offenders are responsible for about 50% of offenses, and 10% of victims
suffer about 40% of crimes (Kennedy & Van Brunschot, 2009, pp. 69–72).
We could measure the number or value of targets exposed. We could
measure exposure spatially (such as the target area as a proportion of the
total area or the target area’s length of border or coastline as a proportion of
total border or coastline) or in terms of flows (the number or scale of ports
of entry, migrants, or trade). These sorts of measures could be combined
mathematically in a single quotient (see Figure 5.1 for an example).
The UN ISDR (2009) noted that “[m]easures of exposure can include the
number of people or types of assets in an area. These can be combined
with the specific vulnerability of the exposed elements to any particular
hazard to estimate the quantitative risks associated with that hazard in the
area of interest” (p. 6). Similarly, the Humanitarian Practice Network
(2010, p. 42) prescribed asking which persons travel through or work in
the most exposed areas, which sites are most exposed to the threats, and
which assets are most exposed to theft or damage.
Exposure by Time
We are exposed in time whenever the threat is coincident with us or knows
where we are and can target us at that time. We could measure our exposure
in time in either absolute terms (such as days exposed) or proportional terms
(such as the fraction of the year). We could make the measure judgmental:
For instance, one judgmental scheme assigns a value between 1 and 5 to
correspond respectively with rare, quarterly, weekly, daily, or constant
exposure (Waring & Glendon, 1998, pp. 27–28).
Figure 5.1 A Formula for “Location Risk” (Tiv) Using Simple Measures
of Exposure and Threat
SUMMARY
• defined target;
• introduced ways to identify targets, by their attractiveness to threats
and their risk factors;
• defined vulnerability;
• described how to assess vulnerability, by our defenses, the gap
between target defensive and threat offensive capabilities, and the gap
between target defensive and standard defensive capabilities;
• defined exposure; and
• explained how to assess exposure by area and time.
QUESTIONS AND EXERCISES
1. Why could one thing be more likely than another to be a target of a threat?
2. Critique the unofficial and official definitions of vulnerability shown
earlier in this chapter.
3. Practically, what could we measure in order to assess the vulnerability of
a target?
4. Explain your favored choice of the official definitions of exposure shown
earlier in this chapter.
5. Practically, what aspects of a potential target could we measure in order
to assess exposure?
CHAPTER
Uncertainty
This section defines uncertainty, explains the difference between uncertainty
and probability, explains the difference between uncertainty and risk, and
introduces some practical ways to identify uncertainties.
Defining Uncertainty
Uncertainty is a lack of certainty, where certainty is “the personal quality of
being completely sure” (FrameNet). Many other definitions would suggest
that uncertainty, probability, and risk can be the same (see Pedagogy Box
6.1), but they are distinct, with important implications for how we assess and
manage them.
Pedagogy Box 6.1 Definitions of Uncertainty
Unofficial
Australia/New Zealand
British
Canada
Public Safety Canada (2012) does not define uncertainty but defines
uncertainty analysis as “an analysis intended to identify key sources of
uncertainties in the predictions of a model, assess the potential impacts of
these uncertainties on the predictions, and assess the likelihood of these
impacts” (p. 96).
Identifying Uncertainties
As soon as we face any uncertainty (for example, “we don’t know if that
species can carry this pathogen”), we face a risk, so a good route to the
identification of risk is to identify our uncertainties, even if we were so
uncertain that we could not yet estimate the probability or returns.
We can look for uncertainties in areas known to be very uncertain. For
instance, uncertainties concentrate in certain dimensions of any project (see
Table 6.1).
International diplomatic, capacity-building, and stabilization missions are
associated with particular uncertainties in similar dimensions (see Table
6.2).
The most extreme uncertainties are those that never occur to us. We could
be undertaking routine activities oblivious to the risks. From one
perspective, ignorance is bliss; from another perspective, ignorance is a
failure of imagination or diligence.
Some uncertainties cannot be improved, or could not be improved without
unreasonable burdens, so security and risk managers need to be comfortable
with some level of uncertainty, without giving up their aggressive pursuit of
certainty. A different skill is to persuade other stakeholders to accept the
same comfort level as yours.
Definitions
Probability, likelihood, and chance each mean the extent to which something
could occur. Probability implies quantitative expressions (using numbers);
likelihood and chance imply qualitative expressions (using words).
Quantitatively, probability can be expressed as a number from 0 to 1 or a
percentage from 0% to 100%.
The likelihoods run from no chance to certainty, but these words are
routinely misunderstood and misused to mean narrower ranges. Likelihood is
a potentially loaded word because it sounds similar to likely. (For instance,
unfortunately, FrameNet defines likelihood as “the state or fact of being
likely.”) If you were to ask someone to estimate the likelihood of something,
you might be leading them (however accidentally) to respond with an
estimate of likely. Similarly, if we ask someone about the probability of
something, they may infer that it is probable. If we asked someone to estimate
the possibility, they might be led to think of something that is barely possible,
not probable. The most loaded synonyms of chance are risk and hazard—
words with overwhelmingly negative connotations. In general use, these
words are used as verbs, as in to risk and to hazard, to mean taking the
chance of harm to some valued object in pursuit of some goal (Fillmore &
Atkins, 1992, pp. 87–88, 97). Unfortunately, no synonym or similar word is
clear of potential bias, but chance seems to be least loaded and the word that
surveyors should use when asking respondents for assessments.
Chance is “the uncertainty about the future” (Fillmore & Atkins, 1992, p.
81). Probability is “the extent to which something is probable”;
likelihood is “the state or fact of being likely” (FrameNet). “Probability
is the likelihood that the potential problem will occur” (Heerkens, 2002,
p. 148).
Likelihood is “the chance of something happening” and “is used with the
intent that it should have the same broad interpretation as the term
‘probability’ has in many languages other than English.” Likelihood
includes descriptions in “general terms or mathematically, such as a
probability or a frequency over a given time period.” “In English,
‘probability’ is often narrowly interpreted as a mathematical term” and
defined as a “measure of the chance of occurrence expressed as a number
between 0 and 1, where 0 is impossibility and 1 is absolute certainty”
(International Organization for Standardization, 2009a, p. 7).
British
Canadian
United States
Confidence
Given the choice, we should choose assessments or sources of assessments
in which we have more confidence, perhaps because those assessments are
produced from superior methods, superior sources, more genuine experts, or
people who have been proven correct in the past. Sometimes authorities
survey people on their confidence in forecasts.
The proper way to use ratings of confidence is to rely on the confident
forecasts and ignore the unconfident forecasts (by whatever threshold you set
between confidence and unconfidence). For instance, one group of experts
might assess the probability of an event at 60%, another group at 50%. In
order to choose between them, we could ask others to rate their level of
confidence in each group; we could ask even the forecasters themselves to
give their confidence in their own forecasts.
However, sometimes respondents or consumers confuse likelihood with
confidence; for instance, sometimes consumers multiply the rating of
confidence by the rating of likelihood to produce another rating of likelihood,
but this is not theoretically justified or fair to the respondent. In order to
discourage consumers from doing such a thing, intelligence analysts are
advised to declare their confidence in their judgment and to follow their
assessment of the probability “with the word ‘because’ and a response to
complete the sentence that includes a list of key factors that support the
judgment” (Pherson & Pherson, 2013, pp. 185, 188, 195).
SOURCE: All Hazards Risk Assessment Methodology Guidelines, 2012–2013, p. 25, Public Safety
Canada and Defence Research and Development Canada, 2013. Reproduced with the permission
of the Minister of Public Works and Government Services, 2013.
SOURCES: Pherson & Pherson, 2013, pp. 188–190; U.S. Government Accountability Office, 1998,
pp. 5–7.
Whatever qualitative terms are used, different users will interpret them
differently. For instance, surveys of military officers and intelligence
officials reveal that they interpret the word probably anywhere within a
quantitative range from 25% to 90% (even though, literally, probably
means greater than 50% to 100%). Worse, users interpret the word
probable differently when the impacts are negative or positive—around
70% to 80% probability for positive impacts but around 40% to 50% for
negative impacts.
The terms unlikely, likely, most likely, and almost certainly are
normative in the intelligence community, but without standard quantitative
interpretations. Some intelligence agencies have published tables of
standard quantitative interpretations for each term, but still have not
agreed a common standard, so these tables remain occasional guidance
that officials often disregard (Pherson & Pherson, 2013, pp. 185–188,
191). A potential standard, created more than 50 years ago by an
influential U.S. intelligence analyst, is shown in Table 6.6.
The British MOD has decided on similar schemes for intelligence (see
Table 6.7) and global trends (see Table 6.8).
Interestingly, the British MOD allowed for quantitative separation
between levels and an essentially non-literal scale for ease of
differentiation between levels, while the Canadian Chief of Defence
Intelligence has chosen a more literal and interval scale (see Table 6.9).
Unfortunately, many official authorities use more than one scheme,
without clear justification. For instance, the British defense and security
sector has used several different schemes for coding project risks, none
with any declared mathematical or literal justification (see two examples
in Table 6.10). The MOD eventually settled on a scheme for project risk
management (see Table 6.11), but this remains incompatible with its
schemes for intelligence or strategic trends (see Tables 6.7 and 6.8).
Surveying Judgments
Ideally, we would become sufficiently expert in the theories to
estimate the likelihood of future behaviors, but we could forego our own
review of the theories in favor of a survey of other experts. We could ask
respondents to assess the probability of an event occurring within some
period of time, to forecast when an event would occur, or if they would agree
with a prediction of an event within a period of time.
Freely available survey results, such as those reported by journalists, are
attractive because they cost nothing, even though the respondents are unlikely to be experts.
Such surveys often ask how the respondent will behave (such as vote) and
rarely ask the likelihood of something happening, but they may ask whether
the respondent thinks that something will occur or change. For instance,
survey items often ask whether the respondent believes that crime will
increase in a certain area. The proportion of respondents who agree could be
used as a probability; although this method has little theoretical justification,
it may be the best we could do with limited time or resources.
Any consumer faces some basic statistical choices (for instance, we
should select the median rather than mean response because medians are less
sensitive to outlier responses) and some methodological choices (for
instance, a “Delphi Survey” would ask the respondents to reforecast more
than once in reaction to the most recent responses, anonymously revealed—
this helps respondents to converge around the most realistic responses: see
Chapter 3).
SUMMARY
• defined uncertainty,
• explained the difference between uncertainty and risk,
• explained how to find and identify uncertainties,
• defined probability and likelihood,
• explained how to analyze probability,
• explained how to mathematically calculate probability,
• explained how to separate predictable and forecastable things,
• explained how to separate plausible and implausible things,
• noted the motivation for a separate assessment of confidence,
• explained how to convert qualitative and quantitative expressions of
probability,
• explained how to use frequency data to infer probabilities,
• referred to expert judgments on probabilities, and
• explained how to use time to infer probabilities.
QUESTIONS AND EXERCISES
This chapter defines returns (one of the two main concepts that make up a
risk), events, issues, and incidents, explains how events are assessed,
shows how returns are categorized, and gives practical advice on how to
assess returns.
Defining Returns
The returns of an event are the changes experienced by an affected entity. The
returns are experienced by or affect the holders of the risk or those exposed
to the risk, such as investors or victims.
The term returns, as used in this book, includes many other concepts, such
as effects, outputs, outcomes, costs, losses, profits, gains, benefits,
consequences (the U.S. government’s favorite), or impacts (the British
government’s favorite). These other terms are inconsistently used and
defined. For instance, the Canadian government refers to both consequences
and impacts to mean slightly different things.
Effect
Impact
The impact is “the amount of loss or damage that can be expected from a
successful attack on an asset. Loss may be monetary, but may include loss
of lives and destruction of a symbolic structure” (U.S. Government
Accountability Office [GAO], 2005b, p. 110). The U.S. Department of
Defense (DOD) Dictionary does not define this term.
Impact is the “scale of the consequences of a hazard, threat or
emergency expressed in terms of a reduction in human welfare, damage to
the environment and loss of security” (U.K. Cabinet Office, 2013). Impact
is the “consequence of a particular outcome (may or may not be expressed
purely in financial terms)” (U.K. MOD, 2011c, p. Glossary 2).
“The term impact is used to estimate the extent of harm within each of
the impact categories [people; economy; environment; territorial security;
Canada’s reputation and influence; society and psycho-social]. When
describing a composite measure of impacts (considering more than one
impact category), the term consequence is applied” (Public Safety
Canada, 2013, p. 22).
“Impact is the seriousness or severity of the potential problem in terms
of the effect on your project” (Heerkens, 2002, p. 148).
Consequence
For U.S. Department of Homeland Security (DHS) (2009), the
consequence is “the effect of an event, incident, or occurrence” (p. 109).
“Consequences mean the dangers (full or partial), injuries, and losses of
life, property, environment, and business that can be quantified by some
unit of measure, often in economic or financial terms” (U.S. Federal
Emergency Management Agency [FEMA], 1992, p. xxv). The
consequence is “the expected worst case or reasonable worst case
impact of a successful attack. The consequence to a particular [asset] can be
evaluated when threat and vulnerability are considered together. This loss
or damage may be long- or short-term in nature” (U.S. GAO, 2005b, p.
110). The U.S. DOD Dictionary does not define this term.
The consequence is “the outcome of an event affecting objectives.” It
“can be certain or uncertain and can have positive or negative effects on
objectives” (International Organization for Standardization, 2009a, p. 8).
The “consequences of an event” include “changes in circumstances”
(Australian and New Zealand Joint Technical Committee, 2009, p. 1).
For Canadian government, a consequence is “a composite measure of
impacts (considering more than one impact category)” (Public Safety
Canada, 2013, p. 22).
For the British Civil Contingencies Secretariat, consequences (plural)
are the “impact resulting from the occurrence of a particular hazard or
threat, measured in terms of the numbers of lives lost, people injured, the
scale of damage to property and the disruption to essential services and
commodities” (U.K. Cabinet Office, 2013).
In U.S. government, costs are “inputs, both direct and indirect,” while a
benefit is the “net outcome, usually translated into monetary term; a
benefit may include both direct and indirect effects” (U.S. GAO, 2005b,
p. 110). The U.S. DOD Dictionary does not define any of these terms.
Harm
Defining Event
An event is an occurrence in time or “a thing that happens or takes place”
(FrameNet). Events, as occurrences, include accidents, wars, attacks,
structural failures, etc. The returns of the event are separate things, such as
death or injury.
The event is useful to identify because we could calculate the risk more
accurately after assessing the likelihood of the event occurring and the
returns from such an event. Analytically, we should imagine how an actor
could cause an event and imagine the returns.
Returns and events are routinely conflated, sometimes for justifiable
convenience when the returns are unknown. For instance, people often talk
about a potential accident as a risk, where the real risk is the potential harm
from the accident (Fillmore & Atkins, 1992, p. 87). Similarly, the British
Civil Contingencies Secretariat defines a contingency as a “possible future
emergency or risk” (U.K. Cabinet Office, 2013), but the potential emergency
(event) cannot be the same as the potential returns (risk).
Conflation of events and returns is colloquially justifiable, but not
analytically justifiable. Strictly speaking, we would be wrong to talk of a
potential accident or potential war or potential building collapse as a risk
(each is a potential event, but not a potential return); we should be self-
disciplined enough to talk of a potential event with which we should
associate potential returns (risks), while keeping the event and the returns
separate.
The British Civil Contingencies Act of 2004, and thence the Cabinet
Office, does not define event but identifies incident (“an event or situation
that requires a response from the emergency services or other
responders”) and regularly uses the phrase event and situation in
definitions of various emergencies (U.K. Cabinet Office, 2013). The
British government has been keen to conceptualize issue, particularly in
the context of defense acquisition projects. For the U.K. MOD
(unpublished), an issue is “a significant certain occurrence differentiated
from a risk by virtue of its certainty of occurrence and by the fact that it
should be accounted for in Planning and Scheduling activities and not
Risk Management.”
The Canadian government does not define issue but defines incident in
a similar way, as “an event caused by either human action or a natural
phenomenon that requires a response to prevent or minimize loss of life
or damage to property or the environment and reduce economic and social
losses.” The government goes further to describe incident management
as “the coordination of an organization’s activities aimed at preventing,
mitigating against, preparing for, responding to, and recovering from an
incident” (Public Safety Canada, 2012, p. 52). For the Humanitarian
Practice Network (2010, p. xvi) a critical incident is “a security incident
that significantly disrupts an organization’s capacity to operate; typically
life is lost or threatened, or the incident involves mortal danger.”
Disruption
A disruption is “a disturbance that compromises the availability, delivery,
and/or integrity of services of an organization” (Public Safety Canada, 2012,
p. 28).
Crisis
The British Standards Institute defines a crisis as “an inherently abnormal,
unstable and complex situation that represents a threat to the strategic
objectives, reputation or existence of an organization,” but the Civil
Contingencies Secretariat essentially defines it as an emergency (U.K.
Cabinet Office, 2013).
For Canadian government, a crisis is “a situation that threatens public
safety and security, the public’s sense of tradition and values, or the integrity
of the government. The terms ‘crisis’ and ‘emergency’ are not
interchangeable. However, a crisis may become an emergency. For example,
civil unrest over an unpopular government policy may spark widespread
riots” (Public Safety Canada, 2012, p. 20).
Emergency
The UN (UN Department of Humanitarian Affairs [DHA], 1992) defines an
emergency as “a sudden and usually unforeseen event that calls for
immediate measures to minimize its adverse consequences.”
Britain
In Britain, the Civil Contingencies Act (2004) (U.K. Cabinet Office, 2013)
defines an emergency as “an event or situation which threatens serious
damage to human welfare [or] to the environment of a place in the United
Kingdom, or war, or terrorism that threatens serious damage to the security of
the United Kingdom.” The Civil Contingencies Secretariat went on to define
an emergency as “an event or situation which threatens serious damage to
human welfare in a place in the UK, the environment of a place in the UK, or
the security of the UK or of a place in the UK.” The British MOD (2011c)
defines an emergency as “an event or situation which threatens serious
damage to the human welfare, security or environment of the UK” (p.
Glossary-2).
The British Cabinet Office categorizes three types of events that could
count as emergencies:
1. Natural events
2. Major accidents
3. Malicious attacks
and that cannot be effectively dealt with under any other law of Canada.”
The Emergencies Act recognized four types of events as emergencies:
Strategic Shocks
The MOD adds strategic shocks, which are “high impact events that have the
potential to rapidly alter the strategic context” or result “in a discontinuity or
an abrupt alteration in the strategic context” or “dislocates the strategic
context from the trends that have preceded it.” Past examples include the
public breach of the wall dividing Berlin in 1989, the terrorist attacks of
September 11, 2001, and the global financial crisis of 2007–2008 (U.K.
MOD, 2010a, pp. 5–6, 91).
Categorizing Returns
The returns of major risks have multiple dimensions. The most typical
categorization is between material and human returns. For instance, the
consequences of terrorism have been coded from 1 to 5 on two dimensions:
humans killed or injured and economic costs (Greenberg, Chalk, Willis,
Khilko, & Ortiz, 2006). Swiss Re Group, like most insurers and reinsurers,
measures losses in terms of total economic losses; insured property claims;
and human casualties.
Unfortunately, official authorities prescribe diverse terms for, and
categories of, returns. Table 7.1 shows my attempt to align their categories.
Assessing Returns
Returns can be assessed judgmentally on ordinal scales or measured in
economic, organizational, operational, human, environmental, or
territorial/geopolitical terms, as explained in the subsections below.
The British MOD judges impacts on three levels and three dimensions.
Each harm or impact is ranked on a scale from 0 to 5 (see Table 7.2).
The U.S. DOD standard, which has been used in civilian government
and the commercial sector too, uses a 4-point scale of negative
consequences by three different dimensions (see Table 7.3).
The UN’s Threat and Risk Unit standardized a 5-point scale of impacts by
three dimensions (see Table 7.4).
Economic Returns
Economic returns include:
• buildings,
• infrastructure,
• machinery and equipment,
• residential housing and contents, and
• raw materials.
Organizational Returns
Organizational returns are often neglected in favor of more material
measures, but serious organizational returns include corporate reputation,
scale and direction of news media coverage, management effort, personnel
turnover, and strategic impact. Some ways could be found to measure these
returns in fungible financial terms, but usually the returns are intangible
enough to discourage measuring them at all.
SOURCE: All Hazards Risk Assessment Methodology Guidelines, 2012–2013, p.32, Public Safety
Canada and Defence Research and Development Canada, 2013. Reproduced with the permission
of the Minister of Public Works and Government Services, 2013.
Operational Returns
The outputs, performance, or effectiveness of projects, programs, activities,
strategies, campaigns, missions, and operations are routinely assessed,
usually in terms of operational effectiveness, which is another form of return.
Human Returns
Human returns are the changes experienced by human beings. Typically,
human returns are measured as deaths, injuries, disability-adjusted life years,
economic cost, and changes of situation, as described in subsections below.
Deaths
Deaths are normally measured as fatalities due to an event, mortality rate
(deaths per unit population), and frequencies (deaths per unit time).
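These three measures are simple arithmetic. As a minimal illustration (in Python, purely for exposition), the sketch below derives all three from a single death count; the figures are the 2010 U.S. road-traffic numbers cited later in this book, and any other count and population would serve.

# Minimal sketch: converting a death count into the three measures
# named above. Figures are the 2010 U.S. road-traffic numbers cited
# later in this book; the population figure is approximate.

deaths = 32_885                 # fatalities due to the event class in one year
population = 309_000_000        # approximate U.S. population, 2010
minutes_per_year = 365 * 24 * 60

mortality_rate = deaths / population * 100_000  # deaths per 100,000 population
frequency = minutes_per_year / deaths           # minutes between deaths

print(f"{deaths} fatalities")
print(f"{mortality_rate:.1f} deaths per 100,000 population")
print(f"one death every {frequency:.0f} minutes")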
In the context of terrorism, mass casualty tends to mean at least 5 dead and
high casualty tends to mean at least 15 dead.
Death is one of the severest human returns, but a focus on deaths can
underestimate other returns, such as injuries, disabilities, loss of life
expectancy, psychological stress, health costs, and injustices.
Injuries
An injury is any damage to the body. (Wound implies a puncture wound or an
injury due to violence.) An injury could cause death, but normally we
measure deaths and survivable injuries separately.
Injuries can be measured as a rate, as a frequency, by number on the body,
and on a scale (normally described by severity). For instance, the Canadian
government recognizes a high degree of injury as “severe harm that affects
the public or compromises the effective functioning of government following
a disruption in the delivery of a critical service” (Public Safety Canada,
2012, p. 51).
Injuries can be categorized by location on or in the body, medical cause
(for instance, toxic, traumatic), or event (for instance, road traffic accident,
sports).
Injuries, like deaths, have economic value in the tort system and health
system (see below).
DALYs
Disability-adjusted life years (DALYs) are the years of healthy life lost
across a population due to premature mortality or disability caused by some
event or threat. The DALY is a useful measure because it places two separate
measures (deaths and injuries) on a single scale, helping to equate the
returns. For instance, one source might affect few people but kill most of
them, while another source severely disables everyone it affects but kills few
of them directly.
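As a minimal sketch of the arithmetic, assuming the standard formulation in which DALYs sum years of life lost to premature mortality (YLL) and years lived with disability (YLD); all parameter values below are hypothetical:

# Minimal DALY sketch, assuming the standard formulation
# DALY = YLL + YLD. All parameter values are hypothetical.

def dalys(deaths, years_lost_per_death, cases, disability_weight, years_disabled):
    yll = deaths * years_lost_per_death               # years of life lost
    yld = cases * disability_weight * years_disabled  # years lived with disability
    return yll + yld

# Source A: affects few people but kills most of them.
source_a = dalys(deaths=90, years_lost_per_death=30,
                 cases=10, disability_weight=0.2, years_disabled=30)

# Source B: kills few directly but severely disables everyone it affects.
source_b = dalys(deaths=5, years_lost_per_death=30,
                 cases=95, disability_weight=0.6, years_disabled=30)

print(source_a, source_b)  # 2760.0 vs 1860.0: the two sources become comparable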
The DALY method remained unchanged from 1990 until the Global Burden of
Disease Study (2012) adapted it.
SOURCE: All Hazards Risk Assessment Methodology Guidelines, 2012–2013, p.27, Public Safety
Canada and Defence Research and Development Canada, 2013. Reproduced with the permission
of the Minister of Public Works and Government Services, 2013.
Economic Value
Since outcomes can be measured on many imperfectly comparable dimensions,
assessors naturally search for a fungible measure, which tends to be
financial, but this can seem too insensitive or calculating, especially when
fatalities are measured economically. The economic value of a particular
fatality can be calculated from past inputs, such as the sunk cost of education,
and lost future outputs, such as lost earnings.
A life lost or injured has financial value in insurance and the tort system
(although insurance or legal liability would not be engaged in every case).
Many medical authorities (such as the American Medical Association) and
government authorities issue standard “schedules” for assessing the
economic worth of injuries, for use by adjusters in insurance claims,
employers in compensation of injured employees, and judges in tort cases.
The first calculation normally expresses the impairment locally on the body,
specific to the part of the body (down to the finger or toe or organ), the scale
of the impairment, and the type of injury. Multiple impairments are then
combined and expressed as the resulting impairment to the whole body. This
overall impairment is combined with the occupation of the victim and the age
of the victim according to prescribed codes and formulae. The final product
of all this is normally a “rating” of the victim’s “final permanent disability
rating” or “physical impairment for employment,” on a scale from 0% (no
reduction of earning potential) through progressively more severe partial
disability to 100% (permanent total disability).
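As an illustration of the combination step, one common rule (the basis of the Combined Values Chart in the American Medical Association's guides) counts each successive impairment only against the remaining unimpaired fraction of the body. The sketch below assumes that rule; the impairment values are hypothetical:

# Minimal sketch of combining multiple impairments into a whole-body
# rating, assuming the common rule: combined = a + b * (1 - a),
# applied from the largest impairment down. Values are hypothetical.

def whole_body_impairment(impairments):
    combined = 0.0
    for impairment in sorted(impairments, reverse=True):
        combined = combined + impairment * (1.0 - combined)
    return combined

# e.g., a 30% impairment combined with a 20% impairment
rating = whole_body_impairment([0.30, 0.20])
print(f"{rating:.0%}")  # 44%, not 50%: impairments do not simply add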
Since 1984, the U.S. government has offered financial compensation (in
the form of money or tax relief) for deaths and injuries suffered by
resident victims of crime, usually to the value of a few tens of thousands
of dollars. In 1996 (following mass casualty terrorism at a bombed
federal office in Oklahoma City in 1995), this offer was extended to
resident victims of terrorism, even if abroad at the time. Victims of
terrorism on September 11, 2001, were compensated under an additional
federal scheme, whose payouts far exceeded payouts by charities or
insurers. From all sources, the seriously injured and the dependents of
each killed victim received $3.1 million per victim on average, except
emergency responders, who received $4.2 million on average. Payouts
varied with projected lifetime earnings (Dixon & Kaganoff Stern, 2004).
Since the 1960s, the British government has offered to victims of
violent crime compensation by type of injury, with a current range from
£1,000 to £500,000 (effective November 2012). In January 2012, the
British government decided to include (effective April 2013) British
victims of terrorism abroad since 2002.
Changes of Situation
Changes of situation include forced migration or displacement from normal
place of residence, homelessness, loss of work, separation from family, loss
of rights, and injustice.
Environmental Returns
Natural environmental damage can be assessed in economic terms when
agriculture or some other economically productive use of the area is affected.
Otherwise, an environmental regulator or assessor refers to statutory scales
of value or legal precedents before seeking to impose a fine on or
compensation from the perpetrator. Biologists and geographers may assess
environmental damage in lots of nonfungible direct ways, such as animals
killed, species affected, and the area of land or water affected. The land area
is the most fungible of these measures.
After a natural disaster, the Canadian government uses the area affected to
modify the ranking of the magnitude of official response (see Table 7.6),
where a damaged area of 400 square kilometers would raise the ranking
of official response by 1.0 (see Table 7.8).
Territorial/Geopolitical Insecurity
Separately from any assessment of harm to a natural environment, we may need
to assess the declining security of a geopolitical unit, such as sovereign
territory, a city, or a province. This would be a routine measure in unstable countries or
countries with porous or contested borders, such as Pakistan, Afghanistan,
Colombia, the Philippines, and Somalia, where central government is not in
control of all of its territory most of the time.
Pedagogy Box 7.14 Canadian Assessment of
Territorial Insecurity
SOURCE: All Hazards Risk Assessment Methodology Guidelines, 2012–2013, p.40, Public Safety
Canada and Defence Research and Development Canada, 2013. Reproduced with the permission
of the Minister of Public Works and Government Services, 2013.
This magnitude score would be modified by duration (as sketched after this list):
• 1 hour (subtract 2)
• 1–3 hours (subtract 1.5)
• 3–10 hours (subtract 1)
• 0.5–1 day (subtract 0.5)
• 3–10 days (add 0.5)
• 20 days to 1 month (add 1)
• 1–3 months (add 1.5)
• 3–12 months (add 2)
• 1–3 years (add 2.5)
• 3–10 years (add 3)
• More than 10 years but not permanent (add 3)
• Permanent (add 3.5)
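Expressed procedurally, the duration modifier might look like the sketch below (durations in hours). The band boundaries follow the list above; where the list leaves a range unstated (for instance, 1 to 3 days, or 10 to 20 days), the sketch makes an explicit assumption, noted in the comments:

# Minimal sketch of the duration modifier, with durations in hours.
# Band boundaries follow the list above; gaps in the list are
# resolved by the assumptions noted below.

DAY, MONTH, YEAR = 24, 24 * 30, 24 * 365

BANDS = [                  # (upper bound in hours, modifier)
    (1, -2.0),             # up to 1 hour
    (3, -1.5),             # 1-3 hours
    (10, -1.0),            # 3-10 hours
    (DAY, -0.5),           # 0.5-1 day (10-12 hours folded in: assumption)
    (3 * DAY, 0.0),        # 1-3 days: assumed to carry no modifier
    (10 * DAY, +0.5),      # 3-10 days
    (MONTH, +1.0),         # 20 days-1 month (10-20 days folded in: assumption)
    (3 * MONTH, +1.5),     # 1-3 months
    (12 * MONTH, +2.0),    # 3-12 months
    (3 * YEAR, +2.5),      # 1-3 years
    (10 * YEAR, +3.0),     # 3-10 years
]

def duration_modifier(hours, permanent=False):
    if permanent:
        return 3.5
    for upper_bound, modifier in BANDS:
        if hours <= upper_bound:
            return modifier
    return 3.0             # more than 10 years but not permanent

print(duration_modifier(6))         # -1.0 (3-10 hours)
print(duration_modifier(40 * DAY))  # +1.5 (roughly 1-3 months)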
It would also be modified by population density:
SUMMARY
This chapter has
• defined returns,
• defined events,
• defined issues and incidents,
• showed how to assess events,
• explained how to categorize returns, and
• explained how to assess returns—judgmentally on an ordinal scale and
objectively by their economic, organizational, human, environmental,
and territorial/geopolitical returns.
QUESTIONS AND EXERCISES
II
Managing Security and Risk
This (second) part of the book builds on good analyses and assessments of
security and risks by explaining how to manage security and risks.
Chapter 8 will help the reader to design and to develop an organization’s
cultures, structures, and processes to more functionally manage security and
risk. Chapter 9 explains why different people have different sensitivities to
different risks, that is, why different people tend to be oversensitive to
certain types of risks while tolerating others. Chapter 10 explains how to
choose controls and strategies in response to risks. Chapter 11 shows how
we should record risks, communicate to stakeholders about our
understanding and management of security and risks, monitor and review our
current management, and audit how others manage security and risk.
Cultures
A culture is a collection of the dominant norms, values, and beliefs in a
group or organization. This section explains why security and risk managers
should pay attention to culture, how they can assess a culture, and how they
can develop a culture to be more functional.
The British Standards Institution (2000) noted that the “role of culture in
the strategic management of organizations is important because: the
prevailing culture is a major influence on current strategies and future
choices; and any decisions to make major strategic changes may require a
change in the culture.” It identified failures of risk management “due, at
least in part, to a poor culture within the organization” despite the
organization’s proper attention to the process (p. 21).
Similarly, the International Risk Governance Council (2008, pp. 6, 20)
noted that organizations and societies have different “risk cultures” that
must be managed as part of risk management.
The Australian/New Zealand and ISO standard (International
Organization for Standardization, 2009b) stresses in one of its 11
“principles for risk management” that risk management should take
account of “human and cultural factors” and that the “framework” should
be tailored to and integrated into the organization.
Assessing Culture
Culture is difficult to observe because it is less tangible than structure and
process, but a researcher could directly observe organizational personnel to
see whether they betray normative noncompliance with, negative valuations of,
or incorrect beliefs about security and risk management.
The researcher could survey personnel with classic questions such as “Do
you believe risk management is important?” or “Do you follow prescribed
processes when nobody else is watching?”
Sometimes, a bad culture is betrayed by repeated failures to implement
processes, to exercise authority, or to take responsibility for risk
management. Ideally, such repeated failures should be caught early by
regular monitoring and reviewing and should prompt an audit that would
diagnose the root causes (as discussed in Chapter 11).
Developing a Culture
Changing a culture is difficult, but obvious solutions include exemplary
leadership, more awareness of the desired culture, more rewards for
compliance with the desired culture, more punishments for noncompliance,
and more enforcement of compliance.
Of course, we should also consider whether the negative culture is a
reaction to something dysfunctional in the structure or process. For instance,
perhaps employees are copying noncompliant leaders; perhaps employees
have good reason to dislike some of the prescribed processes, such as
processes that are too burdensome or that incorrectly assess certain risks. If
the structure or process is at fault, the structure or process needs to change
positively at the same time as we try to change the culture positively.
Ultimately, culture is changed and maintained only by congruence with
structure and process, by training and communicating the desired culture to
personnel, and by cultural congruence across all departments and levels
from managers downwards.
Pedagogy Box 8.2 Developing a “Security
Culture”
• “Make sure that all staff are familiar with the context, the risks and
the commitments of the organization in terms of risk reduction and
security management.
• Make sure that all staff are clear about their individual
responsibilities with regard to security, teamwork and discipline.
• Advise and assist staff to address their medical, financial and
personal insurance matters prior to deployment in a high-risk
environment.
• Be clear about the expectations of managers and management styles
under normal and high-stress circumstances.
• Make security a standing item (preferably the first item) on the
agenda of every management and regular staff meeting.
• Stipulate reviews and if needed updates of basic safety and
security advice, as well as country-wide and area-specific security
plans, as described above.
• Invest in competency development. It is not uncommon for aid
agencies to scramble to do security training when a situation
deteriorates. Investment should be made in staff development,
including security mitigation competences, in periods of calm and
stability.
• Ensure that security is a key consideration in all program planning.
• Perform periodic inspections of equipment by a qualified
individual, including radios, first aid kits, smoke alarms, fire
extinguishers, intruder alarms and body armor.
• Carry out after-action reviews (AARs). The focus is on assessing
what happened and how the team acted in a given situation, not on
individual responsibilities. It is a collective learning exercise.”
(Humanitarian Practice Network, 2010, pp. 13–14)
Structures
Structures are patterns of authorities and responsibilities. The authorities are
those departments or persons assigned to determine how security and risk
should be managed. The responsible parties are supposed to manage security
and risk as determined by the authorities. The three subsections below
respectively explain why the development of structure is important, give
advice on developing the internal structure of an organization, and give
advice on developing functional relations between organizations.
The report concluded that “given the expenditure of over £1.1 billion
since 1998 without the delivery of its principal armoured vehicles—the
Department’s standard acquisition process for armoured vehicles has not
been working.”
The NAO blamed the MOD also for specifying high capability goals,
then failing to compromise given the technology available.
The NAO suggested that the MOD could improve its standard
acquisitions process by learning from UORs, which identify incremental
requirements and technological opportunities beyond current acquisitions
but warned that the MOD would need to combine planning for full
sustainment and value for money beyond current operations. The NAO
recommended that the MOD should improve its technological awareness
and pursue evolutionary development within realistic technological
opportunities.
The NAO had not fully investigated the structure, process, or culture of
acquisitions, and some anonymous officials complained that the NAO was
better at identifying past failings than solutions and better at blaming the
MOD than the political decisions with which the MOD must comply. The
political administration (Labour Party, 1997–2010) was not prepared to
cut any procurement program during its many wars but instead
incrementally trimmed the funding or projects from practically all
programs, many of which consequently could not achieve their specified
capabilities.
The budgeting system also created structural problems. As in the
United States, national government in Britain is funded from annual
authorizations, within which the MOD matches money to programs with
little spare capacity, so when one program overran its budget or suffered
a cut, money was taken from other programs or the program’s activities or
objectives were cut. The Committee of Public
Accounts found that from 2006 to 2011 the MOD had removed £47.4
billion from its equipment budget through 2020–2021, of which 23%
(£10.8 billion) covered armored vehicle projects. The Committee
recommended that the MOD “should ensure that future procurement
decisions are based on a clear analysis of its operational priorities, and
must challenge proposals vigorously to ensure they are both realistic and
affordable. Once budgets have been set, they must be adhered to. The
Department’s inability to deliver its armoured vehicles programme has
been exacerbated by over-specifying vehicle requirements and using
complex procurement methods” (U.K. MOD, 2011c, p. 5).
The Treasury was most influential over UORs, since it capped the
budget for all UORs and often meddled with individual UORs, with the
result that MOD departments fought mostly internally over the few UORs
that could be approved—those approved tended to be the cheaper UORs.
Since the users of the products of these UORs often were multiple or
inconsistent, the user was weakly represented in these decisions. Special
Forces were the most consistent and politically supported users, so
tended to enjoy the best success rate, but often with undesirable outcomes
for other users. For instance, the MOD acquired a new light machine gun
for the whole army—it had satisfied the requirement from the special
forces for an ambush weapon but was practically useless in most of the
long-range defensive engagements in Afghanistan. Similarly, the MOD
acquired lots of small, fast, unarmoured vehicles that were useful for
special operations but were soon deleted from the fleet. Some vehicles
acquired via UORs met justifiable requirements (the more survivable
vehicles were most required), but they were usually acquired without
training vehicles, so users often first encountered new types of vehicles
only after deploying.
The Labour government had promised to publish Gray’s report in July
2009 but delayed publication until October, then deferred most consideration until the
next Strategic Defence Review. A national election (May 6, 2010)
delayed that review.
On February 22, 2011, 9 months after taking office as Defence
Secretary, Liam Fox announced his first reforms of the MOD’s
procurement process, which he condemned as “fantasy defence
procurement” and a “conspiracy of optimism.” He promised that
procurement projects would not proceed without a clear budgetary line
for development, procurement, and deployment. He announced a Major
Projects Review Board (under his own chairmanship) to receive
quarterly updates on the MOD’s major programs—first the 20 most
valuable projects, followed by the rest of the 50 most valuable projects.
The Board met for the first time on June 13, 2011. Following the meeting,
the MOD asserted that
Any project that the Board decided was failing would be publicly
“named and shamed.” This could include a project that is running
over budget or behind expected timelines. This will allow the public
and the market to judge how well the MOD and industry are doing in
supporting the Armed Forces and offering taxpayers value for
money.
The Defence Reform Unit’s report was published on June 27, 2011.
The Report made 53 wide-ranging recommendations, the most important
of which was for a smaller Defence Board, still chaired by the Defence
Secretary, but without any military members except the Chief of Defence
Staff. The three Service Chiefs were supposed to gain greater freedom to
run their own services. The services would coordinate primarily through
a four-star Joint Forces Command. The MOD would form separate
Defence Infrastructure and Defence Business Services organizations.
Another recommendation was to manage and use senior military and
civilian personnel more effectively, transparently, and jointly, with people
staying in post for longer, and more transparent and joint career
management. An implementation plan was expected in September 2011,
for overall implementation by April 2015. The process of acquisition was
not expected to change, although the structure would. The Select
Committee on Defence (U.K. House of Commons, 2011b, Paragraph 207)
recommended that the MOD should appoint “suitably experienced
independent members” to the Board.
On September 12, 2011 (speaking to the Defence & Security Equipment
International show in London), the Defence Secretary claimed that
Britain’s “forces in Afghanistan have never been so well-equipped.” In
Afghanistan at that time, British forces employed about 10,000 military
personnel and 22 different models of armored vehicle at a cost in 2011 of
£4 billion (from contingency reserve, above the core defense budget of
£33.8 billion in fiscal year 2011–2012). The U.K. MOD was in the
middle of a 3-month study into the Army’s future vehicle fleet—clearly
most of the armoured or fighting vehicles returning from Afghanistan (by
the end of 2014) would not be required for the core fleet; some could fill
core requirements but the cost of repatriating, reconfiguring, and
sustaining even these vehicles would be prohibitive in an era of austerity.
On May 14, 2012, Defence Secretary (since October 2011) Philip
Hammond announced to the House of Commons his latest reforms.
Processes
A process is a series of actions or activities toward some end. The
subsections below explain why the development of a prescribed process of
security and risk management is important and give examples of standard
processes to choose from.
SUMMARY
This chapter has
• defined culture,
• explained why we should care to develop the culture of security and
risk management,
• advised on how to assess a culture,
• advised on how to develop a culture,
• defined structure,
• explained why the design and development of organizational structures
is important,
• given advice on developing the internal structure of an organization,
• shown how to develop relations between organizations,
• defined process, and
• compared different standard processes.
QUESTIONS AND EXERCISES
Organizational Sensitivity
Different organizations have different sensitivities due to internal decisions,
strategies, and cultures. The Australian/New Zealand and ISO standard
(International Organization for Standardization, 2009b) admits that “human
and cultural factors” can change an organization’s sensitivity. The BSI (2000,
p. 21) noted that “the prevailing culture is a major influence on current
strategies and future choices.” At that time, the British government found that
42% of British government departments (n = 237) regarded themselves as
more risk averse than risk taking, although 82% supported innovation to
achieve their objectives (U.K. National Audit Office, 2000, p. 5).
Loss Averseness
Table 9.1 Drivers of Risk Sensitivity and Insensitivity
In general, people are loss averse, meaning that they are disproportionately
more sensitive to losses than to gains. For instance, imagine a choice
between an investment from which you could earn a sizeable profit but also
face a 25% probability of losing all of your investment, and another
investment offering the same profit but an 84% chance of losing just 30% of
your investment. The expected losses are practically the same in both cases
(0.25 × 100% = 25% of the investment versus 0.84 × 30% ≈ 25%), but most
people would be repulsed by the option in which they could lose everything.
(This response would be justified by analysis of the range of returns: In
one case the range runs from total loss to profit; in the other the range
runs from a loss of 30% of total investment to profit. The latter is less
uncertain and has a smaller worst-possible return.)
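The arithmetic behind “practically the same” can be checked directly; a minimal sketch, using only the probabilities and losses stated above:

# Minimal sketch: both gambles carry nearly the same expected loss
# but very different worst-possible returns.

p_loss_a, loss_a = 0.25, 1.00   # 25% chance of losing the whole stake
p_loss_b, loss_b = 0.84, 0.30   # 84% chance of losing 30% of the stake

print(p_loss_a * loss_a, p_loss_b * loss_b)  # expected losses: 0.25 vs 0.252
print(loss_a, loss_b)                        # worst cases: 100% vs 30% of stake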
In economic language, people tend to have a nonlinear loss function: Their
aversion to a loss grows more quickly than the likelihood of that loss. If a
person were speculating just once on a potential high loss, the loss
averseness could be described as rational (better not to bet at all rather than
accept the chance of losing everything). If the person were able to bet many
times and the odds of gains were favorable each round, even if a high loss
were possible in any round, the person could rationally justify placing the bet
many times, expecting a net gain over the long run. Professional gamblers
think rationally in terms of net gains over the long run, but most people are
nonlinearly loss averse.
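The long-run logic can be illustrated with a simple simulation; the bet parameters below are hypothetical (favorable odds each round, with a possible total loss of the round's stake):

# Minimal simulation of the long-run argument: a bet with favorable
# odds each round can be rational to repeat, even though any single
# round can lose the whole stake. Parameters are hypothetical.

import random

random.seed(1)

def one_round(stake=1.0):
    # 25% chance of losing the whole stake; otherwise a 50% profit
    return -stake if random.random() < 0.25 else 0.5 * stake

rounds = 10_000
average = sum(one_round() for _ in range(rounds)) / rounds

# Close to the expected value per round: 0.75 * 0.5 - 0.25 * 1.0 = 0.125
print(average)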
Returns Focused
In general, people tend to focus on the potential returns more than the
probability of those returns. In fact, most people are so bad at
distinguishing between probabilities that they often ignore probabilities
altogether. In effect, their risk calculation overstates the returns and
understates the likelihood, with the result that they obsess about potential
highly negative events just because they would be highly negative, while
neglecting much more likely but less negative events. This helps explain the
popular cultural attention to low likelihood, high impact negative events
(“shocks” or “black swans”) and inflated popular fears about the most
harmful and spectacular but least likely crimes, such as mass murder.
In fact, people inflate the likelihood of both negative and positive things,
so they tend to be both over-confident about winning a bet and over-
expectant of crime and other bad things.
When asked to assess a risk, people typically will not consciously think in
terms of likelihood or returns but will report some feeling about the risk,
a feeling subconsciously biased toward either the likelihood or the returns.
Consequently, surveyors should ask respondents to assess the likelihood and
returns separately. Nevertheless, most people probably assess the likelihood
and returns of any particular event identically, based on a single feeling
about the risk. For instance, the World Economic
Forum (2013, p. 45) has found that respondents rated the likelihood and
impact of each given risk similarly; a strict statistical interpretation of the
results would suggest that highly likely events tend to be highly impactful and
that unlikely events tend to be inconsequential, but the real world shows
otherwise: More likely events (such as normal precipitation) tend to be less
consequential than less likely events (such as hurricanes).
Proximity
People will feel more sensitive to a risk that is more proximate in time or
space (say, when they are travelling through a stormy area) than when they
are remote to the risk (say, when the last storm recedes into the past). This
proximity is correlated with anchoring, as explained below.
Psychological Anchoring
When people experience an event, they tend to be psychologically
“anchored” in the experience and more sensitive to the associated risks. For
instance, persons who experience a road traffic accident today likely would
be more sensitive to the risks associated with road traffic accidents
tomorrow than they were yesterday. Over time, or with therapy or
distractions or maturation, memory tends to deteriorate or become less
salient, and the sensitivity declines, but a particularly formative or shocking
experience may anchor the person’s sensitivity forever, even manifesting as a
medical condition, such as post-traumatic stress disorder.
Cognitive Availability
People do not need to experience the event directly to become anchored in it,
as long as the experience is available to them in some captivating way, such
as visual images of the events or personally related, emotionally delivered
verbal accounts by those who experienced it. More immersive or
experiential media, such as movies and video games, increase the effect. The
cognitive availability of the event produces effects similar to direct
experience of the event. For instance, most people in the world were very
remote to 9/11 in geographical terms but were shocked by the images and
accounts and felt more sensitive toward terrorism risk. Normal human
reactions to 9/11 were understandable and rational, since audiences were
learning about a real event with implications for risks everywhere.
Cognitive availability can mislead the audience. People are more likely to
recall risks associated with striking images or experiences than risks that are
less cognitively available (even if they are objectively higher). The great
interest that popular culture takes in fictional violent crime and that
journalists take in real violent crimes contributes to a general perception that
violent crimes are more frequent than they really are. Survey respondents
often report high levels of fear but are imprecise about the threats, although
they blame crime in general, the surrounding area in general, or certain
demographics, and they refer to a recent crime as evidence of increasing
frequency. This high but abstract fear has been termed free-floating fear.
Such fear is very frustrating for police and public safety professionals when
they succeed in lowering crime without lowering the public’s fears of crime
(although sometimes such professionals are less willing to admit that minor
crimes and social disorder often increase even while the rates of serious
crime or overall crime decrease).
Consequently, people can feel more sensitive to remote risks just because
they become more aware of them, not because the risks are becoming more
proximate. For instance, elderly people were found to be more fearful when
they had frequent visitors; these frequent visitors tended to remind the elderly
about all the things to worry about, whereas people with fewer visitors
received fewer reminders and developed less fear. Similarly, the self-
reported fears of violent crime among residents of Winnipeg, Manitoba,
surged after they received broadcasts from a television station in Detroit,
Michigan, 1,000 miles away, where the news reports were dominated by
crime. Winnipeg’s crime rate had not changed (Kennedy & Van Brunschot,
2009, pp. 31–32).
The availability bias is easy to manipulate by officials, journalists and
their editors, and entertainers—anybody with capacity to release striking
images or conceptualizations to a mass audience (remember the earlier
description of “risk society”).
Unrepresentativeness
People also take cognitive shortcuts through their memory to cases that seem
“representative” of a new case, even when the cases are not provably
representative at all. For instance, one person could blame a surge in youth
crime on bad parenting because of a memory of bad parents whose child
turned to crime, while another person could blame the same surge on poverty
because of a memory of a deprived child who stole food.
Base-Rate Neglect
Worse, most people are naturally underempirical: They react to the most
available and proximate events rather than check the longer-term rate or
trend. This dysfunction is also called base-rate neglect.
Perversely, people tend to tolerate frequent negative events, like road
traffic events, even though they would not accept similar losses of life
concentrated in shorter periods of time. Some risks are tolerated because
they are familiar, routine, and distributed regularly over time. (Road traffic is
justified also for rational reasons, such as social, economic, and personal
gains.) Road traffic killed one American every 16 minutes in 2010 (see
Table 9.2): This frequency perversely suggests routine risk; moreover,
driving is voluntary. This helps explain why people are much more sensitive
to infrequent small losses of life to terrorism than to frequent losses of life to
road traffic (Renn, 2008, p. 23).
Maturation
Maturation suggests less base-rate neglect (and less anchoring and
unrepresentative cases), although unfortunately most people do not mature
significantly in this respect once they reach adulthood.
Young children tend to be insecure and inexperienced with risks, but as
they enter adolescence, they tend to act more recklessly, particularly toward
thrill-seeking speculative risks; youth and masculinity are associated with
testosterone, a natural hormone that has been shown (in females too) to peak
at the same time as reckless behavior. As people age or mature, testosterone
production tends to fall naturally and they tend to gather experiences of risks
and responsibilities (such as dependent families) that encourage them to be
more sensitive.
Yet older people, like very young children, naturally tend to focus on very
short-term concerns and can even behave more recklessly (for instance, rates
of sexually transmitted disease increase after retirement, after a trough in
middle age). Thus, over a typical lifetime, risk sensitivity would fall lowest
in adolescence and early adult years, peak in middle age, and decline in old
age (except to short-term risks).
The role of age and gender is noticed even among experts. For instance,
the World Economic Forum (2013, p. 50) found that its younger respondents
(40 years old or younger) and female respondents were more pessimistic
about both the likelihood and impact of negative risks in general (on average,
women rated the likelihood as 0.11 higher and the impact as 0.21 higher on a
5-point scale) and economic risks in particular. Older respondents (officials
and politicians tended to be found in this group) were more pessimistic about
particular risks (prolonged infrastructure neglect; failure of climate change
adaptation; rising greenhouse gas emissions; diffusion of weapons of mass
destruction), but presumably for professional reasons, rather than any age-
related sensitivity. (The World Economic Forum surveyed more than 1,000
experts, mostly from business, academia, nongovernmental organizations, and
government.)
Misinformation
When people feel uninformed about a risk, they tend to overestimate low
risks, while underestimating high risks (assessment bias). In fact, people can
be more comfortable with certainty about a high risk than uncertainty about
the level of risk, in part because uncertainty drives them to fear the worst.
Informed awareness of the risk helps to lower sensitivity, even when the risk
turns out to be worse than feared! For instance, a person living for years with
undiagnosed symptoms probably wonders whether the underlying condition
is harmless or deadly (a wide range of returns). That person might be
relieved to receive a firm medical diagnosis, even if the prognosis does not
include recovery, just because the diagnosis relieves some of the uncertainty.
In some sense, the uncertainty was a negative event in itself, since it was
worrying and stressful.
Objective expertise usually leads to informed awareness. Inexpert
familiarity with negative risks tends to cause inflation of those risks. For
instance, almost everyone has heard of cancer, but few people objectively
understand the chances by different sources or causes—consequently, they
react viscerally to anything that they believe could be carcinogenic.
Conversely, as long as their exposure is voluntary, people will expose
themselves to things such as sunlight, tanning machines, nicotine, alcohol, or
sexually transmitted pathogens that are much more carcinogenic than
practically harmless but externally controlled agents like fluoridated water.
Experts should be able to identify a risk that outsiders either inflate
(because they fear it) or deflate (because they do not understand why they
should fear it). However, disciplinary biases could drive experts within a
particular domain to misestimate, leaving us with a dilemma about whether
to believe the in-domain or out-domain experts. For instance, the World
Economic Forum (2013, p. 51) found that economists assessed economic
risks, and technology experts assessed nanotechnology risks, as lower than
external experts assessed these same risks, while environmental experts
assessed environmental risks higher than others assessed them.
Real Situations
In hypothetical situations, people tend to claim more recklessness or bravery
than they would exhibit in real situations. For instance, if asked how they
would behave if they saw a stranger being robbed, most people would claim
to defend the victim, but in practice, most people would not. This is an issue
for risk managers who might use data gathered from respondents on their risk
averseness in hypothetical or future scenarios—most respondents claim less
sensitivity than they would show in the real world, and their responses could
be manipulated easily by the framing of the question.
Lack of Control
People are more sensitive to risks that they feel that they cannot control or
are under the control of remote persons or groups, such as official regulators.
Resentment is more likely if the controllers are seen to be incompetent or
separate, such as by ethnicity or religion or class. Perversely, people can be
insensitive to voluntary risks if they think they can control them even if they
cannot. For instance, alcoholics and tobacco smokers tend to underestimate
the addictiveness of alcohol or nicotine and tend to underestimate the health
risks (Renn, 2008, p. 22).
People tend to accept surprisingly high risks so long as they are voluntary
and thrilling or promise intrinsic or social rewards despite the negative
risks, such as driving cars fast or playing contact sports. In a sense, these are
speculative risks. Indeed, people effectively tolerate probable net losses
when they bet against a competent oddsmaker, but complain about unlikely
and smaller losses over which they have less control, such as being
shortchanged. Most people are more sensitive to pure risks, like terrorism, than
objectively higher risks, like driving a car (hundreds of times more lives are
lost in car accidents than to terrorism), in part because driving is a
speculative risk (at least in the sense that it is voluntary) (see Table 9.2).
Distrust
Most people are keen on controls on negative risks so long as the controls do
not impact their lives, but they are distrustful of controls that affect them. In
fact, people are happy to complain about all sorts of pure risks and how
poorly they are controlled by higher authorities, but also complain when
controls are imposed without consultation, affect personal freedoms, or use
up resources that could be used elsewhere. Surveys can produce
contradictory responses, such as majority agreement that a risk is too high
and needs to be controlled, but majority disagreement with any significant
practical controls that might be proposed.
Risk sensitivity decreases when the speculative behavior is
institutionalized within trust settings or regimes that the speculator perceives
as fair and universal. When rule breaking is exposed, risk sensitivity
increases (as well as altruistic punishment) (Van Brunschot & Kennedy,
2008, p. 10). For instance, decades ago most people had confidence in the
banking system, but since financial crises and scandals in 2008 they became
more sensitive to financial risks and more distrustful of the financial system.
People can be keen on punishing rule breakers even if the punishment of a
few is bad for themselves; their motivation makes evolutionary and rational
sense if such punishment improves future compliance with rules—this
motivation is known as altruistic punishment.
Sometimes this blame is justified, particularly when a minority of people
break the rules and expose the rest of us to increased risks. For instance, we
should feel aggrieved if a colleague allowed a friend to bypass the controls
on access to a supposedly secure area and that friend stole our property or
attacked us. Similarly, many people blamed a handful of financial speculators
for the financial crisis in 2008 and proposed punitive measures such as
stopping their employment, stopping their income bonuses or salary rises, or
taxing their high earnings before the crisis (a tax known by some advocates
as a “Robin Hood” or “windfall” tax). A contradiction arises when a risk
grows uncontrolled and perhaps manifests as a shocking event while most
people had not supported controlling the risk, but most people still would
seek to blame others.
Group Psychology
Individuals tend to be vulnerable to social contagion or peer pressure.
Groups tend to encourage members toward reckless behavior as long as the
majority is unaffected. Groups naturally provide members
with the sense of shared risks, collective protection, and cohesion. Groups
can exploit any of these effects by actively encouraging recklessness in return
for membership or credibility within the group. On the other hand, where the
group is risk sensitive but a new member is risk insensitive, the group would
encourage the member to conform to the group.
Pair Bonding
Most of us will ignore or accept very high risks as long as they are socially,
emotionally, or sexually attractive, familiar, or normative. Pursuit of such
risks could be described as rational so long as we expect rewards such as
belonging or happiness, but most people are unrealistic about the positive
risks, blind to the negative risks, and neglectful of easy controls on the
negative risks.
For instance, most people will marry or seek a lifelong romantic
relationship and claim that they are doing so for love rather than material
gain. In many ways, the trajectory is admirable and exhilarating—why
interrupt something so natural and emotional with caution and negotiation?
Yet a lifelong commitment faces much uncertainty because of the long term of
the commitment and the comparative immaturity of the parties. Part of the
impulsiveness is biochemical and has been compared to intoxication or
addiction. The elevation of the biochemicals (such as serotonin) typically
subsides within 18 to 36 months of “falling in love,” although the longer-term
parts of love could persist. Love seems to be some combination, varying by
individual experience, of friendship or companionship, attachment or
commitment, and sexual lust or attraction.
In effect, most people celebrate their romantic and emotional impulses, bet
on lifelong love, and even eschew easy controls on the risks that would make
lifelong love easier to achieve. Popular culture is full of celebratory stories
about people who impulsively make commitments despite little experience
with their partners, opposition from close family or friends, official barriers,
or poverty.
Unfortunately, few people achieve lifelong love and the costs of a failed
partnership are great. Even though most people expect “’til death do us part”
and forego a prenuptial agreement when they marry, most first marriages end
in divorce (see Table 9.2). A typical wedding cost $27,000 per average
American couple in 2011 (according to TheKnot.com). A divorce could cost
a few thousand dollars, if both parties would not contest the divorce, but a
typical divorce is more expensive (about $40,000 for the average couple in
direct legal costs alone) and is associated with other negative returns, such
as opportunity costs (which could add up to hundreds of thousands of
dollars), changes of financial arrangements, residency, and employment,
stress, and loss of custody of property and children.
If humans were rational in their pairing and procreating, they would rarely
need help or fail, but historically all societies have developed norms,
cultures, and institutions intended to direct human behaviors toward lifelong
pairing and procreation and toward gender roles and codependency. The
ideals of love, lifelong pairing, and marriage have natural origins and
rewards, but each is partly socially constructed. Some of these structures are
to the benefit of couples or society as a whole, but some are counter-
productive. For instance, many people feel trapped in marriages by fears of
the inequities of divorce. In traditional societies, men have most of the rights
and benefits, while women have few alternatives to marriage and even can
be forced to marry. In return, men are expected to provide for their wives and
children; their failure to provide is often the sole allowable grounds for a
female-initiated divorce. Societies that are legally gender-equal or -neutral
still culturally and effectively treat provision as a masculine role. Many
public authorities still promise greater alimony payments and widowed
pensions to women than men on the obsolete grounds that women are
helpless without men. Family legal systems in developed societies tend to
favor women during disputes over child custody and separation of assets
(even though in traditional societies they tend to favor men). These norms,
cultures, and institutions effectively change the risks by gender, but couples
and parents, when starting out, are largely oblivious to these structures.
Some couples choose not to separate but are not happy together. Naturally,
a dissatisfactory pairwise commitment could be rational, given the material
efficiencies of living together, the redundancy in the pair when one is
incapacitated, and the costs of separation. Some people make perfectly
rational calculations that a relationship is a hedge against future incapacity;
some people simply expect to gain from a more affluent partner or from an
employer or government that has promised benefits to partners; some people
stay unhappily together for the sake of their marriage vows, their children, or
religious and other socially constructed reasons. Some economists would
argue that all humans are making rational, self-interested calculations all the
time, but most people, at least consciously, report that their romantic pairings
are emotional, not rational (Bar-Or, 2012).
Procreating
Most adults will have children during their lifetime, even though child
rearing is very expensive and offers few material efficiencies (unlike a
childless adult pairing). For many parents, the emotional rewards of
parenting are incalculable and outweigh all material costs, but parents also
tend to underestimate the material costs. The U.S. Department of Agriculture
estimates the direct expense of rearing an American child from birth in 2011
through age 17 at around $235,000 for a middle-income family, excluding
higher education expenses of around $30,000 to $120,000. The average cost of
tuition and fees for the 2011/12 school year was $8,244 for a public college
and $28,500 for a private one. The indirect costs and opportunity costs for
the parents are likely to add up to a sum at least double the direct costs (Lino,
2012).
In addition to the financial risks are the potential psychological and
emotional returns: Parents, like romantic partners, must face many
disappointments in the choices made by their dependents; tragically, a child
could reject his or her parents, fall ill, or die; if the parents separate, likely
one parent would be left with little custody. Yet most people, when they
choose to have children, do not care to predict the negative risks
realistically, or they choose to accept whatever comes. The emotional
rewards of rearing children may be described as rational, although this
would be conflating too much of the rational with the emotional. Some may
have rational reasons to procreate, such as future reliance on children for
support during the parental generation’s retirement, although many parents
would object ethically to such reasons. Of course, for many people
procreation is an accident of recreation: They never meant to have children
at that time or with that person, but sexual rewards are difficult to ignore.
Sexual impulsiveness helps to explain the high rate of sexual acts without
protections against either pregnancy or sexually transmitted diseases.
War Initiation
For sovereign states, a clearly high risk is potential defeat in war. Most
states are at peace most of the time, but when they enter wars, they tend to be
overoptimistic or simply feel that they have no choice (perhaps because the
alternative is capitulation without fighting). Less than 60% of belligerents
will win their wars (as defined in the Correlates of War dataset for the last
200 years or so), and this proportion is inflated by states that bandwagon
belatedly with the winning side. More than one-third of war initiators (such
as a state that attacks another state without provocation) will lose, despite
their implicit confidence in victory at the time they initiated. This
overconfidence can be explained partly by the political and social pressures
for governments to throw their weight around or to comply with belligerent
stakeholders or to divert stakeholders from other issues. Official
overconfidence must be explained partly by poor official risk assessments
too.
Avoidable Disease
People naturally accept some risks that are seen as unavoidable or routine,
such as the lifetime chance of cancer (about 40%) or heart disease (about
25%). Less rational is voluntary engagement in activities—primarily
smoking, alcohol abuse, unhealthy eating, and sedentary lifestyles—that
dramatically increase the risks: Some activities easily double the chances. In
fact, most prevalent cancers and heart diseases are entirely “avoidable” (see
Table 9.2).
Air Travel
Air travel is often described as the safest mechanical way to travel, but this
statement illustrates a difficulty with frequency data: Frequencies often do
not fairly compare dissimilar events. Air safety is rigorously regulated and
inspected, whereas individual car owners and drivers effectively regulate
their own safety, outside of infrequent and comparatively superficial
independent inspections. Consequently, an average aircraft flight is much less
likely to cause fatality than an average car journey. Yet the fatalities of air
travel seem large because a catastrophic failure in an aircraft tends to kill
more people per failure. Consequently, an air accident is much less frequent
but more newsworthy than a typical road accident. Moreover, an air
passenger can feel lacking in control of the risks, whereas a car driver can
feel in control (a situation that tends to decrease sensitivity to risk). For all
these reasons, people tend to be more sensitive to the risks of air travel than
car travel. Yet road travel kills more than 600 times more Americans than air
travel kills. Even though people take more road journeys than air journeys,
road traffic is more deadly than air travel, both absolutely and as a rate per
miles travelled (see Table 9.2).
Road Travel
In 2010 (the safest year on American roads in recorded history), road traffic
killed 32,885 Americans (10.6 per 100,000 inhabitants), an average
of 90 persons per day, and injured 2,239,000 Americans (724 per 100,000
inhabitants), an average of 6,134 persons per day.
Americans take many more journeys by car than by aircraft, so the higher
number of Americans killed on the roads is partly a function of increased
exposure to road traffic. Car travel seems less risky when the measures
capture exposure: in 2010, the National Highway Traffic Safety
Administration observed 0.0011 fatalities per 100,000 miles travelled, 12.64
fatalities per 100,000 registered motor vehicles, and 15.65 fatalities per
100,000 licensed drivers.
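These exposure-adjusted rates are simple divisions. A minimal sketch reproduces them from the underlying 2010 counts; the exposure figures below are approximations for illustration, not official values:

# Minimal sketch reproducing the exposure-adjusted rates above.
# The fatality count is from the text; exposure figures are
# approximate 2010 values used for illustration.

fatalities = 32_885
vehicle_miles = 2.97e12       # approx. vehicle-miles travelled, 2010
registered_vehicles = 260e6   # approx. registered motor vehicles
licensed_drivers = 210e6      # approx. licensed drivers

def per_100k(count, exposure):
    return count / exposure * 100_000

print(per_100k(fatalities, vehicle_miles))        # ~0.0011 per 100,000 miles
print(per_100k(fatalities, registered_vehicles))  # ~12.6 per 100,000 vehicles
print(per_100k(fatalities, licensed_drivers))     # ~15.7 per 100,000 drivers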
The risks increase with voluntary behaviors, such as reckless driving,
telephone use, or intoxication. About 30% of Americans will be involved in
an alcohol-related car accident at least once during their lifetime (see Table
9.2).
Violence
People are sensitive to violence of all kinds because the perpetrator’s intent
seems outside of the victim’s control (although sometimes intent is activated
by the target), and violence suggests severe effects on the unprotected human
body. Potential harm from violence is certainly a pure risk, whereas driving
and firearm ownership are voluntary and might be experienced as thrilling.
Most types of violent crimes, including homicides, have declined in America
since the 1980s. Violent crimes are a minority of all crimes, but in 2010
Americans suffered more than 1.2 million murders, rapes and sexual assaults,
robberies, and aggravated and simple assaults (408 per 100,000 inhabitants).
Tragically, some victims experienced more than one of these crimes. More
than 2.2 times as many Americans die in road traffic accidents as are
murdered (14,748, or 4.8 per 100,000 inhabitants, in 2010, another declining
year for murders).
Private firearm ownership also has declined (from a high of 54% of
households in 1994 to lows of 40%, according to Gallup’s surveys);
nevertheless, firearm crimes have not declined as a proportion (around 80%)
of crime. Most deaths by firearms are suicidal, but suicides do not trouble
most other people; instead, people worry more about external uses of
firearms against them. Firearm crime started to rise after a trough in 2000
(road traffic fatalities have continued to decline since then). In 2010, in
America 11,078 homicides were committed with firearms (3.6 per 100,000
residents); the deaths and injuries attributable to firearms cost America $68
billion in medical costs and lost work (data source: Centers for Disease
Control).
Most Americans are genuinely scared of firearm crime. Indeed, these fears
drive much of the defensive demand for guns. Nevertheless, sensitivity to
firearm crime is countered by the freedoms to own and bear arms. Some
cities and states legislated against firearm ownership or carriage, but
national political interest in firearm control did not change significantly until
after a series of mass murders with firearms occurred in 2012. At the end of
2012, the political administration pushed for more restrictive federal
legislation. This event illustrates the potential for “shocks” (unusual events)
to change popular or political sensitivity.
Terrorism
Many more Americans are murdered for nonpolitical reasons than are
murdered by terrorists, but terrorism attracts more sensitivity. Everybody
should feel more sensitive to political violence after shocking events like the
terrorist attacks in the north-eastern United States of September 11, 2001.
(Use of the term terrorism risk rose about 30 times from 2000 to 2005,
according to Google Ngram.) Rationally, high lethality (nearly 3,000) on one
day (9/11) demonstrates the capacity of terrorism to kill more people at once,
but nonterrorist Americans and road traffic accidents each killed many
times more Americans even in 2001, and 2001 was an extreme outlier for
terrorist-caused deaths. Road traffic and nonterrorist Americans are
consistently more deadly and costly every year.
From 9/11 through 2012, nonterrorist Americans murdered more than 600
times more Americans (180,000) within the United States than terrorists
murdered (less than 300 U.S. citizens, excluding U.S. military combatants)
both in the United States and abroad. Fewer Americans were killed by
terrorism in that period than were crushed to death by furniture.
In the most lethal year (2001) for Americans due to terrorism, road traffic
killed more than 10 times more Americans than terrorism killed. During the
seven calendar years (2000–2006) around 9/11 and the U.S. “war on terror,”
terrorism killed about 80 times fewer Americans than road traffic accidents
killed. During that same period, road traffic killed 300 times more Britons
than terrorism killed. Even in Iraq, during the peak in the insurgency and
counter-insurgency there, terrorism killed about the same proportion of the
Iraqi population as the proportion of the British population that was killed by
road traffic. The rate of road traffic deaths in low- to middle-income
countries was 200 to 220 times greater than the rate of terrorism deaths
globally.
Terrorism does threaten political, social, and economic functionality in
ways that typical road traffic accidents and ordinary murders cannot, but
terrorism is not as costly in direct economic terms. For 2000, the U.S.
National Highway Traffic Safety Administration estimated $230.6 billion in
total costs for reported and unreported road traffic accidents, excluding the
other costs of traffic, such as environmental and health costs due to emissions
from automobile engines. The terrorist attacks of September 11, 2001, (the
costliest ever) cost around $40 billion in insured losses.
Terrorism, like any crime, is a pure risk for the victim; its violence and
seeming irrationality prompt visceral sensitivity. Moreover, terrorism is
infrequent and concentrated in time. Some authors have pointed out that
governments have rational, self-important, and manipulative reasons (such as
desires to avoid scrutiny, link other issues, or sell otherwise unpopular
policies) for inflating terrorism and other violent threats to fundamental
national security (“the politics of fear”; “the politics of security”) (Cox,
Levine, & Newman, 2009; Friedman, 2010; Mueller, 2005, 2006).
Political and popular obsessions with terrorism declined in the late 2000s,
when economic and natural catastrophes clearly emerged as more urgent. In
Britain, the Independent Reviewer of Terrorism Legislation admitted that the
most threatening forms of terrorism deserve special attention but pointed out
that objectively terrorism risks had declined and were overstated. “Whatever
its cause, the reduction of risk in relation to al-Qaida terrorism in the United
Kingdom is real and has been sustained for several years now. Ministers
remain risk averse—understandably so in view of the continued potential for
mass casualties to be caused by suicide attacks, launched without warning
and with the express purpose of killing civilians.” He took the opportunity to
describe terrorism, in the long run, as “an insignificant cause of mortality” (5
deaths per year, 2000–2010) compared with other causes of death in 2010
alone: 17,201 total accidental deaths, 123 cyclists killed by road traffic, 102
military personnel killed in Afghanistan, 29 Britons drowned in bathtubs, and
5 Britons killed by stings from hornets, wasps, or bees (Anderson, 2012, pp.
21–22, 27).
Terrorism is still risky, but the main lesson in all this is that terrorism
tends to be inflated at the expense of other risks.
Sharks
Many people have phobias of harmless insects, birds, and even ephemeral
things (such as clouds). Some of these phobias are difficult to justify
rationally but are more likely where the source is considered strange,
unpredictable, or indistinguishable from harmful versions or relatives.
Hazards with the potential for great, random, unconscious, or irrational
violence tend to occupy inordinate attention. These hazards include terrorists
and other premeditated violent criminals (who are much less frequent than
nonviolent criminals) and animals such as sharks.
Most species of sharks do not have the physical capacity to harm humans
and are wary of humans (humans kill millions of wild sharks per year).
Almost all attacks by sharks on humans are by a few unusually large species,
almost always by juveniles developing in shallow waters who mistake
humans for typical prey; in most attacks, the shark makes a single strike, does
not consume anything, and does not return for more. Having said that, a single
strike could be fatal if, say, it severs an artery or causes the victim to breathe
in water. In some cases sharks have attacked humans who seem unusually
vulnerable or numerous, such as after shipwrecks.
Shark attacks are extremely rare, but popular culture devotes considerable
attention to shark lethality. Sharks kill 0.92 Americans per year, while
trampolines kill 1.10 Americans per year, rollercoasters 1.15, free-standing
kitchen-ranges 1.31, vending machines 2.06, riding lawnmowers 5.22,
fireworks 6.60, dogs 16.00, skydiving 21.20, furniture 26.64 by crushing,
and road traffic 33,000 (Zenko, 2012). Of course, we should remind
ourselves that frequencies can be misleading because they tend to ignore
dissimilar parameters such as exposure. One explanation for the low number
of people actually killed by sharks is the low exposure of people to sharks:
very few people swim with sharks. Nevertheless, sharks are much less
dangerous to humans than most humans believe.
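To make the exposure point concrete, here is a hedged sketch of an exposure-adjusted rate; the exposure figures below are invented placeholders, chosen only to show the arithmetic, not estimates from this chapter:

    def deaths_per_million_hours(deaths_per_year, person_hours_per_year):
        """Deaths normalized by person-hours of exposure."""
        return deaths_per_year / person_hours_per_year * 1_000_000

    # Hypothetical exposure estimates (NOT data from this chapter):
    shark = deaths_per_million_hours(0.92, 5_000_000)         # hours near sharks
    road = deaths_per_million_hours(33_000, 500_000_000_000)  # hours on roads
    print(shark > road)  # True: the rarer raw killer is riskier per exposure hour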
SUMMARY
QUESTIONS AND EXERCISES
CHAPTER
10
This chapter explains controls and strategies—the actual things under our
control that affect our risks. The two main sections of this chapter define
control and explain when and why controls are applied to risks and
define strategy and explain when and why to choose between available
strategies in response to risk.
Control
This section defines control, explains how to separate tolerable from
intolerable risks that you should control, explains the trade-off between
intolerability and practicality, explains why sometimes even tolerable risks
are controlled, and explains why different stakeholders can have different
levels of toleration and control at the same time.
Defining Control
A control is anything that was intended to or effectively does reduce a risk
(see Table 10.1 for official definitions). If a control reduces a risk, the
precontrol state of the risk is usually known as the inherent risk, while the
postcontrol state is usually known as the residual risk.
The control is not necessarily an absolute solution to the risk. It may
reduce a risk to a still intolerable level or to an only temporarily tolerable
level. Consequently, good risk management processes prescribe monitoring
the risk, even after control.
Since 1995, the Food and Drug Administration (FDA), which oversees
the safety of most foods, medical devices, and medical drugs in the United
States, has published a tolerability level for insect parts in food, even
though surveys of food consumers show that most consumers, rhetorically
at least, would not knowingly tolerate any insect parts in food. In effect,
the FDA regards its “food defect action levels” (such as more than 75
insect parts per 50 grams of flour or more than 60 insect parts per 100
grams of chocolate) as ALARP levels: “The FDA set these action levels
because it is economically impractical to grow, harvest, or process raw
products that are totally free of non-hazardous, naturally-occurring,
unavoidable defects” (U.S. FDA, 2005).
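The logic of an action level is a simple density threshold. A minimal sketch, using the flour level quoted above (75 insect parts per 50 grams):

    def exceeds_action_level(defects, sample_grams, level_defects, level_grams):
        """True if the defect density exceeds the published action level."""
        return defects / sample_grams > level_defects / level_grams

    print(exceeds_action_level(80, 50, 75, 50))  # True: actionable
    print(exceeds_action_level(60, 50, 75, 50))  # False: tolerated (ALARP)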
Tolerance of an ALARP level could introduce new risks, such as
potential collapse of public confidence in the authority that tolerated the
ALARP level before some associated public shock revealed such
tolerance to the public. For instance, from 2004 to 2007, U.S. consumers
were shocked by a series of revelations of meat from diseased livestock
in the human food chain. Effectively, authorities had knowingly tolerated
diseased meat in the food chain, while most consumers had been unaware.
In 2009, new U.S. federal laws took effect that outlawed diseased meat
from being passed for human consumption. In early 2013, British consumers
were shocked by revelations that meats routinely sold in supermarkets,
fast food outlets, and restaurants as beef had been identified genetically as
horse meat, mostly from continental European sources, causing a collapse
in confidence in Europe-wide regulation of food labelling.
Incompatible Tolerability
Sometimes different stakeholders effectively work alongside each other with
different tolerability levels. For instance, during recent multinational
coalition operations in Afghanistan (since 2001), Iraq (since 2003), and
other countries, soldiers from developed countries have been instructed not
to travel without armed or armored protection, while indigenous personnel
were issued inferior levels of protection, and some “third-party” nationals in
the employ of civilian contractors were expected to work without any
protection at all.
These incompatible tolerability levels reflect the different risk
sensitivities within each national culture and organizational culture, and the
different practical constraints on each actor’s controls.
In some cases, incompatible sensitivities or controls may not matter to
cooperative operations, but they could interfere with interoperability. For
instance, in Afghanistan and Iraq, foreign personnel often were forbidden
from entering high-risk areas that local personnel had been ordered to enter,
while local personnel often demanded equipment of the same survivability as
used by foreign troops. Similarly, local personnel often accused foreign
personnel of deferring too readily to remote strike weapons, such as air-to-
ground missiles launched from aircraft, that sometimes cause collateral
civilian casualties, while foreign personnel often accused local personnel of
lacking care in the use of their portable firearms against civilians
misidentified as enemies.
Strategies
This section defines risk management strategies, describes existing
prescribed strategies, describes the six “T” strategies, and describes
combined or balanced strategies.
Defining Strategy
A risk management strategy is any purposeful response to insecurity or risk;
the strategy might be emergent or subconscious, but must aim to affect
security or risk. The strategy is usefully distinguished from the controls—the
particular actions used to change a particular risk, such as a guard acquired
as part of a protective strategy. (Many authorities on risk management have
offered a set of recommended responses to risk or approaches to security that
they usually term treatments, approaches, responses, or strategies.
Unfortunately, many authorities use these terms interchangeably for strategy
or control.)
Existing Strategies
The Australian/New Zealand standard (since 1995) and ISO (International
Organization for Standardization, 2009a, pp. 9–10) offer a set of seven
strategies that has proved most appealing, but not perfect, partly because
some of the seven strategies overlap (see Table 10.2). For instance,
retaining the risk is written to include both negative and positive risks,
which overlaps with pursuing a positive risk. Similarly, changing the
consequences involves mostly controlling the consequences of a potential
event, but, as written, includes also the retention of financial reserves, which
would not directly control the consequences at all and is better placed as a
substrategy of retaining the risk. The ISO standard is followed by the
Canadian government, among others, but the Canadian government is
dissatisfied with the ISO strategies and recently (2013) published a cursory
development, which remains ongoing.
Trade associations tend to follow the ISO; otherwise, prescriptions tend to
be contradictory. For instance, the Humanitarian Practice Network (2010, pp.
28, 50, 55) identified three risk management strategies, three overlapping
security strategies, and two variations of the risk management strategies, for
eight overlapping approaches that actually shake out as substrategies to three
of the seven strategies offered by the ISO (see Table 10.2).
Similarly, two criminologists have categorized just three
strategies/approaches (prepare and make ready, respond, recover and
prevent), within which they conflated many competing optional approaches.
For instance, within “preparedness and readiness” they effectively conflated
transferring risks, avoiding risks, defending against threats, and preventing
negative events—each of which is different from preparing or making ready
for an event. The only natural separation between “preparing” and “responding”
is chronological (you should prepare to respond to an attack in case it
happens; if it happens, you should respond). Finally, “recover and prevent”
included an optional approach (“prevent”) that naturally belonged in the first
stage but was mentioned in all three stages. To be fair, they admitted “some
degree of slippage with respect to the notions of preparedness and
prevention” (Van Brunschot & Kennedy, 2008, p. 184).
Official authorities have tended to focus their risk management strategies
on project risks, such as the U.S. Defense Acquisition University’s four
uncontentious strategies (avoid; control; accept; transfer). Other official
authorities focus on security strategies such as preparedness,
resilience, continuity, and any other of a total of nine synonyms that largely
mean controlling the negative consequences (see below) of a potential
event—which is only one of the seven strategies offered by the ISO.
The Institute of Chartered Accountants of England and Wales (Turnbull,
1999) suggested four effective strategies (see Table 10.2), which were most
influential on the British government. Subsequently, the Treasury prescribed (and
most other departments adopted) five risk management strategies known as
the “five Ts” (U.K. Ministry of Defense [MOD], 2011c, pp. 6–7). The British
government’s project management standard (PRINCE2) follows similar
strategies but knows them by other names. These five Ts also contain
impractical overlaps and separations. For instance, treating and terminating
risks involve essentially the same activities—terminating the risk would be
the ultimate effect of perfectly treating the risk.
The Six “T” Strategies
Clearly, the current offerings are unsatisfactory. The authoritative
prescriptions do not agree on even the number of strategies. Some of their
strategies align neatly, but some contain substrategies that are placed under
different strategies by different authorities. Some strategies are separated but
are really variations of each other. Some offerings are very narrow. Most
surprisingly, no authority admits diversification, a routine strategy in many
domains, especially finance. Similarly, no authority explicitly admits the
possibility of turning a risk from negative to positive.
Tolerate
The strategy of toleration might be known elsewhere as one of assumption or
acceptance, but these terms are often confused with taking the risk, which
implies pursuit of a positive risk (see below).
Tolerating the risk would eschew any control on the risk (although this
should not imply forgetting about the risk). We could choose to tolerate the
risk, even if it were higher than our threshold for intolerability, if we were to
decide that the benefit of controlling the risk would not be justified by the
cost of controlling the risk.
Tolerating the risk means eschewing any additional control but is not the
same as eschewing any management of the risk. Tolerating the risk should
imply either watching or retaining the risk, either of which might be treated
elsewhere as a separate strategy but is properly treated as an option within
the strategy of tolerating the risk, as described in subsections below.
Watch
While we tolerate a risk, we should watch the risk in case the risk changes.
Such a watch implies, in practice, periodic reassessment of the risk level. If
the risk were to fall, we would feel more justified in tolerating the risk. If the
risk were to rise, we should consider a new strategy (probably treat the
risk).
Retain
A strategy of “retaining the risk” implies that the owner of the risk is holding
or building reserves against the potential negative returns. For instance, if we
feared a poor harvest and could not find or afford an insurer or donor who
would promise to supply any shortfall in our supply of food, we should build
reserves of food. Similarly, if we invest in a new commercial venture but
decide that insurance against financial failure would be too costly, we should
hold or build financial reserves that could pay for the costs of failure.
Retaining the risk is the main alternative to transferring the risk to some
outside actor (such as an insurer or partner).
Reduce Exposure
Reducing our exposure to the sources of risk would reduce their
opportunities to harm us. Reducing exposure involves any of four
substrategies: deferring our exposure to the sources; avoiding exposure;
withdrawing from exposure; or containing the hazard.
Defer
We could choose to defer our acceptance of the risk. The word defer implies
that we are not currently exposed to the risk but that we reserve the option to
undertake the risk at a later point. For instance, we could decide that an
investment is too negatively risky this year, so we could defer a review of the
decision to next year in case the risk might have changed to a state that is
worth pursuing.
Avoid
The word avoid implies that we want to do something without exposing
ourselves to a negative risk. For instance, we could decide that we should
intervene in a lawless area in order to terminate the threats at their
geographical source—this is a strategy of termination. An alternative
strategy is to intervene in the area whenever the threats are not present,
perhaps in order to build local capacity or provide humanitarian aid—this
strategy is one of avoidance.
Withdraw
A strategy of withdrawing from the risk implies that we are currently
exposed to the risk, but we choose to stop our exposure to the risk. For
instance, we could be operating in some city where the chance of political
violence rises to an intolerable level, at which point one of our choices is to
move somewhere else.
Contain
Containing the hazard could be achieved by preventing the hazard from
reaching us, or preventing ourselves from coinciding with the hazard. For
instance, if a flood were to reach us it would be a threat, but if we could
construct some diversion or barrier the flood would not reach us. Similarly, a
river that routinely floods a narrow valley could be dammed. Similarly, a
criminal could be detained.
Sometimes containment holds a hazard only temporarily, until it returns to its
threatening state. Worse, containment could strengthen the hazard. For
instance, detention of criminals is criticized for bringing criminals together
where they can further radicalize and prepare each other for further crime,
without providing opportunities for renunciation of crime or the take-up of
lawful employment. Indeed, more criminals return to crime after detention
(this return is known as recidivism) than return to lawfulness.
Sometimes, an attempt to contain a hazard might reduce the frequency of
minor events but not all events. For instance, a dam would terminate minor
floods, but if flood waters could overflow the dam then we would have less
frequent but more catastrophic floods.
Some strategies of containment have costs that are underassessed by the
author of the strategy: often the domain, such as flood prevention, in which
the risk manager is working, is imperfectly competitive with the domain,
such as natural biodiversity, that suffers the costs of the measures. For
instance, from the environmentalist’s perspective, damming a valley is likely
to damage its natural environment in ways that are not justified by the
decreased chance of flooding in the town.
Reduce Intent
Since a threat necessarily must have intent and capability to harm us, we
could keep a hazard in its hazardous state or return a threat to its hazardous
state by terminating the source’s threatening intent. The three main
substrategies are to reduce the causes of the activation of such intent, to
deter intent, and to reform intent.
Reduce the Causes of Activation
The causes of the threat include the activation of the hazard into a threat.
Prevention of the causes would prevent the threat from arising from the
hazard.
Prevention is particularly appropriate in domains such as preventable
diseases: helping people to give up behaviors such as smoking is far cheaper
than treating smokers for lung cancer; vaccinating against a pathogen is
ultimately cheaper than treating the diseases caused by the pathogen.
Similarly, prevention of climate change would be more effective and
efficient (1% to 2% of global GDP until 2050) than treating the effects (5%
to 10% of global GDP until 2050) (Swiss Re, 2013, p. 13). Similarly,
preventing human hazards from acquiring the intent or capabilities to behave
as terrorists is more efficient than defending every potential target from every
potential threat.
Prevention is attractive in international relations, too, where the negative
returns can be enormous. For instance, the British government has long
advocated for more international cooperation in the assessment and control
of potential conflicts.
Deter
Deterring the threat means dissuading the hazard from becoming a threat. For
instance, most national efforts to build military capacity are explicitly or
effectively justified as deterrents of potential aggressors. So long as potential
aggressors are deterred, they remain in a hazardous state and do not reach a
threatening state.
Deterring the threats would reduce the frequency of negative events. For
instance, at physical sites, security managers often seek to draw attention to
their alarms, cameras, and guards in order to increase the chances that the
potential threat would observe these measures and be deterred. However,
encouraging observation of our defensive measures could help the potential
threat to discover vulnerabilities, such as fake alarms, misdirected cameras,
and inattentive guards. For the purpose of deterrence, ideally, defensive
vigilance and preparedness should be observable without being counterable.
We could seek to detain or kill or otherwise punish people for having the
intent to harm. This action may reduce the threat’s capabilities directly and
deter others from becoming similar threats, although sometimes punishment
(particularly in counter-terrorism) becomes vengeful rather than purposefully
deterrent.
Reform
We could also seek to reform or turn around someone’s harmful intent.
Postdetention programs, such as supervision by parole officers and
suspended sentences, seek to reduce recidivism mainly by containment and
deterrence, but some programs include mandatory participation in seminars
and suchlike that aim to reform the former prisoner’s intent. Much counter-
terrorism since the 2000s has focused on persuading people that terrorism is
morally wrong and on encouraging them to renounce terrorism and to speak
out against it.
Reduce Capabilities
Reducing threatening capability involves controlling the hazard’s acquisition
of capability or reducing the threat’s acquired capabilities.
Counter the Acquisition of Capabilities
Preventing the potential aggressor from acquiring the capabilities to threaten
us is the objective behind the many strategies called “counter-proliferation,”
which aim to reduce the supply of arms to hazardous actors (such as
unfriendly states, insurgents, and terrorists). Countering acquisition is easy to
confuse with containing the hazard but is not the same strategy. For instance,
while seeking confirmation as U.S. Secretary of State, Senator John Kerry
told the Senate’s Foreign Relations Committee (January 24, 2013) about
current U.S. strategy toward Iranian nuclear weaponization: “We will do
what we must do to prevent Iran from obtaining a nuclear weapon, and I
repeat here today, our policy is not containment [of a threat]. It is prevention
[of acquisition], and the clock is ticking on our efforts to secure responsible
compliance.”
Given that a group’s capabilities include personnel, this strategy could
focus on countering the group’s recruitment.
Protection
For the U.S. DHS (2009), protection is the “actions or measures taken to
cover or shield from exposure, injury, or destruction. In the context of the
NIPP [National Infrastructure Protection Plan], protection includes actions to
deter the threat, mitigate the vulnerabilities, or minimize the consequences
associated with a terrorist attack or other incident” (p. 110).
For the British Civil Contingencies Secretariat, civil protection is
“organization and measures, under governmental or other authority, aimed at
preventing, abating or otherwise countering the effects of emergencies for the
protection of the civilian population and property” (U.K. Cabinet Office,
2013).
For the Humanitarian Practice Network (2010, pp. xviii, 55, 71) the
protection approach is “a security strategy” or “approach to security” that
“emphasizes the use of protective devices and procedures to reduce
vulnerability to existing threats, but does not affect the level of threat.” It
later added that reducing vulnerability under this approach can be done “in
two ways, either by hardening the target or by increasing or reducing its
visibility,” but the latter reduces the likelihood, not the returns.
Preparedness
For the UN, preparedness is the “activities designed to minimize loss of life
and damage, to organize the temporary removal of people and property from
a threatened location and facilitate timely and effective rescue, relief and
rehabilitation” (UN DHA, 1992) or “the knowledge and capacities
developed by governments, professional response and recovery
organizations, communities, and individuals to effectively anticipate, respond
to, and recover from, the impacts of likely, imminent, or current hazard
events or conditions” (UN ISDR, 2009, p. 9).
For the U.S. Federal Emergency Management Agency (FEMA) (1992),
preparedness is “those activities, programs, and systems that exist prior to an
emergency that are used to support and enhance response to an emergency or
disaster.” For U.S. DHS (2009), preparedness is the
Mitigation
For the UN, mitigation is the “measures taken in advance of a disaster aimed
at decreasing or eliminating its impact on society and environment” (UN
DHA, 1992) or “the lessening or limitation of the adverse impacts of hazards
and related disasters” (UN ISDR, 2009, p. 8).
For the U.S. government, mitigation is “any action taken to eliminate or
reduce the long-term risk to human life and property from hazards” (U.S.
FEMA, 1999), “ongoing and sustained action to reduce the probability of or
lessen the impact of an adverse incident” (U.S. DHS, 2009, p. 110), or “the
capabilities necessary to reduce loss of life and property by lessening the
impact of disasters” (U.S. DHS, 2011).
For Public Safety Canada (2012), it is the “actions taken to reduce the
impact of disasters in order to protect lives, property, and the environment,
and to reduce economic disruption” (p. 63).
“Mitigation . . . aims at reducing the negative effects of a problem”
(Heerkens, 2002, p. 150).
Consequence Management
Consequence management sounds much like mitigation. The Canadian
government defines consequence management as “the coordination and
implementation of measures and activities undertaken to alleviate the
damage, loss, hardship, and suffering caused by an emergency. Note [that]
consequence management also includes measures to restore essential
government services, protect public health, and provide emergency relief to
affected governments, businesses, and populations” (Public Safety Canada,
2012, p. 17). The British Civil Contingencies Secretariat defines
consequence management as the “measures taken to protect public health and
safety, restore essential services, and provide emergency relief to
governments, businesses, and individuals affected by the impacts of an
emergency” (U.K. Cabinet Office, 2013).
Resilience
Resilience is “the ability of a system, community, or society exposed to
hazards to resist, absorb, accommodate to and recover from the effects of a
hazard in a timely and efficient manner, including through the preservation
and restoration of its essential basic structures and functions. Comment:
Resilience means the ability to ‘resile from’ or ‘spring back from’ a shock”
(UN ISDR, 2009, p. 10).
Resilience is the “adaptive capacity of an organization in a complex and
changing environment” (International Organization for Standardization,
2009a, p. 11).
The U.S. DHS (2009) defined resilience as “the ability to resist, absorb,
recover from, or successfully adapt to adversity or a change in conditions”
(p. 111).
In Britain, “resilience reflects how flexibly this capacity can be deployed
in response to new or increased risks or opportunities” (U.K. Prime
Minister’s Strategy Unit, 2005, p. 38), the “ability of the community,
services, area, or infrastructure to detect, prevent, and, if necessary, to
withstand, handle, and recover from disruptive challenges” (U.K. Cabinet
Office, 2013), or is the “ability of an organization to resist being affected by
an incident” (U.K. MOD, 2011c, p. Glossary-3). Community resilience is
“communities and individuals harnessing local resources and expertise to
help themselves in an emergency, in a way that complements the response of
the emergency services” (U.K. Cabinet Office, 2013).
In Canada, resilience is “the capacity of a system, community, or society to
adapt to disruptions resulting from hazards by persevering, recuperating, or
changing to reach and maintain an acceptable level of functioning” (Public
Safety Canada, 2012, p. 80).
Recently, the World Economic Forum asserted the greater importance of
“national resilience” to “global risks.”
The World Economic Forum chose to break down resilience into three
characteristics (robustness, redundancy, resourcefulness) and two measures
of performance (response, recovery).
Response
The World Economic Forum places response as a part of resilience, but U.S.
emergency management has always separated response as a “phase” of
emergency management before recovery.
Recovery
Continuity might overlap with recovery—although recovery might imply outside
aid, such as by the UN, which would mean that the risk had been transferred.
The UNHCR defined recovery as a focus on how best to restore the capacity
of the government and communities to rebuild and recover from crisis and to
prevent relapses into conflict. The UN ISDR (2009) defined recovery as
“the restoration, and improvement where appropriate, of facilities,
livelihoods and living conditions of disaster-affected communities, including
efforts to reduce disaster risk factors” (p. 9).
The U.S. DHS (2009) defined recovery as
Turn
We could effectively terminate the risk by turning the source or cause in our
favor. For instance, rather than kill the leader of a criminal gang, we could
ally with the gang against another threat or offer a cooperative return to lawful
activities.
The strategy of turning the risk offers more than either terminating or taking
the opportunity because it turns a negative into a positive risk.
We would never want to turn a positive risk into a negative risk, but we
could do so unintentionally, for instance, by upsetting an ally until the ally
turns against us. Moreover, a strategy of turning a negative into a positive
risk could fail and could introduce new risks from the same source. At worst,
an alliance could expose us to a temporary ally that ends up a threat. For
instance, from 2007, the U.S.-led coalition in Iraq chose to pay rents and to
arm various militia or insurgent groups in return for their commitment to stop
attacks on coalition targets; some of these groups helped to combat others
who remained outside of the coalition, but some eventually turned their new
arms on the coalition. The continuing lawlessness and multiple duplicities in
Iraq were permissive of such chaos. Worse still, an alliance could expose us to
a threat that is only pretending to be an ally.
A strategy of turning threats into allies can be tricky too because of
reactions from third parties. For instance, many victims of the criminals
would feel justifiably aggrieved if you offered cooperation with the
criminals without justice for the victims. Some other criminals could feel that
their further crimes would be rewarded by cooperation or feel aggrieved that
you chose not to cooperate with them.
Take
Taking a risk is a deliberate choice to pursue a positive risk, even if negative
risks are taken too. The strategy of taking risk is known elsewhere as a
strategy of pursuing, enhancing, or exploiting positive risks. (The British
Treasury, and thence most of British government, has called the strategy
“taking the opportunity,” but I have found that users conflate its intended
meaning with any strategic response to risk, as in “taking the opportunity” to
do anything but nothing.)
Taking risk could include accepting some potential negative returns, so
long as we are simultaneously pursuing positive risk. For instance, any
speculative risk includes the chance of gaining less than we expected or even
a loss. Taking a risk does not need to mean avoidance of potentially negative
returns, just to mean pursuit of potential positive returns.
Taking the risk would seem obvious if we estimate a large chance of
positive outcomes and no chance of negative outcomes. Nevertheless, many
people take speculative risks despite a much higher chance of loss than of
gains. In fact, against a competent bookmaker, most bets have negative
expected returns, but plenty of people make such bets in pursuit of an unlikely
big win.
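The arithmetic behind that judgment is the expected return from Chapter 3. A hedged sketch with invented odds: stake 1 on an event with true probability 0.15 that pays 5 (including the stake) if it occurs:

    def expected_return(p_win, payout, stake):
        """Expected profit of a bet: win p_win of the time, lose the stake otherwise."""
        return p_win * (payout - stake) - (1 - p_win) * stake

    print(expected_return(0.15, 5.0, 1.0))  # -0.25: a negative expected return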
Even if the chance of positive outcomes is higher than of negative
outcomes, we should still not take the risk if the cost of taking the risk
outweighs the potential positive returns or at least the expected return (see
Chapter 3). Taking the risk may necessitate investments or expenditures of
resources. For instance, many business ventures involve hefty investment in
the hope of future profits or returns on investment. Sometimes investors must
take highly subjective decisions about whether the potential positive returns
outweigh the potential loss of the exposed investment.
Finally, taking the risk may involve unobserved risks. For instance, we
could agree to make a hefty investment after having estimated that positive
returns are highly likely, yet that investment might leave us without reserves
against unrelated negative risks, such as potential collapse in our health or
income while awaiting the returns on our investment.
Transfer
Transferring the risk means that we transfer some of the risk to another actor.
We could pay an insurer for a commitment to cover any negative returns,
hope to sue the liable party through the tort system, rely on charity to cover
our losses, rely on some guarantor to compensate us, share the risk with
business partners, or share risks with contractors.
The most likely alternative to transferring the risk is to retain the risk—
relying on our internal resources to cover negative returns. Retaining the risk
is a form of tolerating the risk, whereas transferring the risk implies that we
cannot tolerate the risk. Retaining the risk makes better sense if we were to
believe that the other actor could not or would not cover our negative returns.
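That retain-versus-transfer choice can be framed as simple expected-cost arithmetic, ignoring risk aversion and the transferee’s reliability. A sketch with invented numbers, purely for illustration:

    premium = 1_200.0   # hypothetical annual cost of transferring to an insurer
    p_loss = 0.01       # hypothetical annual chance of the negative event
    loss = 100_000.0    # hypothetical negative return if the event occurs

    expected_retained_loss = p_loss * loss   # 1,000 per year on average
    print(premium < expected_retained_loss)  # False: retention looks cheaper here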
The subsections below discuss the six main vectors for transferred risk:
insurers; tort systems; charities; guarantors; partners; and contractors.
Insurers
Insurers accept a premium (usually a financial price per period of coverage)
in return for accepting some responsibility for a risk (usually a promise to
pay monies toward the financial costs that you would suffer due to agreed
events).
Insurers cover some risks at standard rates—these standard risks are
easier for the insurer to assess, such as potential harm from road traffic
accidents, home fires, and work-related injuries. Traditionally, most
potential insurees were forced to retain risks associated with war, terrorism,
natural disasters, and other risks that insurers liked to write up as “acts of
God,” either because insurers refused to insure against such risks, given their
greater uncertainty, or because few consumers could afford the premiums (or
because consumers were less trusting of the insurer who would insure
against such risks). In recent decades, particularly since the terrorist attacks
of 9/11 (September 11, 2001), governments have guaranteed insurers against
higher losses associated with these risks, promised to cover the losses
directly, or legislatively forced insurers not to exclude such risks. These
official actions have encouraged insurers and insurees and discouraged
retention of risk. Consequently, take-up of political risk and terrorism
insurance has jumped 25%–50% since 9/11.
However, at the same time, legal disputes between insurer and insuree
over huge losses have discouraged potential insurees in certain domains.
Insurers sometimes disagree with the insured party about whether the insured
is covered against a certain negative return. For instance, the insurers of the
twin towers in Manhattan that collapsed on September 11, 2001, tried to
claim that the source was domestic terrorism (because the terrorists, although
foreign by citizenship and sponsorship, were passengers and hijackers of
planes that had taken off from American airports), which was not covered,
rather than international terrorism, which was covered.
Sometimes insurers find that incoming claims run ahead of their reserves
or their reinsurers’ reserves, so in theory the insuree could be left retaining
all the risk even after paying for insurance, although governments can choose
to guarantee insurers.
Tort System
The tort system is the legal system that allows parties to bring claims of
wrongdoing, harm, or injustice against another party.
The tort system is an uncertain instrument and a negative risk given that a
court could disagree with the claimant’s case against a liable party, leaving
the claimant with legal costs or an order to pay the other party’s legal costs.
The tort system is useless to us if the court is biased against us, the liable
party has insufficient reserves or assets to cover our losses, or the liable
party is able to evade a court’s judgment.
The effectiveness of the tort system varies by time and space. For instance,
the United States has a strong tort system and highly accountable public
services, where service providers and suppliers complain that their
liabilities make business difficult, whereas Britain has a weak tort system
and poorly accountable public services, where consumers complain that
service providers and suppliers are too willing to make promises for which
they are practically not liable (except in terms of lost customers).
Charities
Charities accept risks that otherwise would not be covered by those exposed,
perhaps because they are too poor in material terms or too poorly
represented. Charities often appear in response to crisis. For instance, some
persons chose to manage donations of money in response to Storm Sandy in
the north-eastern United States in October 2012, and others chose to
volunteer their labor. Most international aid is essentially charitable,
although some quid pro quo, such as preferential trade, might be implied.
Charities are the least certain of the actors to which we could transfer our
risks, since they themselves usually rely on voluntary donations and their
reserves tend to be unpredictable and particular in form. Unlike insurers with
whom we contract, charities are under no obligations, so they can choose not
to cover our losses.
Guarantors
Guarantors promise to cover our negative returns. Guarantors could be as
simple as a relative who promises to help out if our business fails or as
authoritative as a government that promises to pay “benefits”/“entitlements”
if we lose our jobs or become too ill to work. Some guarantors effectively
preempt the tort system by promising to pay us compensation if some
representative causes us injury or injustice. For instance, in order to
encourage business, many governments guarantee residents against losses due
to riots, terrorism, or other organized violence.
However, guarantors sometimes choose not to honor their commitments.
Ultimately few governments are beholden to any independent judicial
enforcement of their promises, so any official guarantee is effectively a
political risk. For instance, governments often guarantee to compensate
businesses against mass lawlessness or political violence in order to
encourage business within their jurisdiction, but after widespread riots in
Britain in August 2011, the British government was criticized for its
incremental interpretation of who was eligible for compensation and how
quickly their businesses should be restored, leading eventually to official
clarification of its future interpretation.
Partners
We could persuade another party to coinvest or codeploy—effectively
sharing the chance of lost investment or mission failure. However, a
coinvestor could sue us for false promises or incompetence or some other
reason to blame us disproportionately for the losses—effectively using the
tort system to transfer the risk back to us. Similarly, political partners can
blame each other for shared failures.
Contractors
Contractors who agree to provide us with some service or product
effectively share the risk of their nonperformance, such as a failure to deliver
a service on time or as specified. Contractual obligations may mean nothing
in the event of nonperformance if the contractor does not accept those
obligations, refuses to compensate us for nonperformance, is not found liable
by the tort system, or does not have the reserves to cover our losses.
1. Simple risks, such as home fires, where the causes are obvious, “can
be managed using a ‘routine-based’ strategy, such as introducing a law
or regulation.”
2. Complex risks arise from “difficulties in identifying and quantifying
causal links between a multitude of potential causal agents and
specific observed effects. Examples of highly complex risks include
the risks of failures of large interconnected infrastructures and the risks
of critical loads to sensitive ecosystems.” They “can be addressed on
the basis of accessing and acting on the best available scientific
expertise, aiming for a ‘risk-informed’ and ‘robustness-focused’
strategy. Robustness refers to the degree of reliability of the risk
reduction measures to withstand threatening events or processes that
have not been fully understood or anticipated.”
3. Uncertain risks arise from “a lack of clarity or quality of the scientific
or technical data. Highly uncertain risks include many natural
disasters, acts of terrorism and sabotage, and the long-term effects of
introducing genetically-modified species into the natural environment.”
They “are better managed using ‘precaution-based’ and ‘resilience-
focused’ strategies, with the intention being to apply a precautionary
approach to ensure the reversibility of critical decisions and to
increase a system’s coping capacity to the point where it can withstand
surprises.”
4. Ambiguous risks result “from divergent or contested perspectives on
the justification, severity, or wider meanings associated with a given
threat. Risks subject to high levels of ambiguity include food
supplements, hormone treatment of cattle, passive smoking, some
aspects of nanotechnology, and synthetic genomics.” The “appropriate
approach comprises a ‘discourse-based’ strategy which seeks to
create tolerance and mutual understanding of conflicting views and
values with a view to eventually reconciling them.”
SUMMARY
This chapter has
• defined control,
• given advice on establishing tolerable risks,
• explained the trade-off between intolerance and practicality,
• explained why sometimes tolerable risks are controlled,
• explained why some stakeholders could be working together with
incompatible levels of toleration and controls,
• defined strategy,
• reviewed and aligned existing prescriptions for strategies,
• explained the six rationalized “T” strategies:
tolerate, which includes
watching and
retaining
treat and terminate, which includes
reducing exposure, by any of
deferring,
avoiding,
withdrawing, or
containing,
reducing threatening intent, by any of
reducing the causes of activation,
deterring threats, or
reforming threats,
reducing threatening capabilities, by any of
countering the acquisition of capabilities or
reducing acquired capabilities,
controlling the negative effects of any threatening event, which
includes many official synonyms, including
protection,
preparedness,
contingency planning,
mitigation,
consequence management,
resilience,
response,
continuity, and
recovery,
turn,
take,
transfer, to any of
insurers,
the tort system,
charities,
guarantors,
partners, or
contractors, and
thin (diversify).
• explained how to balance and combine strategies in response to
negative and positive risks,
pure and speculative risks,
unknown and known sources and causes, and
uncontrollable and controllable sources and causes.
QUESTIONS AND EXERCISES
CHAPTER
11
This chapter explains how to record the risks that you have assessed and
how you have managed them, how to communicate information on
security and risk, how to monitor and review how your stakeholders are
managing their security and risk, and how to audit how others manage
their security and risk.
Recording Risk
This section describes why recording information about risks and risk
management is important and often required by higher authorities and how
practically you should record such information.
Requirement
A good process for managing risk should include the deliberate maintenance
of some record (also known as a register or log) of your assessments of the
risks and how you are managing the risks. Such a record is useful for
institutional memory, transparency, accountability, assurance and compliance,
and communication.
Pedagogy Box 11.1 Other Requirements for
Risk Records
Practical Records
In practice, the record is usually a table or spreadsheet of information related
to risk. Today, such information is easily maintained in digital software.
Word processing software would be adequate; users generally use simple
spreadsheet software; more dedicated risk and security managers might
acquire specialized risk management software; some software is
configured for specialized domains, such as financial risks and maritime
risks.
The task is relatively simple technically, but can be burdensome, and many
standards differ on what information should be recorded. Consequently, many
responsible parties fail to complete the task perfectly.
The standards in the public domain differ but generally agree with my
basic specifications.
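In practice, a register row needs only a handful of fields. A minimal sketch; the field names below are illustrative assumptions, not a reproduction of any particular standard’s specification:

    from dataclasses import dataclass

    @dataclass
    class RiskRecord:
        description: str   # the risk, traced to its source or cause
        owner: str         # the accountable manager
        likelihood: int    # e.g., 1 (unlikely) to 3 (likely)
        returns: int       # e.g., 1 (minor) to 3 (severe)
        controls: str      # current or planned controls
        strategy: str      # e.g., tolerate, treat, transfer

        @property
        def level(self):
            """Risk level as the product of likelihood and returns."""
            return self.likelihood * self.returns

    register = [RiskRecord("Office fire", "Facilities", 1, 3,
                           "Alarms, extinguishers", "treat")]
    print(register[0].level)  # 3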
Communicating Risk
Risk records can be used to communicate the risks to other stakeholders, but
they tend to be restricted to the few stakeholders who need or can be trusted
with all the information. Consequently, few owners have released complete
records publicly, except as required by legislation or regulation. The
subsections below describe the requirement for communicating risk and the
alternative ways in which you could visually communicate risks.
Requirement
Communication of risk assessments and information about your management
of the risks is useful for external confidence and transparency and is a
necessary vehicle in compliance with external reviews, monitors, and audits.
• A plan is a piece of paper. Paper does not reduce any risks. Plans
need to be shared, explained, and implemented.
• A good plan today may no longer be appropriate 6 months from
now. If the situation evolves, review the analysis and plans.
• People not familiar with security plans and procedures cannot
adhere to them. All staff and visitors need to be briefed as soon as
they arrive and after any important changes are made.
• Good implementation depends on competencies. The best possible
plan falls apart without the knowledge and skills to implement it.
Some aspects of security management require specialized
knowledge or skills.
• Effective security management depends to a degree on practice.
Practicing—through simulations and training—is vital.
Since August 2006, the British government has published its assessment
of Britain’s terrorism threat level on a 5-point scale (low—an attack is
unlikely; moderate—an attack is possible but not likely; substantial—an
attack is a strong possibility; severe—an attack is highly likely; critical—
an attack is expected imminently). The threat level remained at the fourth
level except for a jump to the highest (fifth) level during short periods
during August 2006 and June–July 2007. This practical constancy was not
useful to public consumers; the British government issued no guidance on
public responses by threat level. In September 2010, threat levels were
separated for “Northern Ireland-related terrorism” and “international
terrorism”; since then, the international terrorist threat level has dropped
as low as the middle (third) level, and the Northern Ireland-related
terrorist threat level has dropped as low as the second level.
Risk Matrices
Risk matrices are the most recognizable ways to communicate risk, thanks to
their simplicity and frequent use, although matrices typically oversimplify the
records on which they are based and often use misleading scales or risk
levels.
Conventional risk matrices are formed by two dimensions representing
returns and likelihood respectively. A fair and transparent matrix should have
symmetrical scales on each dimension and should admit that the risk in each
cell is a product of these dimensions. This mathematical consistency
produces risk levels each with a definite order and clean boundaries. For
instance, Table 11.2 shows a simple risk matrix with symmetrical 3-point
scales on each dimension and with risks calculated as products of the scales.
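Such a matrix is trivially reproducible. A sketch of the symmetrical 3-point version described above, with each cell’s risk level the product of its likelihood and returns scores:

    likelihood = [1, 2, 3]   # e.g., unlikely, possible, likely
    returns = [1, 2, 3]      # e.g., minor, moderate, severe

    for r in reversed(returns):            # highest returns on the top row
        print([r * l for l in likelihood])
    # [3, 6, 9]
    # [2, 4, 6]
    # [1, 2, 3]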
Table 11.4 A Risk Matrix With Asymmetrical Dimensions and Risk Levels
The ISO defines a risk matrix as a “tool for ranking and displaying risks
by defining ranges for consequences and likelihood” (p. 8). The Canadian
government refers to a plot of risks in a space defined by two dimensions
as a “likelihood-consequence graph,” but it refers to an almost identical
plot (where each risk is represented as a two-dimensional cross rather
than an oval) as a “risk event rating scatter plot” (Public Safety Canada,
2013, pp. 6, 54). The British government has referred to the same visual
representations as risk registers—even though that term is used for mere
records too.
Heat Maps
A heat map is a matrix on which we can plot risks by two actionable
dimensions, usually risk owner and risk category. The level of risk can be
indicated by a color (say, green, yellow, and red) or a number (say, from 1 to
3) in the cell formed where the risk’s owner and category intersect. Such heat
maps are useful for drawing attention to those owners or categories that hold
the most risks or largest risks and thus need higher intervention, or those who
hold fewer or smaller risks and thus have capacity to accept more risk.
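The data behind a heat map is just an aggregation over the two actionable dimensions. A sketch with invented owners, categories, and levels:

    from collections import Counter

    # (owner, category, assessed risk level) — hypothetical entries:
    risks = [("Finance", "fraud", 6), ("Finance", "credit", 4),
             ("Operations", "fraud", 2), ("Operations", "safety", 9)]

    cells = Counter()
    for owner, category, level in risks:
        cells[(owner, category)] += level   # a cell's "heat" sums its risk levels

    print(cells[("Operations", "safety")])  # 9: the hottest cell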
Risk Radars
Risk radars are segmented, concentric circles that normally are used to
represent time (circles further from the center represent time further into the
future) and risk categories or risk owners (each segment of the pie would
represent a different category or owner). Risk radars are useful for drawing
attention to risks whose control is most urgent.
Moving outward from the center circle (which normally represents the
present) each concentric circle would represent some interval of time further
into the future (say, an interval of 1 year). Segments or arcs of the concentric
circles can be delineated to differentiate risk by category or owner,
depending on whether the categories or owners were most salient. Each
risk’s level or rank can be represented by a distinct color or number.
Risk Maps
Some risks are usefully plotted in space by their real geographical location.
For instance, if we operate or are based in different areas, the risks to or
security of our operations and bases can be plotted on a map of these areas.
Vehicles or assets in transit move through areas with different risks. Some
security and risk managers, such as coast guards and navies, use software to
monitor the locations of assets as large as oil tankers in real time.
Often we are interested in plotting in space not just our potential targets
but also the other actors, especially hazards and threats and our contributors.
The British MOD has advocated a method (“center of gravity analysis”) for
analyzing actors involved in a conflict. These actors can be plotted on a map,
where each actor is represented by an icon whose color corresponds with
the actor’s assessed relationship with the British (see Figure 11.1). (In this
illustration, the analysis is notional, but reminiscent of coalition operations in
Iraq.)
Sources, causes, and events too can be plotted in space. For instance, the
British MOD has plotted current conflicts on a map of the world, along with the
areas experiencing two or more “stresses” (increased population, decreased
crops, and increased shortages of food and water) (see Figure 11.2).
Requirement
All of the main risk management standards identified in this book specify
monitoring or reviewing as a step in their recommended processes for
managing risk (see Table 8.3), although some are better at specifying what
they mean.
Process
Essentially, a practical process of monitoring or reviewing involves periodic
checking that subordinate security and risk managers are performing to
standard. The periods should be systematized on a transparent, regular
schedule, while reserving the right to shorten the periods and to make
unannounced reviews if necessary.
Usually, the responsibility is on the subordinate to report on schedule and
to include all specified information without reminder. The report can consist
of simply the current risk register. Audiences usually prefer some visual
presentation summarizing what is in the current risk register (see the section
above on communicating). The subordinate could be tasked with answering
some exceptional questions—such questions are normally motivated by
higher concerns or lower noncompliance, in which case the task amounts to
an audit (see the section below on auditing).
Pedagogy Box 11.9 The Structure and
Process of Official Monitoring of British
Government, 2000–2011
The Nimrod Safety Case [by BAE Systems, the current design
authority, monitored by Qinetiq, a defense contractor, and by the
Integrated Project Team at the Ministry of Defense] was fatally
undermined by a general malaise: a widespread assumption by those
involved that the Nimrod was “safe anyway” (because it had flown
successfully for thirty years) and the task of drawing up the Safety
Case became essentially a paperwork and “tick-box” exercise.
(Haddon-Cave, 2009, p. 10)
Monitoring applies not just to procurement projects but also
operational and physical security. For instance, the British MOD requires
its departments to submit annually “assurance reports” on their plans for
business continuity. The MOD’s leadership intended these reports “to
ensure compliance and provide lessons for the improvement of risk
management” (U.K. MOD, 2011c, Chapter 6, p. 3).
In 1998, the political administration in Britain (led by the Labour
Party) published a Strategic Defence Review, but repeatedly deferred
another review (although it did publish a “New Chapter” in 2002 and a
defense white paper in 2003). After the Labour Party lost national
executive power in May 2010, the succeeding Prime Minister used the first
of his traditional annual opportunities for a major speech on security to
highlight his administration’s quick reviews.
Requirement
A risk or security audit is an unusual investigation into how an actor is
managing risk or security. Most standard processes of risk management do
not specify an audit as a separate part of the process, but at least imply that
their advocacy of monitoring and reviewing includes a prescription to audit
where and when necessary.
For instance, in 2009, the United Nations High Commissioner for Refugees
(UNHCR) deployed 17 staff to Pakistan, but on June 9, three staff were
killed by a bombing of a hotel, after which another was abducted for 2
months. Later that year, the UN Office of Internal Oversight carried out the
first audit of UNHCR’s security management; it recommended personnel with
better training in assessment, improved operational strategies, and more
integration of security management into preparedness and response activities.
Audit Questions
When auditing an organization’s capacity for effectively managing risks, we
should ask at least the following questions:
1. Has a full and clear set of standards for the performance of risk
management been assigned?
2. Do personnel value and wish to comply with the standards?
3. Are the personnel trained in the skills and knowledge of the standards?
4. Have clearly accountable authorities been assigned?
5. Are the responsibilities of managers clear and fulfilled?
6. Is a register of the risks being maintained?
7. Has a cycle of reviews been scheduled and fulfilled?
8. Does risk management affect other planning?
SUMMARY
QUESTIONS AND EXERCISES
III
Managing Security in Different
Domains
This (third and final) part of the book explains how to manage security and
risk by the five main domains: operational and logistical security
(Chapter 12); physical (site) security (Chapter 13); information,
communications, and cyber security (Chapter 14); transport security (Chapter
15); and personal security (Chapter 16).
CHAPTER
12
Scope
Operations include all activities that contribute to or support a common goal; in
military lexicons, operations tend to be larger endeavors than missions;
sometimes higher operations are described as strategic, while lower
operations are described as tactical.
Logistics are activities and systems concerned with supply. Other things
may be outside of the control of the operators, but nevertheless relevant to
operations, such as the sites, communications, infrastructure, and transport on
which logistics depends, as described in the following chapters.
Pedagogy Box 12.1 Official Definitions
Operational Risks
Operations can be interrupted by commercial events (such as failures of
supply or income), political events (such as government regulation), crime,
terrorism, insurgency, war, natural events (such as flooding), accidents (such
as fire), personnel actions (such as labor strikes), organizational failures
(such as inept management), or technical failures (such as a failed transport
vehicle).
Operations often offer compound risks that are easy to overlook. For
instance, in unstable areas, we can expect increased rates of crimes and
accidents. If we were to suffer an attack, we should expect negative returns
such as casualties and compound effects such as more accident-prone,
illness-prone, or nonperforming personnel. Accidents are associated with
increased likelihood of illness, which in turn is associated with increased
likelihood of accidents. Operations away from home tend to be more
stressful. Employees who underperform or are incapacitated must be
repatriated and replaced, while communications become less secure as the
area becomes less stable.
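To see how quickly modest risks compound, consider a minimal sketch in Python (the probabilities are invented for illustration, and the independence assumption is a strong one that real compound risks, such as accidents breeding illness, would violate):

# Invented, illustrative probabilities of each interruption over one
# operating period.
risks = {"crime": 0.10, "accident": 0.08, "illness": 0.12, "strike": 0.05}

p_no_interruption = 1.0
for p in risks.values():
    p_no_interruption *= 1.0 - p

# Four modest risks already give roughly a 31% chance of some interruption.
print(f"chance of at least one interruption: {1.0 - p_no_interruption:.2f}")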
Some operational areas offer peculiar or atypical risks. For instance, in
underdeveloped areas, an organization from the developed world might
forget that local health care is riskier. Blood transfusions might not have been
reliably screened for communicable pathogens such as hepatitis virus. The
communications to local hospitals may be more exposed to human threats.
Local health-care workers may be so poorly compensated that they do not work
consistently.
In unstable areas where more transactions are cash based, the
procurement, transport, storage, and distribution of cash create special
opportunities for thieves. If the use of cash cannot be restricted, the cash
should be guarded. The practices for handling cash should be varied so that
threats cannot predict where and when to find cash.
Operations can be interrupted for external reasons that are largely outside
of the control of the operators, although operators must still assess and
control the risks. For instance, on September 16, 2012, Japan announced
that it was purchasing from a private owner some disputed islands, known
as Diaoyu in China and Senkaku in Japan. On September 17, Chinese
protesters attacked Japanese businesses in China. Mazda and Nissan
stopped car production in China for 2 days. Honda suspended production
at two factories in the southern city of Guangzhou and the central city of
Wuhan after Honda’s stores in Qingdao were damaged by arsonists.
Toyota, which was also targeted in the eastern city of Qingdao, said its
factories and offices were operating as normal. Panasonic shut its factory
in Qingdao for a day. Canon temporarily suspended operations at three
plants (at Zhuhai, Zhongshan, and Suzhou).
Operations can be interrupted for largely internal reasons. For instance,
in 2010, suicides of employees at a facility in China owned by Foxconn, a
manufacturer of electronics components, based in Taiwan, drove the
company to install netting to catch jumpers, among other responses.
Pressure from its clients—especially Apple, a supplier of popular
electronic devices—led to raised wages and other improvements. The
Washington-based Fair Labor Association audited Foxconn’s facilities in
China and initially found serious violations of labor standards. In August
2012, it reported that the manufacturer was improving working conditions
ahead of schedule. Yet on September 23, about 2,000 workers rioted.
Production was interrupted until September 25.
In late 2001, the United States and a few allied militaries intervened on
the ground in Afghanistan. The North Atlantic Treaty Organization
(NATO) authorized an allied mission, although most U.S. operations
remained outside of the NATO mission.
Some of NATO’s supplies arrived by train through Russia. Germany
and Spain had reached bilateral agreements with Russia for their military
supplies to fly in via Russia. About one-quarter of supplies were flown
into two air bases (Bagram and Kandahar) in Afghanistan. (By
comparison, only about 5% of military supplies were flown into Iraq
during the occupation there; the rest were shipped into Kuwait, then
transported by land into Iraq.) Each of these bases accepted the largest
U.S. transport planes (C17s), which had capacity for current main battle
tanks and for standard containers of the sort carried on transport ships and
heavy goods vehicles.
Bagram, near Kabul, in the north-east of Afghanistan, serviced mostly
U.S. forces. Kandahar, in southern Afghanistan, was a stage in the air
bridge to Camp Bastion, the base for all British, a minority of U.S., and
most other NATO forces.
Southern Afghanistan was not secure for ground transport, so forces
based in Bastion and further north in Helmand province were practically
dependent on air transport. NATO forces generally transported supplies
from Kandahar into Bastion by C130 aircraft (a medium-class transport
aircraft). At that time, British forces alone received 250 tons of
ammunition every month by air. From Bastion, supplies were transported
by helicopters to further operational bases or by locally contracted trucks
to closer sites. Operating bases were widely separated and generally
stocked for 20 to 30 days of normal consumption.
In 2008, 70%–75% of the coalition’s military supplies were shipped to
Karachi in Pakistan, then transported by land into Afghanistan via five
border crossings. These crossings were widely separated. Supplies
travelled 400 miles through Quetta to the border crossing into southern
Afghanistan on their way to Kandahar. Other supplies travelled 1,100
miles through Peshawar and the Khyber Pass on their way to Kabul.
In November 2008, the Taliban (a term used widely for insurgents in
Afghanistan and Pakistan, with the implication of Jihadi motivations)
dramatically increased the frequency of their attacks on or hijacks of U.S.
and NATO ground convoys passing through the Khyber Pass. Truck
crossings at Torkham, the main crossing site, 3 miles west of the summit
of the Khyber Pass, fell from an average of 800 per day to 200 for a
while. On December 2, attackers ambushed 22 trucks in Peshawar, inside
Pakistan. On December 6, they attacked NATO’s depot near Peshawar,
leaving 145 NATO vehicles, trailers, and containers burnt out. On
December 8, attackers destroyed about 50 NATO containers at a supply
depot near Peshawar.
NATO and U.S. security in Pakistan was not helped by increasingly
tense relations between the United States and Pakistan during a return to
mostly democratic government and during increasingly frequent and
deadly secret U.S. air strikes against insurgent or terrorist targets inside
Pakistan.
Unable to satisfactorily secure the existing routes and nodes on the
ground in Pakistan, NATO negotiated additional routes through other
neighbors. The shortest route would be through Iran from the Persian Gulf
into south-western Afghanistan, but Iran was not friendly or trustworthy to
NATO. Supplies could be shipped through the Black Sea to Georgia,
carried by railway through Georgia and Azerbaijan to Baku on the
Caspian Sea, then shipped to Turkmenistan, before trucks took over for
the long ground journey into northern Afghanistan. To the north of
Turkmenistan, Uzbekistan was less friendly. Further north, Kazakhstan
was friendlier but had no border with Afghanistan. Tajikistan shared a
border with north-eastern Afghanistan but was itself landlocked. Russian
tolerance, if not support, was critical to the cooperation of any of these
neighbors.
By March 2009, NATO, the United States, and Pakistan had sweetened
relations and improved the security of ground transport through Pakistan:
NATO was supplying 130 to 140 containers per day through Pakistan,
more than demand. In 2009, the United States opened a northern
distribution network, involving Azerbaijan, Turkmenistan, Russia, and
China, with a planned capacity of 100 containers per day. Meanwhile, the
U.S. Army Corps of Engineers and its contractors were constructing new
roads in Afghanistan (720 planned miles in 2009, 250 to 350 in 2010),
including a highway north from Kabul through Baghlan and Kunduz
provinces, thereby improving communications with Tajikistan and
Uzbekistan.
1. Identifying the sources (hazards and threats, which may include the
stakeholders or targets of the operations). Many threats, such as
thieves, saboteurs, vandals, and corrupt officials, are easy enough to
profile, but terrorists, kidnappers, blackmailers, and corrupt
governments are more agile and need more specialized assessments,
particularly in foreign cultures.
2. Assessing the likelihood of hazards being activated as threats,
3. Assessing the intents and capabilities of the threats, and
4. Identifying operational exposures and vulnerabilities to those intents
and capabilities.
In the early 1990s, UN field operations and staff proliferated, and more
staff were harmed by malicious attacks. At that time, the UN’s highest
authority on security was the UN Security Coordinator, a senior manager.
In 1994, the UN created its first handbook on field security—
effectively a policy document. A Security Operations Manual followed
in 1995 as the more practical guidance for the actual security managers.
The Field Security Handbook was modified in May 2001.
The terrorist attacks of September 11 encouraged the General
Assembly to vote for reform: effective January 1, 2002, the Security
Coordinator was elevated to an Assistant-Secretary-General; and the Ad-
Hoc Inter-Agency Meeting on Security Matters was replaced by the
Inter-Agency Security Management Network (IASMN), consisting of the senior
security managers from each agency, chaired by the Security Coordinator.
The IASMN met only once per year to review practices; the rest of the
time the Office of the Security Coordinator managed security, except that
Security and Safety Services managed security at the headquarters
building in New York, the Department of Peacekeeping Operations
managed the security of civilian staff in peacekeeping operations, and the
different national military components of peacekeeping operations each
managed their own security.
Meanwhile, the General Assembly authorized an independent panel to
review the UN’s security management. It was about to report when the UN
Assistance Mission in Iraq suffered two quick suicide bombings: on
August 19, 2003, 5 days after the establishment of the office in Baghdad,
when 22 staff and visitors died, including the UN Special Representative,
and more than 150 were injured; and on September 22, when a UN guard
and two Iraqi policemen died. The UN withdrew the mission (600 staff); a
smaller mission returned in August 2004.
The Independent Panel on the Safety and Security of United Nations
Personnel reported on October 20, 2003:
The attacks are signals of the emergence of a new and more difficult
era for the UN system. It is of the utmost importance for UN
management and staff to recognize the extent to which the security
environment of the UN is changing. Already, parties to hostilities in
numerous conflicts are targeting civilians in order to draw military
advantages, in violation of the most basic principles of international
humanitarian law. In several instances, staff members of the UN and
other humanitarian agencies have been victims of targeted attacks for
their role in assisting these civilians. The bombings in Baghdad
differ from these previous attacks not so much for having targeted the
UN, but for having done so by using abhorrent tactical means and
military-scale weapons. These characteristics, added to the potential
links to global terror groups, are significant developments that the
UN needs to factor into its security strategy.
The Panel recommended clearer and more robust responsibilities and
practices, including better methods for assessing risk and security. In
December 2004, the General Assembly agreed to establish an Under-
Secretary-General for Safety and Security, who would lead a Department
of Safety and Security in place of the Office of the Security Coordinator,
the Security and Safety Services, and the Department of Peacekeeping
Operations’ responsibility for the security of civilian staff in the field.
In the field, the most senior UN official (either the Head of Mission or
the Special Representative of the Secretary General) is the highest
security officer, to whom the Chief Security Adviser reports. That person
is responsible for the Security Management Team, which issues minimum
operating standards, with which the mission’s civilian and support
components comply and the mission’s military and police components are
supposed to coordinate. Special attention is given to coordinating the
many partner organizations (sometimes thousands per mission) (UN,
2006; UN Department of Peacekeeping Operations, 2008, pp. 79–80).
The Department of Safety and Security was established effective
January 1, 2005. Its Threat and Risk Unit, which is responsible for the
development of risk assessment methods and for supporting risk
assessments in the field, developed a Security Risk Management process,
which includes a Security Risk Assessment process, which in turn
includes two main parts: situational analysis and threat analysis. The
IASMN approved these processes in 2005.
The Field Security Handbook was revised again, effective January
2006, but this lengthy and dense document (154 pages) included no
definitions of security or risk and no methods for assessing either security
or risk, although it included two paragraphs that specified a “threat
assessment . . . before an effective security plan can be prepared” and
referred readers to the Security Operations Manual.
The Threat and Risk Unit published its integrated version of the
“Security Risk Assessment” process on November 29, 2006. In January
2007, the Staff College and the Department of Safety and Security started
training staff to train other personnel in security management.
UN Security Risk Assessment starts with situational analysis, which
means identifying the threats and their behaviors and capabilities. The
threat analysis develops likely scenarios for attacks by the threats. These
scenarios are supposed to clarify the UN’s vulnerabilities, which then
would be mitigated. The scenarios are developed through several
matrices (the threat matrix method) that are more complicated than they
need to be.
The matrix begins with a list of the likely scenarios alongside
descriptions of each scenario; columns are added to assess (on a scale
from 1 to 5) the “simplicity” or “feasibility” of the threat’s
implementation of the scenario, the availability of the weapons that would
be required (again on a 5-point scale), the past frequency of such a
scenario (5-point scale), the anticipated casualties (standardized to a 5-
point scale, from two or less staff to hundreds), and an assessment of the
UN’s existing controls (“mitigations”) on each scenario (5-point scale,
from no controls to fully controlled). The matrices had all sorts of other
columns that inconsistently explicated some of the codings.
The method was supposed to produce a single matrix (“system
analysis”), listing the scenarios alongside five columns for each of the
five assessments (simplicity, availability of weapons, frequency,
casualties, controls). The simplest and fairest output would have been a
sum of the five assessments by scenario. For no good reason, the method
expected users to rank by ordinal scale the codings in each column, sum
the rankings across the five columns for each scenario, and superfluously
express the sum as a percentage, before ranking each scenario on an
ordinal scale.
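For illustration, a minimal sketch in Python of the simpler output recommended above: sum the five 1-to-5 codes per scenario and rank by the sum. The scenarios and scores are hypothetical, and each scale is assumed to be oriented so that a higher code means higher risk (for example, 5 for no existing controls):

# Hypothetical scenarios, each scored 1-5 on the five columns described
# above: simplicity, availability of weapons, past frequency, anticipated
# casualties, and existing controls.
scenarios = {
    "roadside ambush":  (4, 5, 3, 3, 2),
    "suicide bombing":  (2, 3, 2, 5, 3),
    "office intrusion": (5, 5, 4, 1, 4),
}

# The simpler output: rank scenarios by the plain sum of the five codes,
# with no intermediate ranking or percentages.
for name, scores in sorted(scenarios.items(), key=lambda kv: sum(kv[1]),
                           reverse=True):
    print(f"{sum(scores):>2}  {name}")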
In December 2007, a UN site in Algiers was attacked. The Steering
Committee of the High-Level Committee on Management constituted an
Operational Working Group to review the UN Security Management
System. It developed a new Security Level System to replace the security
phase system by January 2011.
Scope
Operational security is freedom from risks to operations. Inherently,
operational security improves when the sources of the risks (the hazards and
threats) are removed, our exposure and vulnerability decline, or we control
the negative returns from a potential threatening event.
SUMMARY
QUESTIONS AND EXERCISES
13
This chapter covers site security (also: physical security). The sections
below define site security, review site risks, describe site security
services, give advice on site location, describe how to control access to
sites, describe passive perimeters, describe surveillance and counter-
surveillance, and explain security engineering (protective materials and
construction).
Scope
Site security is the security of a defined space (the site). The North Atlantic
Treaty Organization (NATO) and the U.S. Department of Defense (DOD)
(2012b) define physical security as “physical measures designed to
safeguard personnel, to prevent unauthorized access to equipment,
installations, material, and documents, and to safeguard them against
espionage, sabotage, damage, and theft.” The U.K. Ministry of Defense
(MOD) (2009b) defines physical security as “that part of National Security
that relates to national assets and infrastructure” (p. 6). The Humanitarian
Practice Network (2010) defines a site as “the real estate that the agency
uses on a regular basis, notably offices, residences, and warehouses” (p.
181).
Site Risks
Some sites are valuable in themselves or accommodate other things of value.
For instance, even desolate land in a downtown area is valuable depending
on allowable potential uses. Illegal occupancy or contamination of such land
would lower its value or prevent the owners from exploiting the value, so
owners are forced to take measures to control access to valuable land even
when desolate.
Sites can be temporary, such as when operators must set up temporary
camps during emergencies or humanitarian operations. These camps tend to
be close to the threat that displaced the persons in the first place, so
sometimes the camps are exposed to travelling threats, such as a flood or any
human threats who want to harm the displaced groups. Some operations must
move camp daily for a period when threats are in pursuit or the environment
is intolerably hazardous due to natural hazards, unexploded ordnance, etc.
Built structures represent sunk costs and critical value to wider
operations. At the same time, built structures are vulnerable to destruction by
sustained vandalism or occasional arson or explosive devices. More
valuable material parts, such as copper pipes, attract thieves.
Structures accommodate valuable stores or equipment, or secret or
proprietary information that attract thieves or vandals. Structures
accommodate operational activities and associated personnel and thus attract
threats intent on interrupting operations or punishing operators.
The activities or resources at different sites may represent critical nodes
on which wider systems depend. For instance, a production site likely
depends on a logistics site, a power supply, the residences where personnel
live away from work, the transport vehicles that carry supplies to consumers
and users or carry personnel from residences to offices, and the infrastructure
on which transport vehicles travel.
Access Controls
This section describes controls on access to a site, beginning with a
subsection on the scope of such controls, followed by subsections on
assessing access controls, known bypasses of access controls, guards,
emergency services and quick reaction forces, gates, and emergency refuges.
Scope
Access controls are attempts to manage the entry of actors or agents into
some domain, for instance, by demanding identification of permitted persons
before entry into a building. Most physical security and crime prevention
within a site depends on preventing unauthorized access or use. Access
controls also can be used to prevent unauthorized persons or items from
leaving. Most thefts from sites seem to be by authorized users of the site, not
unauthorized visitors. In case a visitor commits a crime while on site, a
controlled exit gives an opportunity to at least record information about the
time of exit that would be useful to the investigation.
Analytically, all access controls are attempts to control exposure (see
Chapter 5). Perimeter barriers include walls, fences, or ditches that restrict
access from the outside to the inside. The perimeter has points of controlled
access, where the visitor must negotiate a human guard, lock, or identity
check before access. If these access controls were to work perfectly in
preventing all threats from access to the inside, nothing on the inside would
be exposed to threats unless it were to leave the perimeter. Thus, given
perfect access controls, only the perimeter and any assets on or outside the
perimeter would need to be defended. Good access controls are more
efficient than whole-of-site invulnerability.
Guards
Guards are the personnel tasked with controlling access. Guards, at least
those working around the perimeter, are more exposed than are personnel
within the perimeter, so guards need different capabilities. Consequently, the
guards could be justifiably equipped with arms and armor against the threats,
while the personnel on the inside would not need the same equipment so long
as the access controls work perfectly.
Resources and laws often restrict the capabilities that can be acquired,
carried, and used, and any acquisition implies additional training and
liabilities. Portable arms include sticks or batons, irritant chemical sprays,
weapons that discharge an electrical charge, and handguns. Attack dogs are
common alternatives to carried weapons. Portable defensive technologies
include vests and helmets made with materials resistant to kinetic attack.
Cheap barrier protections against communicable diseases and toxic materials
include latex or rubber gloves and facemasks. Kinetic-resistant materials
would be required against sharp materials and fire-resistant materials against
incendiary devices. Short sticks are useful for inspecting the contents of bags
and vehicles. Extended mirrors are useful for viewing the underside and
interior compartments of vehicles. Systems for inspecting other personnel
include magnetic metal detectors, chemical explosive detectors, and
Explosives Detection Dogs.
Good access controls do not necessarily deter malicious actors with
strong intent, but at least keep the attacks on the perimeter. A successful
control on access prevents exposure of the interior and its assets and
persons, but other areas, assets, or persons must expose themselves to the
threat in order to perform the control. So long as the guards are performing
their role, the guards are more exposed but less vulnerable; the personnel on
the inside are more vulnerable but less exposed. For instance, on April 16,
2010, a Republican terrorist group forced a taxi driver to drive a vehicle
with an explosive device up to the gate of a British military base (Palace
Barracks in Holywood, Northern Ireland), where it detonated, causing
considerable destruction and few injuries, fortunately without killing anyone.
The perimeter’s controllers remain exposed so long as they continue to
control access (and do not run away). Consequently, many guards are harmed
while preventing unauthorized access. For instance, on August 15, 2012, an
unarmed guard was shot in the left forearm while apprehending an armed
man attempting to enter the Family Research Council in Washington, DC,
with admitted intent to kill as many staff as possible. On February 1, 2013, a
suicide bomber detonated inside a built access point for staff of the U.S.
Embassy in Ankara, Turkey, killing himself and a Turkish guard, but nobody
inside the perimeter.
Unfortunately, absent perfect diligence during the procurement of guards
and perfect leadership of guards, some guards tend toward inattention or even
noncompliance, sometimes for corrupt reasons. For instance, all of us have
probably experienced a guard who did not pay proper attention to our
credentials before permitting our access. More alarmingly, some smuggling
is known to occur via airport personnel who take bribes in return for placing
items in aircraft without the normal controls. The human parts of the counter-
surveillance system need careful monitoring and review (see Chapter 11).
Consequently, much of day-to-day physical security management can seem
more like personnel management and leadership.
Too often, however, guards are ineffective because they are untrained,
poorly instructed, poorly paid, poorly equipped, and poorly managed. It
is not uncommon to find a bed in the guardhouse of aid agency
compounds, virtually guaranteeing that the guard will fall asleep on
duty. During the day guards might be busy doing other things, and may
be distracted. When hiring guards, provide clear terms of reference and
make these a part of the contract. (Humanitarian Practice Network,
2010, p. 188)
Guards are hazards in the sense that they could become threats, perhaps
activated with the weapons and the access provided to them for their work,
perhaps colluding with external threats. In theory, properly vetted guards are
trustworthy guards, but vetting is imperfect and should include periodic
monitoring in case the initial conditions change. Responses to distrust of
guards should include banning the guards from the interior of the perimeter,
close supervision, random inspections, and covert surveillance of their
activities, perhaps including their activities while off duty.
Site managers might reject local guards as untrustworthy or just for
difficulties of linguistic or cultural translation. Multinational private security
providers proliferated during the rapid growth in demand in the 2000s,
particularly in Iraq. Providers might not have any loyalties to local threats
and might boast impressive prior military experience, but they might offer
new negative risks arising from insensitivity to local culture and a heavy
military posture.
Local security personnel are usually cheaper and more available than
foreign personnel, but their effectiveness might not justify their cheapness.
For instance, in 2012 the U.S. Mission in Benghazi, Libya, had agreed that
a local militia (the 17th February Martyrs Brigade) would provide guards
and a quick-reaction force.
Although the February 17 militia had proven effective in responding
to improvised explosive device (IED) attacks on the Special
Mission in April and June 2012, there were some troubling
indicators of its reliability in the months and weeks preceding the
September attacks. At the time of Ambassador Stevens’ visit,
February 17 militia members had stopped accompanying Special
Mission vehicle movements in protest over salary and working
hours. (Accountability Review Board for Benghazi, December 18,
2012)
Gates
Gates are the switchable barriers at the access control point. Gates are active
barriers in the sense that they can switch between open and closed. By
default (at least during higher risk periods), gates should be closed; if a
visitor is permitted entry, the gate would be opened. This seems like an
obvious prescription, but gates are often left open because of complaints
about disruption to traffic or simple laziness on the part of the guards,
rather than because the risk level justifies leaving them open.
The subsections below consider portable, antivehicle, multiple, and
containment gates.
Portable Gates
In an emergency, a gate can be formed with available portable materials such
as rocks or drums or trestles that can be moved by the guards. Vehicles and
human chains can be used as gates, but these are valuable assets to expose.
Sometimes gates are extremely portable, such as cable or rope, which are
adequate for indicating an access control, but are not substantial enough to
stop a noncompliant visitor. More repellent but still portable materials
include barbed wire and razor wire, mounted on sticks so that the guards can
handle them without harm.
Sometimes, when the risk is sufficiently elevated or guards or gate
materials are short, the sites of access are closed and perhaps reinforced
with more substantial materials, in which case they act effectively as passive
barriers (see below).
Vehicle Gates
In prepared sites, gates normally consist of barriers that can be raised and
lowered or swung out of the way on hinges or that slide on rails. More
substantial barriers to vehicles could be formed by filling drums or boxes
with earth, rocks, or concrete, while keeping the item light and handy enough
to be removed whenever the guards permitted. As portable vehicle barriers,
guards could deploy spikes: a caltrop is easily improvised with four spikes
so that at least one spike presents upward however it falls; some systems
consist of a belt of spikes that collapses into an easy package when not
needed. One-way exits can be created with spikes that are hinged to fall
away when a vehicle exits but to present if a vehicle attempts to enter. These
spikes would disable most pneumatic tires, but some vehicles have solid
tires, pneumatic tires with solid cores, or pneumatic tires designed to deflate
gracefully, permitting the vehicle to run for some distance (more than enough
distance for a suicidal driver to reach any target within a typical perimeter).
More substantial vehicle gates consist of bollards or plates that are raised in
front of the visiting vehicle and are retracted when permitted.
Multiple Gates
Some access control points must include several gates or gates with controls
on more than one type of threat. For instance, a gate might be specified to be
robust enough at a low height to stop an energetic vehicle from crashing
through, but also tall enough to discourage a pedestrian from climbing over
the gate. Sometimes gates are placed in series, so that the visitor must
negotiate one gate successfully before the visitor is permitted entry through
the next gate. The traffic is sometimes slowed by passive barriers, such as
bumps in the road or fixed vertical barriers staggered across the road, which
the vehicle must negotiate at slow speed—these passive barriers slow traffic
in case guards need time to close a gate or take offensive action against the
driver. However, such measures add costs and inconvenience, so may not be
justified when the risks are low.
Containment Areas
Sometimes designers must consider developing multiple gates into a
containment area within which visitors, of all types, can be contained before
further access is permitted. Containment areas should be established in the
areas between where people, supplies, and mail arrive and where they are
accepted inside the perimeter. Supplies often arrive in large packages that
could contain hidden threats; the smallest mailed packages could contain
hazardous powders or sharp materials. Moreover, these items usually are
transported within large vehicles that themselves could contain threats. For
security reasons, delivery areas should be remote from the interior, but for
commercial reasons the delivery area should be close to the point of demand.
An attractive compromise is a dedicated area for deliveries, close to but
outside the perimeter, so that vehicles can make their deliveries without the
delays and transaction costs associated with accessing the interior. The
delivery or unloading area should be a contained area in which items can be
inspected for threats before being allowed into another area where they would be
sorted and labeled for their final destination within the perimeter. The
interface between the contained delivery area and the main sorting space
should be strengthened against blast or forced human entry and provided with
access controls. For small, quiet sites, perhaps a single room would be
sufficient for storage of deliveries at peak frequency.
Emergency Refuges
Within the perimeter could be constructed a further controlled area for
emergency shelter from threats that breach the perimeter. Such areas are
sometimes called refuges, citadels, safe havens, panic rooms, or safe
rooms. They are differentiated by extra controls on access, with few access
points that are controlled solely from within.
Some safe areas are supplied in prefabricated form, the size of a standard
shipping container for quick and easy delivery to the user.
In theory, a safe area could be constructed anywhere with normally
available construction materials to a standard that would disallow entry to
anybody unless equipped with substantial military-grade explosives.
Passive Perimeters
This section describes the passive perimeters between the access control
points. The subsections below define passive perimeters and describe the
material barriers, human patrols and surveillance, and sensors.
Scope
The perimeter is the outer boundary of a site. The perimeter is controlled
more passively than the access points, usually in the sense that the more
passive part implies less human interaction and more material barriers.
Passive perimeters can be designed to block vehicles, persons, and animals
(rogue and diseased animals are more common in unstable areas) and can be
equipped with sensors that alert guards to any attempt to pass the barrier.
Material Barriers
Natural barriers include dense or thorny vegetation, rivers, lakes, oceans,
mud, and steep ground. Even a wide expanse of inhospitable terrain can be
considered a barrier. Indeed, most international borders have no barriers
other than natural barriers.
Discrete artificial barriers include ditches, fences, walls, stakes, wire, and
stacked sandbags, rocks, or earth. Artificial barriers include more deliberate
weapons, such as sharpened stakes, metal spikes, broken glass, hidden holes,
landmines, and even toxic pollutants, biological hazards, and pathogens.
The U.S. DOD (2008, pp. 4–5.3.2.1.2) prescribes a chainlink fence of
2.75 inch diameter cables, taller than the average man, with barbed wire
strands at top, as a barrier against pedestrians. As barriers against vehicles,
it prescribes either (see the sketch after this list)
• concrete bollards (no more than 4 feet apart, each 3 feet above ground
and 4 feet below ground in a concrete foundation, each consisting of 8
inches diameter of concrete poured inside a steel pipe 0.5 inch thick)
or
• a continuous concrete planter (3 feet above ground, 1.5 feet below
ground, 3 feet wide at base, with a trough at top for planting vegetation
as disguise or beautification).
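As a rough illustration of what the bollard prescription implies for a given frontage, the following Python sketch assumes the 4-foot limit is measured center to center; the frontage in the example call is invented:

import math

def bollards_needed(frontage_feet, max_spacing_feet=4.0):
    # Posts at both ends of the frontage, plus enough between them to keep
    # every gap within the quoted maximum spacing.
    gaps = math.ceil(frontage_feet / max_spacing_feet)
    return gaps + 1

# A hypothetical 100-foot frontage needs 26 bollards at 4-foot spacing.
print(bollards_needed(100.0))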
Surveillance
Surveillance is systematic observation of something. Surveillance is an
activity open to both malicious actors and guards. Malicious actors often
survey potential targets before choosing a target, then survey the target in
order to plan an attack, and survey the target again in order to train the
attackers. In turn, guards should be looking to counter such surveillance.
Counter-Surveillance
Counter-surveillance is “watching whether you are being watched”
(Humanitarian Practice Network, 2010, p. xvi). Energetic guards, by
discouraging loitering or suspicious investigation of the site’s defenses, can
disrupt malicious surveillance before the attack could be planned. For
instance, in March 1999, the U.S. State Department’s Bureau of Diplomatic
Security introduced the concept of “surveillance detection teams” at most
diplomatic posts. These teams look for terrorist surveillance of diplomatic
sites and operations (U.S. GAO, 2009, p. 13).
Also, surveillance can be useful to the investigation after an attack.
Television cameras that record images of a site are less of a deterrent than guards
but more useful for investigators because their images are usually more
accurate than human memories and more persuasive as evidence during
criminal prosecutions.
Be careful not to underestimate the audacity of threats to the most secure
sites. For instance, in September 2010, British officials ordered trees to be
cut down around the headquarters in Northern Ireland of the British Security
Service (MI5), inside Palace Barracks, a secure military site. Four
surveillance cameras had been found hidden among the tree branches. On
September 26, 2012, Irish police arrested two men suspected of spying on
the operational headquarters in Dublin of the Irish police (Garda Siochana).
One of the suspects was recognized as a known member of the Real Irish
Republican Army, a terrorist group, by police officers passing through the
hotel opposite the headquarters. Police officers searched the suspect’s hotel
room, where they found parabolic microphones and digital cameras.
The Humanitarian Practice Network (2010, p. 194) describes five steps in
the typical attack on a site:
1. Initial target selection
2. Preattack surveillance
3. Planning the attack
4. Rehearsing the attack
5. Executing the attack
Surveillance Technologies
Surveillance technologies can be as simple as binoculars or even the optical
sights on weapons (although the latter can be provocative). Remote
television cameras allow guards to more efficiently and securely watch sites.
Radar devices can be used to track vehicles out of visual range. Range-
finders can be used to measure the ranges of targets within line of sight.
(Range-finders can be as simple as hand-held prismatic or laser-based
instruments.) Unmanned aerial vehicles (UAVs) can be launched to film
targets further away. Earth-orbiting satellites also can take images.
Most of these technologies are available in cheap and portable forms for
the poorly resourced user or for temporary sites. A small robotic camera, a
remote control interface, and display screen can be packaged inside a
briefcase. Radar devices can be packaged inside something the size of
standard luggage, yet still offer a range of miles. Prismatic range-finders
are cheap and satisfactory for most uses short of long-range gunnery; laser
range-finders are more accurate. UAVs too can be small enough to be carried
and launched from one hand. Commercial satellites can provide images to
any private client.
Passive barriers can be made more active if equipped with sensors, such
as video cameras or trip wires or lasers that detect movement, or infrared or
thermal sensors that detect body heat, although all motion or heat sensors can
be triggered by false positives such as harmless animals. Still, a guard dog
the best combined sensor/nonlethal weapon available to the human guard on
foot.
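A short, hypothetical arithmetic sketch (in Python) shows why such false positives matter at scale; the rates are invented for illustration:

sensors = 20                  # motion or heat sensors along a perimeter
false_alarms_per_hour = 0.05  # per sensor: animals, wind, stray heat sources
hours = 12                    # one night shift

# On average one false alarm per hour, every hour; guards must triage them all.
print(sensors * false_alarms_per_hour * hours)  # 12.0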
Surveillance of Communications
Scanning for two-way radio traffic near a site makes sense because the short
range of such traffic and the lack of a service provider imply irregular
motivations. Organized actors often use two-way radio communications
during surveillance and attacks: cheap and portable scanners can search
likely bandwidths for traffic, although a linguist too might be required to
make sense of foreign languages. However, the threats could use coded
signals to defeat even a linguist.
Scanning for private telephone traffic is technically possible but is more
legally restricted and implies more false positives. An organization usually
faces few restrictions on tapping the e-mail systems, telephones, and radios
that the organization itself provides to employees or contractors. In fact,
organizations often covertly intercept employee e-mails in search of
noncompliant behaviors. Judicial authorities with probable cause can seek
legal powers to seize any communications device.
Malicious actors can avoid such observation by avoiding the
organization’s communications systems. Indeed, terrorists now routinely use
temporary e-mail accounts, unsent draft messages with shared access,
temporary cell/mobile telephones, text messages that are deleted after one
reading, coded signals, and verbal messages to avoid surveillance of their
communications (see Chapter 14).
Security Engineering
Built structures accommodate people, stores, equipment, and operational
activities, and thus are part of the material protective system at a site. The
construction of buildings and other structures to be more secure is a technical
area sometimes termed security engineering.
The subsections below describe how to protect the site materially through
setback or stand-off construction, blast barriers, barriers to kinetic attack,
and protective glass.
Setback or Standoff
The most effective way to materially improve invulnerability within the
perimeter is to expand the distance between the perimeter and the site’s
assets, such as the buildings, although this ideal is often frustrated by urban
constraints, material limitations, or desires for accessibility.
The effectiveness of setback can be appreciated from the many recent
attacks on sites using substantial explosive devices outside the perimeter
without killing anyone inside the perimeter (such as the van bomb that
detonated on the road outside the military headquarters in Damascus, Syria,
on September 25, 2012), compared to similar devices that penetrated the
target building in the days of lax attention to setback (on October 23, 1983, a
suicide vehicle-borne explosive device exploded inside the lobby of the U.S.
Marine Corps barracks in Beirut, killing 242).
The stand-off distance can be engineered simply with winding access
routes or access controls along the access route. Passive barriers can be used
to increase this stand-off distance. Passive barriers to vehicles can be as
unobtrusive as raised planters for trees or benches, as long as they are
substantial enough to defeat moving vehicles. Setback can be achieved
urgently by closing roads outside the perimeter. For instance, after Egyptian
protesters invaded the U.S. Embassy in Cairo on September 11, 2012,
Egyptian authorities stacked large concrete blocks across the ends of the
street.
The U.S. DOD (2012a) recommends at least 5.5 meters (18 feet) between
structures and a controlled perimeter or any uncontrolled vehicular roadways
or parking areas, or 3.6 meters (12 feet) between structures and vehicular
roadways and parking within a controlled perimeter. Reinforced-concrete
load-bearing walls could stand as little as 4 meters (13 feet) away from
vehicles only if human occupancy of the building was relatively light and the
perimeter were controlled. (An “inhabited” building is “routinely occupied
by 11 or more DoD personnel and with a population density of greater than
one person per 430 gross square feet—40 gross square meters”.) The same
walls would need to stand 20 meters (66 feet) away from the perimeter in the
case of primary gathering buildings or high occupancy housing. (Primary
gathering buildings are “routinely occupied by 50 or more DoD personnel
and with a population density of greater than one person per 430 gross
square feet”; “high occupancy” housing is “billeting in which 11 or more
unaccompanied DoD personnel are routinely housed” or “family housing
with 13 or more units per building”.)
The DOD issues different standards for stand-off distances by building
material, occupancy, and perimeter. For instance, containers and trailers that
are used as primary gathering places during expeditionary operations are
supposed to stand at least 71 meters (233 feet) away from a perimeter,
although fabric-covered structures (which generate fewer secondary
projectiles under blast) could stand 31 meters (102 feet) from the perimeter.
The DOD no longer endorses a previous standard (2007) that physical
security managers might remember in an emergency, if they were unable to
access the current standards: high occupancy and primary gathering buildings
were supposed to stand 25 meters or no less than 10 meters (33 feet) away
from vehicle parking or roadways or trash containers within the perimeter;
other “inhabited” buildings were supposed to stand 25 meters away from an
uncontrolled perimeter and 10 meters away from vehicles and trash
containers within the perimeter; and low-occupancy buildings were allowed
to be closer still.
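To make the current figures quoted above easier to apply, here is a minimal Python sketch of a lookup over them; the category labels are this sketch's own shorthand, not DOD terminology, and real decisions should consult the standards themselves:

MIN_STANDOFF_METERS = {
    ("inhabited", "controlled perimeter or uncontrolled roads/parking"): 5.5,
    ("inhabited", "roads/parking inside a controlled perimeter"): 3.6,
    ("primary gathering or high occupancy", "perimeter"): 20.0,
    ("expeditionary container/trailer, primary gathering", "perimeter"): 71.0,
    ("expeditionary fabric-covered, primary gathering", "perimeter"): 31.0,
}

def standoff_ok(category, boundary, actual_meters):
    # Compare an actual distance against the quoted minimum, if one is
    # listed for this combination.
    required = MIN_STANDOFF_METERS[(category, boundary)]
    return actual_meters >= required

print(standoff_ok("primary gathering or high occupancy", "perimeter", 25.0))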
The Department of State’s standard, since 1985, for diplomatic buildings
is 30 meters (100 feet) setback, although many diplomatic buildings
remain substandard due to the constraints of available land and demands for
higher occupancy in dense central urban areas. A full U.S. embassy requires
about 15 acres of land even before setback is factored in.
Where setback is not possible, buildings need to be hardened above
standard, which is very expensive, or the standards are waived. For instance,
in 2009, U.S. officials sought waivers to State Department standards in order
to build a consulate in Mazar-e-Sharif, in northern Afghanistan. The officials
signed a 10-year lease and spent more than $80 million on a site currently
occupied by a hotel. The setback distance between the buildings and the
perimeter was below standard; moreover, the perimeter shared a wall with
local shopkeepers and was overlooked by several tall buildings. Following a
revealing departmental report in January 2012, the site was abandoned
before the consulate was ever established.
Setback is specified mostly as a control on blast, but firearms can strike
from further away, particularly from elevated positions. For instance, in
Kabul, Afghanistan, on September 13, 2011, insurgents occupied an
unfinished high-rise building overlooking the U.S. Embassy about 300 yards
away. With assault rifles and rocket-propelled grenades they wounded four
Afghan civilians inside the U.S. perimeter before Afghan forces cleared the
building the next day.
In suburban locations, diplomatic missions typically have more space but
are more remote from urban stakeholders. Sometimes diplomatic missions are
established in suburban and rural locations with plenty of space, but with
exceptions to the security standards given their ongoing temporary status, the
constraints of surrounding private property, and diplomatic goals of
accessibility to locals. For instance, the small U.S. mission in Benghazi was
noncompliant with the standards at the time of the devastating attack of
September 11, 2012, which killed the Ambassador and three other federal
employees. In fact, it was little more than a small private villa on lease.
Blast Barriers
Blast barriers are required to baffle and reflect blast—often they are
incorporated into the barriers to vehicles and humans too.
Specialized blast walls are normally precast from concrete and designed
for carriage by standard construction equipment and to slot together without
much pinning or bonding. They can be improvised from sandbags, temporary
containers filled with earth (some flat-packed reinforced-canvas bags and
boxes are supplied for this purpose), vehicles filled with earth, or simply
ramps of earth. Stacked hard items of small size do not make good blast
barriers because they collapse easily and provide secondary projectiles.
The readiest and best blast barriers are actually other complete buildings.
Any structure will reflect or divert blast waves. Buildings have layers of
load-bearing walls. The first structure to be hit by blast will absorb or
deflect most of the energy that would otherwise hit the second structure in the
way, so the ideal expedient is to acquire a site with buildings around the
perimeter that are not needed and can be left unoccupied to shield the
occupied buildings on the interior of the site. However, this ideal is
expensive in material terms and also implies extra burdens on the guards
who must ensure that malicious actors do not enter the unoccupied structures.
This is why setback usually specifies an uncovered space between perimeter
and accommodation. However, advantages would remain for the site security
manager who acquires a site surrounded by other built sites with sufficient
security to keep blast weapons on the outside of them all. Indeed, this is one
principle behind placing a more valuable site in the center of an already
controlled built site.
Protective Glass
Glass is a frequent material in windows and doors and surveillance systems
(see above). However, it is usually the most fragile material in a built
structure, so is easiest for actors to break when attempting access.
Moreover, when glass fails, it tends to shatter into many sharp projectiles,
although species are available that shatter into relatively harmless pieces.
Shatterproof and bulletproof species are essentially laminated plastic and
glass sheets, where more sheets and stronger bonds imply less fragility.
Glass can be minimized by designing buildings with fewer and smaller
apertures. Glass can be used in shatterproof or bulletproof species where
human break-ins or small arms attacks are sufficiently likely. Laminated
sheets of glass and plastic are useful for protecting guards who must have
line of sight in exposed locations in order to perform their duties. Pieces of
such material can be set up even in temporary or emergency sites, where no
built structures are present.
SUMMARY
QUESTIONS AND EXERCISES
14
Scope
This section defines information security, information communications
technology, and cyber space.
Information
Information security includes the security of information in all its forms, of
which the most important conventional categories are verbal and cognitive
forms, hard forms (including paper documents and artifacts), information
technology (IT), information and communications technology (ICT), and
cyber space (essentially electronically networked information and ICTs).
Information can take many forms—from data sets of confidential
personal information through to records of sensitive meetings, personnel
records, policy recommendations, correspondence, case files[,] and
historical records . . . Therefore, information risks are not necessarily
the same as IT security risks (although managing IT security is usually a
critical component of any strategy to manage information risks). (U.K.
National Archives, 2008, p. 2)
ICTs
Information technology normally refers to any electronic or digital means of
holding or communicating information. Some authorities prefer to refer to
information and communications technology (ICT) in order to bring more
attention to communications technologies, such as radios, telephones, and e-
mail.
Cyber Space
In recent decades, information and communications technology have tended
to be conflated inaccurately with cyber space. Cyber space is not a tight term
but best refers to digitally networked information and information
technologies, normally personal computer terminals, but increasingly also
mobile devices such as mobile telephones, connected to remote computers or
hard drives (“servers”) via a digital network (either the Internet/World Wide
Web or an organizational Intranet).
Sources of Attacks
The sources of cyber attacks are the human actors behind the attacks. These
sources include official actors (such as spies), profit-oriented organized
criminals, terrorists, commercial competitors, ideologically motivated
hackers (including campaigners for political and Internet freedoms),
inquisitive people, and journalists. Another key categorization is
between external and internal threats (those without or within the target
organization).
Some of the categories above overlap and some are defined more by their
vectors than motivations. (Vectors are described later.) Here, the subsections
below describe the four main categories of source: profit-oriented criminals;
insider threats; external threats; and nation-states.
1. National governments
2. Terrorists
3. Industrial spies and organized crime organizations
4. Hacktivists
5. Hackers
The U.S. GAO (2005b, p. 5) reviewed data from the FBI, Central
Intelligence Agency (CIA), and the Software Engineering Institute before
publishing the following list of sources:
• Hackers break into networks for the thrill of the challenge or for
bragging rights in the hacker community. While remote cracking
once required a fair amount of skill or computer knowledge,
hackers can now download attack scripts and protocols from the
Internet and launch them against victim sites. Thus, while attack
tools have become more sophisticated, they have also become
easier to use. According to the CIA, the large majority of hackers
do not have the requisite expertise to threaten difficult targets such
as critical U.S. networks. Nevertheless, the worldwide population
of hackers poses a relatively high threat of an isolated or brief
disruption causing serious damage.
• Bot-network operators are hackers; however, instead of breaking
into systems for the challenge or bragging rights, they take over
multiple systems in order to coordinate attacks and to distribute
phishing schemes, spam, and malware attacks. The services of
these networks are sometimes made available on underground
markets (e.g., purchasing a denial-of-service attack, servers to
relay spam or phishing attacks, etc.).
• Criminal groups seek to attack systems for monetary gain.
Specifically, organized crime groups are using spam, phishing, and
spyware/malware to commit identity theft and online fraud.
International corporate spies and organized crime organizations
also pose a threat to the United States through their ability to
conduct industrial espionage and large-scale monetary theft and to
hire or develop hacker talent.
• Foreign intelligence services use cyber tools as part of their
information-gathering and espionage activities. In addition, several
nations are aggressively working to develop information warfare
doctrine, programs, and capabilities. Such capabilities enable a
single entity to have a significant and serious impact by disrupting
the supply, communications, and economic infrastructures that
support military power—impacts that could affect the daily lives of
U.S. citizens across the country.
• The disgruntled organization insider is a principal source of
computer crime. Insiders may not need a great deal of knowledge
about computer intrusions because their knowledge of a target
system often allows them to gain unrestricted access to cause
damage to the system or to steal system data. The insider threat also
includes outsourcing vendors as well as employees who
accidentally introduce malware into systems.
• Phishers [are] individuals, or small groups, that execute phishing
schemes in an attempt to steal identities or information for
monetary gain. Phishers may also use spam and spyware/malware
to accomplish their objectives.
• Spammers [are] individuals or organizations that distribute
unsolicited e-mail with hidden or false information in order to sell
products, conduct phishing schemes, distribute spyware/malware,
or attack organizations (i.e., denial of service).
• Spyware/malware authors [are] individuals or organizations with
malicious intent [who] carry out attacks against users by producing and
distributing spyware and malware. Several destructive computer
viruses and worms have harmed files and hard drives, including the
Melissa Macro Virus, the Explore.Zip worm, the CIH (Chernobyl)
Virus, Nimda, Code Red, Slammer, and Blaster.
• Terrorists seek to destroy, incapacitate, or exploit critical
infrastructures in order to threaten national security, cause mass
casualties, weaken the U.S. economy, and damage public morale
and confidence. Terrorists may use phishing schemes or
spyware/malware in order to generate funds or gather sensitive
information.
• Nation state.
• Organized criminal.
• “A cyber terrorist uses internet-based attacks in terrorist activities,
including acts of deliberate, large-scale disruption of computer
networks.”
• Hacker.
• “A hacktivist uses computers and networks as a means of protest to
promote social, political, or ideological ends.”
• “A script kiddie uses existing computer scripts or code to gain
unauthorized access to data, but lacks the expertise to write custom
tools.”
• Malicious insider (Mateski et al., 2012, pp. 7, 11)
Profit-Oriented Criminals
Most profit-oriented criminals are phishing for information that would allow
them to steal the target’s identity for profit. Normally they are looking for
control of financial assets. Even if victims lose no financial assets, they face
at least the opportunity costs of restoring the security of their identity.
More than 1.5 million people a year suffer the theft of their identity for an
annual economic loss estimated at $1 billion (UN Office on Drugs and Crime
[ODC], 2010).
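A quick back-of-envelope check of these figures (a minimal sketch; the per-victim average is implied by the cited totals, not stated in the source):

```python
# Rough per-victim cost implied by the UN ODC (2010) figures cited above.
victims_per_year = 1_500_000
annual_loss_usd = 1_000_000_000

avg_loss_per_victim = annual_loss_usd / victims_per_year
print(f"Implied average loss per victim: ${avg_loss_per_victim:,.0f}")  # ~$667
```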
Profit-oriented criminals target mostly individuals, by sending emails
pretending to be a friend in need, a charity, or a potential business partner,
and asking for money.
Rarer, more sophisticated threats have stolen from the largest banks and
official departments. For instance, in July 2007, malware nicknamed “Zeus,”
which had been spread mainly through phishing e-mails and fake
websites, was identified after it had stolen information from the U.S.
Department of Transportation. Zeus was designed to harvest login credentials
stored on the target computer and to capture keystrokes during the user’s
logins. Zeus was also a backdoor malware, meaning that it could take
commands from its controllers, who remotely upgraded it and changed its
missions. In June 2009, Prevx (a commercial security service provider)
reported that Zeus had compromised more than 74,000 File Transfer Protocol
accounts across dozens of websites, including websites owned by the Bank
of America and the U.S. National Aeronautics and Space Administration.
Thousands of login credentials were stolen from social media accounts at
Facebook, Yahoo, and other providers. In July 2010, Trusteer (another security service
provider) reported that Zeus had captured information on credit cards issued
by 15 U.S. banks (Trusteer did not name the banks). On October 1, 2010, the
FBI announced that it had discovered an international criminal network that
had used Zeus to steal around $70 million from U.S. targets. Arrests were
made in the United States, Britain, and Ukraine. Around 3.6 million
computers had been infected in the United States, perhaps millions more
internationally (Netwitness, 2010).
In 2007, TJX Companies admitted that 45 million credit card numbers
were exposed to hackers who had accessed databases over a period of 3
years. In 2009, Heartland Payment Systems admitted that malware had
penetrated the servers that processed 100 million credit card transactions per
month, but did not know the actual number of credit cards compromised.
On May 9, 2013, prosecutors in New York unsealed indictments against
eight men accused of being the local cell of a global cyber-theft operation.
Beginning in October 2012, hackers broke into the computer networks of
financial companies in the United States and India and eliminated the
withdrawal limits on prepaid debit cards before withdrawing tens of millions
of dollars from ATMs in more than 20 countries around the world. First, hackers
breached an Indian firm that processes credit card transactions for
MasterCard debit cards issued by Rakbank, an institution in the United Arab
Emirates, then they withdrew $5 million in 4,500 ATM transactions. Second,
hackers breached a MasterCard processor in the United States that handled
transactions for prepaid debit cards issued by the Bank of Muscat in Oman,
then they withdrew $40 million in 36,000 transactions over a 10-hour period.
Insider Threats
Insider threats are personnel who are employed, authorized, or granted
privileges by the organization but who harm the organization in some way.
For instance, Dongfan Chung, an engineer who transferred secrets to China,
mostly relating to military aircraft and the Space Shuttle, had hidden 250,000
pages of paper documents with sensitive information under his home by the
time he was arrested in 2006. Almost twice as much information would fit on
one compact disc (ONCIX, 2011, p. 2). Similarly, in January 2010, Bradley
Manning, a soldier of Private rank in the U.S. Army, then assigned as an
intelligence analyst to a base in Iraq, stole the largest amount of restricted
data ever leaked from one source—more than 260,000 U.S. diplomatic
cables and more than 500,000 military reports about or from Iraq and
Afghanistan. He downloaded all the information on to digital media, which
he carried out of the secure facility. In March 2010, he started to leak
documents to the website Wikileaks. He was betrayed to the FBI in May by a
hacker to whom Manning had described his activities in an online forum. His
correspondence included this damning assessment of the information security:
“Weak servers, weak logging, weak physical security, weak counter-
intelligence, inattentive signal analysis . . . a perfect storm.”
A public-private survey in the United States in 2007 found that 31% of
electronic crime perpetrators in the United States were insiders: 60% of
these insiders were thieves, while 40% intended to sabotage IT; of the
saboteurs, very few (2% of all cases) acted for financial gain, while most
sought vengeance against an employer or colleague. Almost all IT sabotage
was perpetrated from inside the IT industry itself.
Insiders could be self-motivated or directed by external actors,
perhaps unknowingly (the external actor could trick the insider into
thinking that they are acting on behalf of the same employer) or knowingly
(the insider could accept a bribe to traffic information). US-CERT drew
attention to the increasing role of external actors in insider threats after
finding that half of insider threats from 2003 to 2007 in the United States had
been recruited by outsiders, including organized criminals and foreign
governments. CERT also found more crimes perpetrated by the employees of
business partners that had been granted privileges inside the organization.
New mergers and acquisitions also increase the chance of insider threats
(Cappelli, Moore, Trzeciak, & Shimeall, 2009, p. 6). Germany’s Federal
Office for the Protection of the Constitution (BfV) estimates that 70% of all
foreign economic espionage involves insiders (U.S. ONCIX, 2011, p. B1).
Although espionage is likely to involve malicious choices, most insider
threats who release sensitive information are carelessly rather than
maliciously noncompliant with the access or transfer controls. Even the most
senior employees can be noncompliant. For instance, on November 9, 2012,
General David Petraeus resigned as U.S. Director of Central Intelligence
after revelations of his affair with Paula Broadwell, a former U.S. Army
intelligence officer (and his biographer). Her harassing e-mails to another
woman prompted a criminal investigation that unearthed her privileged
access to Petraeus’ private and classified information, partly through a web-
based e-mail account that they had shared in an attempt to communicate
privately.
Information security experts prescribe more monitoring and compliance
training, but also suggest that about 5% of employees will not comply
despite the training. Most training is formal, but most people are better at
recalling than applying formally trained knowledge. More experiential
training would help the employee to become more self-aware of their
noncompliance, but even so, some people are not inherently compliant or
attentive. In order to catch the very few people who are chronically
noncompliant, the organization is forced to monitor them increasingly
obtrusively, which is restricted by ethical and legal obligations and by
material challenges: in a large organization, monitoring most people most of
the time would be prohibitively expensive and legally risky, and would raise
the employees’ distrust and stress. In many jurisdictions, dismissal of employees
is difficult. At the same time, the risks of an insider threat are increasingly
great. By 2010, some companies had added noncompliance as a dismissible
offense (after two or three breaches) to employment contracts. Nondisclosure
agreements (in which the employee promises not to release sensitive
information, even after separation) became commonplace in the 1990s.
External Threats
External actors are not employed by the target organization, but the target
organization may have granted them privileged access to internal information
or domains. Practically, any commercial or public relationship involves
some compromise of the boundary between internal and external actors. For
instance, when the organization outsources services, such as Internet
services, to an external actor, the external actor would be granted privileged
information about the organization’s network. Some competitors may pretend
to be potential clients or business partners in order to gain information that is
then useful for developing some competitive product or service. For
instance, in the 2000s, some French, German, and Japanese companies
complained that Chinese partners developed high-speed electric railways
from information gathered from bids for unawarded contracts, as well as
from supplied but patented technologies.
When the organization procures anything externally, the resulting supply
chain is exposed to malicious actors. For instance, official procurers worry
about buying computers from abroad where foreign intelligence services
could plant malware on the computers to spy on official users. In theory,
malicious actors could sabotage acquisitions in more traditional ways, such
as by planting an explosive device inside a delivered package. (These risks
—potential espionage or sabotage—are separate from traditional commercial
risks, such as a supplier’s nonperformance—its failure to deliver something
on the schedule or with the capabilities specified.) Official procurers also
worry about external ownership of their supply chain, where an external
actor could steal intellectual property (in addition to the more traditional
supply chain risk of simply interrupting supply).
External threats may access information or plant malware after
procurement, during the deployment, configuration, and integration of some
procured hardware, such as when external actors are employed to train users
in their new hardware or to set up the new hardware for use. External actors
could distribute malware through the software acquired for the hardware or
through peripheral devices. Periodic maintenance, servicing, and upgrades
are also opportunities for external malicious intervention. A final opportunity
for the malicious actor in the life cycle of the hardware is during the
organization’s retirement or deletion of the hardware from service or use.
The hardware is often sent for disposal without proper removal of the
information contained therein. (Many operating systems do not entirely delete
files when ordered to delete.) Insiders may contribute to this risk by selling
hardware to external actors or by diverting hardware to their friends and
family rather than obeying orders to destroy it.
Nation-States
The average nation-state government has more capacity than the average
private actor (although some private corporations are wealthier than most
governments). Fewer than 10 of the nearly 200 sovereign governments in the
world are commonly cited as having most of the capacity or intent for cyber
attacks. Official capacity for cyber attacks (also known as offensive cyber
warfare) is usually assigned to intelligence or military agencies. U.S.
intelligence and security agencies often categorize foreign intelligence
services (FISs) and foreign intelligence and security services (FISSs)
separately from other threats; operationally, the terms mean any foreign
official threat short of positive identification as a particular service or
agency. In recent years, U.S. officials have referred to advanced persistent
threats (APTs) as code for national threats in general and Chinese threats in
particular.
National threats have more capacity for using information and
communications technologies as vectors for their phishing. The victims are
often unwilling to reveal events because of the commercial impacts (in the
case of commercial victims) or the bureaucratic or political impacts (in the
case of official victims), so the true frequency of these attacks is greatly
underestimated.
Largely anonymous and anecdotal reports suggest that APTs will research
a particular organization over weeks before attacking over days, as widely
and repeatedly as possible before the different attacks are recognized as
threatening. The attacks are conducted in campaigns with multiple methods,
cumulatively penetrating deeper into a target’s defenses, despite frequent
failures. Human operators are more important than the technological tools:
the sources gather much of their information by phishing and social
engineering through direct communications with the targets and by adapting
to the defenses. The attacks themselves are usually e-mails to executives,
usually tailored to the target person by pretending to be someone whom the
target knows or by attaching malware disguised as something of interest to
the target (such as a document about the oil industry sent to an oil industry
executive).
Mandiant, a cyber security provider, describes (2013, p. 27) an APT “attack
lifecycle” that runs from initial reconnaissance and initial compromise,
through establishing a foothold, escalating privileges, internal
reconnaissance, and lateral movement, to maintaining a presence and
completing the mission.
Unofficially, some U.S. officials have stated publicly that about 140
FISSs target the United States, of which about 50 have serious capacity to
harm the United States and five or six are severe threats. The commonly
identified national cyber threats are China, Russia, Israel, Iran, North
Korea, and France (the sources are not necessarily official; they could be
private activists, private actors with official support, or disguised official
actors).
On April 12, 2011, the Director of Intelligence at U.S. Cyber Command
(Rear Admiral Samuel Cox) told subordinates that “a global cyber arms
race is underway” and at least six countries have offensive cyber warfare
capabilities that they are using to probe U.S. military and private
computer networks.
[Russia, China, and Iran] will remain the top threats to the United
States in the coming years . . . Russia and China are aggressive
and successful purveyors of economic espionage against the
United States. Iran’s intelligence operations against the United
States, including cyber capabilities, have dramatically increased
in recent years in depth and complexity. (Director of National
Intelligence James R. Clapper Jr. written statement to U.S. Senate
Armed Services Committee on February 16, 2013)
Access Vectors
While the ultimate sources of cyber attacks are human actors, most cyber
attacks are vectored by some sort of information or communications
technology. The subsections below describe these vectors
and their controls: printed documents, social interactions, malware,
databases, webpages, social media, postal communications, telephone
communications, e-mail, removable digital media, cloud computing, and
unsecured wireless networks.
Social Interaction
Most harmful leakage of privileged information arises from a social
interaction, such as when people talk too loosely about private information
or are verbally persuaded to give up information to somebody who is not
who they claim to be. Similarly, most unauthorized access to digital
information is gained socially, even if the source uses digital media as their
vector for the social interaction. ICTs (especially mobile telephones and
Internet-based communications) have enabled more remote communications
that users tend to treat casually.
A malicious social interaction could take one of four main directions:
1. The threat could contact you pretending to be someone you know, such
as a client or colleague (known as phishing or spear phishing)
2. The threat could pretend to be you in order to persuade a third-party,
such as your bank, to release information about you (known as
spoofing in American English or blagging in British English)
3. The threat could bribe someone to release information about you
4. The threat could blackmail you or someone you know
Of these routes to information, the route that has become much easier
through digital technologies is phishing. Phishing can be used as a means to
any end, including sabotage, but is commonly defined and discussed as a
form of espionage.
Pedagogy Box 14.10 Social Gathering by
British Journalists and Private Investigators
in the 2000s
Malware
Malware is harmful software. It is sometimes created by accident or
for fun, but it is usually developed or exploited for malicious objectives. One
expert noted “that the current trend is that there is now less of a propensity to
make the user aware of the presence of malicious code on a computer, and
more of a will to have the code run silent and deep so that the attacker can
remotely control the target’s computer to launch massive attacks or exfiltrate
data from a sensitive network” (Yannakogeorges, 2011, p. 261).
Commercial software itself has become more complicated, increasing the
chance of inherent flaws or of vulnerabilities to attack. In 2005, the U.S.
National Institute of Standards and Technology estimated 20 flaws per
thousand lines of software code; Microsoft Windows 2000 (a computer
operating system) had 35 million lines (U.S. GAO, 2005b, pp. 9–10). U.S.
officials once estimated that 80% of successful intrusions into federal
computer systems are vectored through flawed software (Wilson, 2003, p. 6).
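Those two estimates imply an enormous number of latent flaws in a single product; a quick back-of-envelope calculation (a sketch using only the figures cited above):

```python
# Implied latent flaws in Windows 2000, using the NIST (2005) estimate of
# ~20 flaws per 1,000 lines of code and the cited 35 million lines.
flaws_per_kloc = 20
lines_of_code = 35_000_000

estimated_flaws = (lines_of_code / 1_000) * flaws_per_kloc
print(f"Estimated latent flaws: {estimated_flaws:,.0f}")  # ~700,000
```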
Databases
Almost everybody provides sensitive information that is held on some other
organization’s media—sensitive information such as ethnicity, gender,
sexuality, religion, politics, trade union membership, birth, death, marriage,
bank account, health care, and crimes (as either perpetrator or victim). The
growth of bureaucratic capacity and digital communications has encouraged
wider handling of such data and the handling of more data. One British
authority estimated that such sensitive information about the average adult
Briton is held in around 700 databases (U.K. ICO, 2006, p. 7). ICTs have
made the holding of data easier but also exposed more data to cyber attack.
The Privacy Rights Clearinghouse estimated that 30.4 million sensitive
records were exposed by just 535 cyber intrusions in the United States in
2011 alone.
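Those figures imply a strikingly large average breach; a quick check (a sketch using only the figures cited above):

```python
# Average records exposed per intrusion, per the Privacy Rights
# Clearinghouse figures for 2011 cited above.
records_exposed = 30_400_000
intrusions = 535

print(f"Average records per intrusion: {records_exposed / intrusions:,.0f}")
# ~56,822 records per intrusion
```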
The U.S. ONCIX (2011, p. A4) gives advice on managing data.
Since the 1980s, standards, regulations, and law have increased the
responsibilities and liabilities of organizations for securing data and for
granting freedom of access to data by those whom it concerns. In 1980,
the Organization for Economic Cooperation and Development (OECD)
issued its seven “Guidelines Governing the Protection of Privacy and
Trans-Border Flows of Personal Data.”
Webpages
Most Internet activity involves online searches, browsing, and e-mail.
Visiting the associated webpages exposes the user to malware, particularly
if the user downloads files or is misled into visiting a counterfeit login
page, where the threat gathers the user’s passwords and other access keys.
Users tend to underestimate the insecurity of their online activities. Unlike
the typical software on an Intranet or other organizationally secured domain,
web-based applications are designed to be accessible and easy to use more
than to be secure. Some sites and browsers, by default, place information
packets known as “cookies” on the user’s computer. Legitimately, these
cookies are used by the site to recognize the user; illegitimately, they can
upload information back to the site that the user has not authorized. Some of
this information may be used for purposes that the user finds useful, such as
more targeted advertising, but often it leads to unwanted advertisements or
can be sold to third-party marketers, including “spammers” (people who
send you information you never requested and do not want). Worse, cookies
can be vectors for malware.
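To make the mechanism concrete, the sketch below parses a cookie with Python’s standard library; the cookie name, value, and domain are hypothetical, for illustration only:

```python
# Parsing a cookie string with Python's standard library, to show the kind
# of data a site can place on a user's machine. The name, value, and
# domain here are hypothetical.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load('session_id=abc123; Domain=.example.com; Path=/')

for name, morsel in cookie.items():
    # Each "morsel" carries the value plus attributes such as the domain
    # that may receive it back on every subsequent request.
    print(name, morsel.value, morsel['domain'], morsel['path'])
```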
Social Media
Social media are normally websites on which personal users release
information about themselves or subscribe to information from other users.
The most used social websites are Facebook, LinkedIn, and Twitter. Some
sites specialize in sharing photographs, videos, and audio files, some in
romantic connections, some in professional connections, some in social
games. Many alternatives exist—some official actors, such as the Iranian
government, have developed alternatives in an effort to impose control on or
gain information from social media.
Social media are exposed to anyone who browses the same sites. Social
media encourage users to expand their social network, but unguardedly or
superficially. Many social media allow anonymous discussions and postings.
Generally, users tend to behave more anonymously but also more revealingly
online. Users of social media may believe that their information is restricted
to their friends, but some social media allow anybody to view information on
anybody else, store such information in insecure domains, or even sell such
information. Some social media promise not to sell information or to prevent
access to the posted information except to the user’s “friends,” but the
friends, as defined by the user and the site, likely include casual
acquaintances—indeed, users, in pursuit of a larger count of online “friends,”
often agree to “friend” anyone who asks. Social media thus contribute to a
user’s cumulative online presence, from which threats might gather enough
information to steal the individual’s identity.
Some very important persons and officials have been careless on social
networks. For instance, in November 2012, a Belgian newspaper published
an investigation into how many employees of the Belgian state security
service listed their employer on Facebook or LinkedIn. Several French users
of LinkedIn had listed their employer as France’s external intelligence
agency. American journalists found more than 200 users of LinkedIn listing
the Central Intelligence Agency as their employer.
Users are likelier to be more revealing of personal and professional
information when they are looking for another job or romantic partner. For
instance, official employees, after leaving official employment or when
seeking another job, have been known to distribute online, as evidence for
their professional qualifications, photographs of themselves inside secured
domains, of their official credentials, and of themselves with important
persons.
Additionally, the privacy and accuracy of information is less protected
legally when online than offline, encouraging rampant online slander,
defamation, misinformation, and abuse, including online bullying. Private
citizens face practically no criminal legal restrictions on online behavior.
The tort system is usually a waste of time for claimants or victims; the few
cases that have been heard in civil or criminal court usually fail to clarify
responsibility or harm. In practice, cyber information and misinformation are
controlled by users and hosts, motivated mostly by their intrinsic ethics and
external commercial pressures, not the law. In the United States, the
Communications Decency Act of 1996 makes clear that Internet service
providers and online hosts are not responsible for the content of posts from
outside parties; a positive implication of this clarity is that hosts are free to
moderate or delete content without incurring legal liability.
Traditionally, public concern has been raised over juvenile use of social
media where adults (probably pretending to be other juveniles) could groom
them for abuse, such as by asking them to share intimate information or even
to meet in the real world. Some social media are supposed to be reserved for
juveniles, but any adult could register as a user. In any social media,
juveniles routinely disclose their own identities or the identities of other
juveniles, in contrast to most news media, where journalists normally follow
codes of ethics that prohibit revelations of juvenile identities, even when
juveniles are accused of crimes. Juveniles are capable of undermining such
regimes. For instance, in February 2013, users of online forums on a website
devoted to issues in Fairfax, Virginia, revealed the names of three male high
school students who had been arrested for allegedly making videos of
themselves having sex with several girls, during a news embargo on those
same names. More worryingly, juveniles are quite capable of exploiting each
other and adults by online bullying, slander, and defamation. Adults are
disadvantaged against juvenile threats because juveniles are protected in
ways that adults are not and are not equally accountable.
Adult users too can be exploited sexually, in similar ways (posting of
intimate information or grooming for sexual exploitation). The average adult,
being wealthier and in possession of more valuable information than the
average juvenile, is more likely to be exploited financially or professionally.
In pursuit of friends, employers, or romantic partners, individual users tend
to post online information that they would not reveal offline—commonly
including their sexuality, age, address, profession, and hobbies. In
communicating through social media with supposed friends, they may discuss
private information such as their health or romantic partners; they may also
reveal plans, such as foreign travel, that encourage thieves to target their
homes. They also may be persuaded to send money to strangers or agree to
meet people in the real world who turn out to be robbers or worse.
Postal Communications
Posted mail can be intercepted; postal deliverers have been bribed to divert
mail; threats can also seize the mail from the container into which it has been
posted or delivered, before it is picked up by the deliverer or the recipient.
Some private actors, such as unscrupulous journalists, commercial
competitors, jealous romantic partners, petitioners for divorce, and stalkers,
are incentivized to intercept communications. Common thieves also steal
mail in pursuit of checks, credit cards, or information that helps them to
steal personal identities. Official investigators may also intercept
mail in pursuit of evidence for crimes.
Nevertheless, postal mail remains ubiquitous, despite some replacement
by ICTs. The increasing use of electronic means of transacting business has
reduced the use of paper transactions, but increasing interception of the
electronic means (by stealing credit card numbers or intercepting
transactions, including wireless means of paying) suggests that electronic
means are not perfect replacements. Meanwhile, official authorities have
demonstrated increased capacity and willingness for intercepting or blocking
private communications.
Some people, including activists, businesspersons, diplomats, profit-
oriented criminals, and terrorists, have returned to verbal and postal
communications since revelations in the 2000s of the insecurity of their
digital and electronic communications. For instance, al-Qaida’s most senior
staff switched back to traditional communications in the early 2000s, after
discovering that some of their e-mails, mobile telephones, and satellite
telephones had been intercepted occasionally since the 1990s. Nevertheless,
U.S. intelligence eventually identified a courier whose movements led them
to Osama bin Laden’s hiding place in Pakistan, where he was killed by U.S.
special operations forces on May 2, 2011.
Similarly, official authorities continue to use couriers because of concerns
about electronic and digital espionage. For instance, the U.S. Bureau of
Diplomatic Security (part of the Department of State) operates a courier
service for the carriage of classified materials in diplomatic pouches (which
international law treats as sovereign territory) between diplomatic sites at
home and abroad. In 2008, the service employed 98 couriers and delivered
more than 55 million pounds of classified diplomatic materials (U.S. GAO,
2009, p. 6).
Telephone Communications
Increasing use of information and communications technology implies
increasing exposure to interception of our communications. Telephones have
been commercially available for more than 100 years. By 2012, more than
half of the world’s population possessed a mobile telephone. The Internet
offers effective telephone replacement technologies, such as Voice Over
Internet Protocol (VOIP), of which the most well-known carrier is Skype.
Most people underestimate their exposure; new technologies encourage
more spontaneous and casual communications. The growth of e-mail tends to
obscure the inherent exposure of telephones to hacking. Telephones and the
cables carrying wired communications always have been easy to “tap” for
anyone with physical access to the hardware. Such tapping is still rife in
autocracies and is more likely where the government controls the service or
the service providers, or where the providers are open to corruption. In most
democracies, taps are allowed only with a temporary warrant for the
purposes of criminal justice or national security, but some officials have
allowed unwarranted taps, while nonofficials, such as private detectives,
have illegally tapped telephones or obtained data from a corrupt service
provider.
A mobile or cellular telephone, like any telephone, can be tapped directly
if the threat can physically access the device and place a bugging device
within it. Otherwise, mobile telephones are more difficult to tap physically
because their communications pass wirelessly, but the threat can use cheap
technology to intercept wireless communications (if the threat knows the
user’s telephone number and is proximate to the target). Smartphones
(telephones that run software) can be infected, through Internet downloads or
an open Bluetooth portal, with malware that records or allows a remote
threat to listen in on the target’s conversations. Sophisticated threats can
remotely turn on a target’s mobile telephone and use it as a bugging device
without the target even realizing that the telephone is on: the only defense is
to remove the battery.
Voicemail and text messages in many ways are more exposed than is a
verbal conversation. A telephone conversation is fleeting and not recorded
by default, but voicemail and texts are stored until the user deletes them.
Many service providers allow remote access to voicemail and texts through
a third-party telephone or website, perhaps after passing some access
control, such as typing in a personal identification number (or password).
Many users do not change whatever default password was issued with a
particular telephone or by a particular provider, so a threat (hacker) could
access private voicemail using a third-party telephone or computer and a
known default password. The hacker could call the provider and pretend to
be the user in order to reset the password. The hacker could configure
another telephone to pretend to have the same telephone number as the
target’s telephone. The hacker could also bribe an employee of the service
provider to reveal confidential information.
Concerned users should add access controls to their telephones, such as a
password required before the telephone can be used. They should check and delete
their voicemails and text messages frequently. They could choose to prevent
access to their voicemails and texts except with a passcode entered from
their own phone. They could remove the battery from their mobile telephone
except when they need to use it. They could also use temporary telephones
that expire or can be discarded regularly. They could avoid smartphones or
Bluetooth devices or at least eschew any access to the Internet from such
devices. If they need to use such devices, they should procure security
software and keep it up to date. They could avoid personal ownership of
communications technology entirely—an extreme solution, but one that more
officials are adopting.
These controls may sound severe to casual users of current
communications devices, but many official organizations and some highly
targeted corporations now ban their employees from using or carrying
smartphones, Bluetooth devices, or third-party devices inside secure domains or for
the discussion of any organizational information. In addition, many malicious
actors, such as terrorists, avoid such devices after revelations of how easily
counter-terrorist authorities can use them for spying.
Under its charter, the U.S. Central Intelligence Agency is not allowed to
gather intelligence on U.S. citizens at home, but from 1961 to 1971 the
CIA spied on domestic anti-Vietnam War groups, communists, and leakers
of official information. Both CIA and NSA intercepted private telephone
communications during that time. In 1972, the CIA tracked telephone calls
between Americans at home and telephones abroad.
The Foreign Intelligence Surveillance Act (FISA) of 1978 criminalizes
unauthorized electronic surveillance and prescribes procedures for
surveillance of foreign powers and their agents (including U.S. citizens)
inside the United States. The Foreign Intelligence Surveillance Court
(FISC) issues the warrants for such surveillance. The USA PATRIOT Act
of October 2001 extended FISA’s scope from foreign powers to terrorists.
The Protect America Act of August 2007 removed the warrant requirement
if the individual is “reasonably believed” to be corresponding with
someone outside the United States. It expired on February 17, 2008, but
the FISA Amendments Act of July 2008 extended the same amendment. On
May 26, 2011, President Barack Obama signed the PATRIOT Sunsets
Extension Act, which extended for 4 years the provisions for roving
wiretaps and searches of business records. The FISC is known to have
approved all warrant requests in 2011 and 2012. In December 2012,
Congress reauthorized FISA for another 5 years.
Confidential information includes the parties to a private conversation,
not just the content of the conversation. Internet and telephone service
providers hold data on the location and the other party in every
communication made or received. Most democracies have strict legal
controls on the storage and use of such data, and do not allow official
access except with a court order granted to official investigators who can
show probable cause of a severe crime, although such controls are easy to
evade or forget in emergencies.
European law requires providers to retain such data for at least 6
months and no more than 24 months, and not to record content.
On October 4, 2001, before the PATRIOT Act was introduced,
President George W. Bush secretly authorized NSA to collect domestic
telephone, Internet, and e-mail records, focusing on calls with one foreign
node. In 2006, the USA Today newspaper reported that the NSA had
“been secretly collecting the phone call records of tens of millions of
Americans, using data provided by AT&T, Verizon and BellSouth” and
was “using the data to analyze calling patterns in an effort to detect
terrorist activity.” The NSA’s legal authority is the “business records”
provision of the PATRIOT Act. Since 2006, the FISC has approved
warrants to service providers every 3 months; Congress was informed.
The most recent FISC order to Verizon was published by The Guardian
newspaper on June 5, 2013. Senator Dianne Feinstein, the chair of the
Senate’s Intelligence Committee, confirmed the 3-monthly renewals back
to 2006.
In January 2010, the Inspector General at the Department of Justice
reported that, between 2002 and 2006, the FBI sent to telephone service
providers more than 700 demands for telephone records by citing often
nonexistent emergencies and using sometimes misleading language.
Information on more than 3,500 phone numbers may have been gathered
improperly, but investigators said they could not glean a full
understanding because of sketchy record-keeping by the FBI.
Pedagogy Box 14.17 British Journalists’
Intercepts of Private Telecommunications
In November 2005, the News of the World published a story about Prince
William’s knee injury. After a royal complaint, the police opened an
investigation into whether mobile phone voicemail messages between
royal officials had been intercepted. In January 2007, the News of the
World’s royal editor (Clive Goodman) and one of the paper’s private
investigators (Glenn Mulcaire) were convicted of conspiring to intercept
communications. Mulcaire had kept a list of 4,375 names associated with
telephone numbers. Police estimated that 829 were likely victims of
phone hacking. Other journalists and private investigators certainly were
involved, but few had kept such records, without which police struggled
to gather evidence against more perpetrators.
In January 2011, the Metropolitan Police (Met) opened a new
investigation into phone hacking. From January 2011 through October
2012, 17 of the 90 related arrests were for interception of mobile phone
voicemail messages (Leveson, 2012, pp. 8, 13, 19; U.K. House of
Commons, 2011a, pp. 47–50).
E-mail
E-mail or electronic mail is a digital communication sent via some computer
network. The ease of e-mail, and the easy attachment of files to e-mail, have
improved communications but also increased the leakage of sensitive
information. E-mails are prolific, users are casual in their use, and service
providers tend to hold data on every e-mail ever sent, including user-deleted
e-mails.
Email systems are often less protected than databases yet contain vast
quantities of stored data. Email remains one of the quickest and easiest
ways for individuals to collaborate—and for intruders to enter a
company’s network and steal data. (U.S. ONCIX, 2011, p. A3)
E-mail is a vector for sensitive information in three main ways, as
described in the subsections below: external phishing, unauthorized access to
stored e-mail, and insider noncompliance with controls on the release of
information.
Phishing
Network applications, such as organization-wide e-mail, are exposed to
phishing attacks (see the section on sources above for a definition of
phishing). If the threat could identify the e-mail address of a target within the
network, the threat could send an e-mail that persuades the target to
download malware that infects the target’s computer; the infected computer
could infect the whole network, perhaps by sending a similar e-mail to
everyone else on the network.
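As a concrete illustration, the sketch below shows one simple heuristic of the kind automated mail filters apply; the function name and addresses are hypothetical, and real filters combine many such signals:

```python
# A minimal (hypothetical) phishing heuristic: flag messages whose
# displayed "From" domain differs from the actual sender's domain,
# a common marker of spear-phishing e-mail.
def looks_like_phishing(display_from: str, actual_from: str) -> bool:
    display_domain = display_from.rsplit('@', 1)[-1].lower()
    actual_domain = actual_from.rsplit('@', 1)[-1].lower()
    return display_domain != actual_domain

# A message claiming to come from a colleague but sent from elsewhere:
print(looks_like_phishing('chief@yourfirm.com', 'chief@yourf1rm.net'))  # True
print(looks_like_phishing('chief@yourfirm.com', 'chief@yourfirm.com'))  # False
```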
Noncompliant Users
E-mail can be hacked maliciously, but most unauthorized access of
information through e-mail is due to noncompliant releases of information by
the legitimate user of that e-mail. Phishers could encourage such release, for
instance by pretending to be a colleague asking for information, but insiders
could e-mail sensitive information on a whim, such as when they want to
share a story for amusement or complaint, without realizing that they are
violating privacy.
A study by MeriTalk (October 2012) reported that the U.S. federal
government sends and receives 1.89 billion e-mails per day, and that the
average federal agency sends and receives 47.3 million e-mails each day. Of
the surveyed federal information managers, 48% reported that unauthorized
information had leaked by standard work e-mail (more than by any other
vector), 38% by personal e-mail, and 23% by web-based work e-mail.
Forty-seven percent wanted better e-mail policies, 45% reported that
employees did not follow these policies, and only 25% rated the security of
their current e-mail system with an “A” grade.
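The two volume figures are mutually consistent; a quick check (a sketch using only the numbers cited above):

```python
# Consistency check of the MeriTalk (2012) volume figures cited above.
total_emails_per_day = 1_890_000_000
emails_per_agency_per_day = 47_300_000

implied_agencies = total_emails_per_day / emails_per_agency_per_day
print(f"Implied number of agencies: {implied_agencies:.0f}")  # ~40
```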
E-mail can be encrypted before it leaves the internal network, but
encryption can also be used to hide sensitive information from monitoring as
it leaves through the e-mail gateway. Eighty percent of federal information security managers
were concerned about the possibility of unauthorized data escaping
undetected through encrypted e-mails, 58% agreed that encryption makes
detection of such escapes more difficult, and 51% foresaw e-mail encryption
as a more significant problem for federal agencies in the next 5 years
(MeriTalk, 2012).
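The trade-off is easy to see in code. The sketch below uses the third-party Python "cryptography" package (assumed installed) to encrypt a message body symmetrically; once encrypted, the content is opaque to any gateway scanner that lacks the key:

```python
# Symmetric encryption of a message body with the "cryptography" package.
# This protects the content in transit, but it equally hides exfiltrated
# data from a gateway filter that cannot decrypt it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the shared secret; must itself be kept secure
cipher = Fernet(key)

token = cipher.encrypt(b"Draft quarterly results attached.")
print(token)                   # unreadable without the key
print(cipher.decrypt(token))   # recoverable only with the key
```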
Cloud Computing
Over the past year, cloud services have proven no more or no less
secure than other platforms. Cloud computing is a hot business
opportunity in government, but both providers and customers seem to be
cautious enough about the security of the services that it has not become a
major issue. But with major cloud service providers having
experienced several high-profile service outages in the past two years,
reliability has emerged as more of an issue than security. Google
suffered a brief outage in October, but Amazon was the worst hit (or the
biggest offender) with three outages of its Web Services in 2011 and
2012. Most recently, its Northern Virginia data center in Ashburn was
knocked out by severe weather in June and then again because of an
equipment failure in October. Planning for outages and data backup are
as important as security when moving critical operations or services to
the cloud. (Jackson, 2013a, p. 20)
The U.S. DOD intends to transition from more than 1,500 data centers to
one cloud that would be secure against the most sophisticated foreign state-
sponsored attacks. In 2011, DOD announced its Mission-oriented Resilient
Clouds program, and scheduled testing of a system for 2015. The projects
include redundant hosts, diverse systems within the system, and coordinated
gathering of information about threats.
Wireless Networks
In theory any digital network could be tapped. Wired networks must be
tapped inside the network, either by malware or a hard device on the cables
between computers. A wireless network is less exposed to a hard tap but
more exposed to wireless taps. Since wireless traffic is broadcast, no one
has to join the network just to record the traffic, but one would need to break
into the nodes in order to read the traffic. Directional antennas can collect
traffic from further away.
This exposure is more likely for public networks, such as at hotels and
cafes, where access controls are low so that guests can use the network
temporarily using their personal computers. Malicious actors could dwell in
such spaces waiting for a high-value target to use the same network. If the
target’s security is poor, the threat could observe the target’s wireless
communications and even access the target computer itself.
Private networks, such as a network that a family would set up within a
household, become public when households forget to set any access controls
(as simple as a password), share their password with the wrong people, or
forgo any password on the common assumption that nobody would be
interested enough to attack a home network. Since households increasingly
work online, bank online, and send private digital communications from
home networks, the exposure could include professional information,
personal identities, financial assets, and intimate private information.
Sometimes owners assume that their wireless network is too short-range or
remote for anyone else to access, but a skilled threat only needs minutes of
proximate access to do harm.
The exposure of private information on unsecured wireless networks is
illustrated by Google’s inadvertent gathering of private data (including the
users’ passwords, private e-mails, and websites visited) from unsecured
household wireless networks while gathering data on locations for its Street
View service. In 2008 and 2009, Google sent specially equipped personnel
to drive around streets taking images and collecting data on locations for the
database that would underpin Street View. Their equipment triangulated
locations by tapping into unencrypted wireless signals but gathered more
data than was needed. In 2011, the French Data Protection Authority fined
Google $142,000 for taking the private data. In April 2012, the U.S. Federal
Communications Commission found no violation of U.S. law, but fined
Google $25,000 for obstructing the investigation. (Google’s revenue topped
$50 billion in 2012.) Most states within the United States continued to pursue
Google on criminal grounds. In March 2013, Google settled with attorneys
general from 38 states and the District of Columbia: Google committed to
pay $7 million, destroy the personal information, and implement a 10-year
program to train its own employees on privacy and to make the public more
aware of wireless privacy.
Malicious Activities
The two preceding sections respectively described the human sources of
malicious activities and the vectors for malicious activities. This section
describes the actual activities. The subsections below describe malicious
activities by their four primary objectives or effects:
1. Misinformation
2. Control of information or censorship
3. Espionage, including the collection of information and the observation
of the target
4. Sabotage, meaning some sort of deliberate disruption or damage of the
target, including terrorism
Misinformation
New technologies have eased the distribution of information, potentially
helping individuals to access the facts, but also helping malicious actors to
spread misinformation.
These trends are addressed in the subsections below: the democratization
of information, social media, public liabilities, and private liabilities.
Democratization of Information
Traditionally, information was gathered and distributed by officials and
journalists, but consumers now have more access to the sources of information,
are better able to gather information for themselves and to share information,
are becoming more biased in their choices, and are more self-reliant.
Potentially this trend creates more discerning and informed consumers and
less centralized control of information, but it also allows for more varied
misinformation. Certainly, the traditional sources, filters, and interpreters are
diminishing. At the start of 2013, the World Economic Forum offered a
warning about “digital wildfires” due to digital interconnectivity and the
erosion of the journalist’s traditional role as gatekeeper. The report warned
that the democratization of information, although sometimes a force for good,
can also have volatile and unpredictable negative consequences.
Officials often complain about journalists or private interpreters
misrepresenting them, although officials also often manipulate or collude
with these other sources. In effect, we need higher ethical standards on the
side of information providers and fairer discrimination on the side of the
consumer.
The British judicial enquiry into journalistic ethics found that most editors
and journalists “do good work in the public interest” but too many chased
sensational stories with no public interest, fictionalized stories, or
gathered information illegally or unethically, such as by hacking
telephones, e-mail accounts, and computers and by paying officials and
commercial service providers for confidential information (Leveson,
2012, p. 11). “[T]he evidence clearly demonstrates that, over the last 30–
35 years and probably much longer, the political parties of UK national
Government and of UK official Opposition have had or developed too
close a relationship with the press in a way which has not been in the
public interest” (Leveson, 2012, p. 26).
Social Media
Social media can be used to undermine centralized control of information
and to spread misinformation rapidly or convincingly, although social media
can be used also to counter misinformation and to gather intelligence.
Private use of information and communications technology, through social
media particularly, can challenge any person’s or organization’s control of
information and increase their exposure and liabilities. For instance, after
United Airlines denied a claim for damages to a passenger’s guitar, he wrote
a song (“United Breaks Guitars”) about it; in July 2009, he uploaded a video
of himself performing the song to YouTube, where it was viewed more than
12 million times within a year, by when United Airlines stock had dropped
by about 10%, costing shareholders about $180 million. On October 18,
2012, NASDAQ halted trading on Google’s shares after a leaked earnings
report (coupled with weak results) triggered a $22 billion plunge in
Google’s market capitalization.
Sometimes sources are deliberately misinformative and disruptive. For
instance, in 2012, 30,000 people fled Bangalore, India, after receiving text
messages warning that they would be attacked in retaliation for communal
violence in their home state (Assam). In July 2012, a Twitter user
impersonated the Russian Interior Minister (Vladimir Kolokoltsev) when he
tweeted that Syria’s President (Bashar al-Assad) had “been killed or
injured”; the news caused crude oil prices to rise before traders realized the
news was false. On April 23, 2013, Syrian hackers sent a fictional Tweet
through the Associated Press’ Twitter account to more than 2 million users
about explosions in the White House that injured the U.S. President. The
stock market briefly crashed; although it recovered within minutes, the crash
had been worth more than $130 billion in equity market value.
In politics, the practice of creating the false impression of grassroots
consensus or campaigning is called astroturfing. For instance, in 2009,
during a special election for a U.S. Senator from Massachusetts, fake Twitter
accounts successfully spread links to a website smearing one of the
candidates.
Sometimes, satirical information is mistaken for accurate information. For
instance, in October 2012, Iran’s official news agency repeated a story that
had originated on a satirical website (“The Onion”), claiming that opinion
polls showed Iran’s President (Mahmoud Ahmadinejad) was more popular
than the U.S. President (Barack Obama) among rural white Americans.
Misinformation can be corrected by social media. For instance, during
Storm Sandy in October 2012, social media in general proved very effective
for public authorities seeking to prepare the affected population in the north-
east United States. Additionally, after a private tweet falsely claimed that the
New York Stock Exchange was flooded and the claim was repeated by a
televised news channel (CNN), it was discredited by other tweets within an
hour of the first tweet.
Social media can be used to track misinformation or emerging issues. For
instance, in 2012, the U.S. Centers for Disease Control and Prevention
developed a web application that monitors the Twitter stream for specified
terms that would indicate an epidemic or other emerging public health issue.
Meanwhile, the same technologies have been configured by official
authorities to track political opposition (see the section below on
censorship).
A combination of new consumer technologies and increased personal and
social responsibility would be the least controversial solution.
Public Liabilities
Popular information technologies challenge official control of information,
yet encourage popular perceptions that governments are responsible for
permitting the distribution of objectionable information. The same people
who challenge censorship of wanted information might challenge freedoms to
distribute unwanted information. For instance, on September 30, 2005, a
Danish newspaper published cartoons featuring the Prophet Mohammed; a
few outraged local Muslims made threats against the newspaper and the
cartoonist, but the issue became global only in January 2006 after news
media in largely Islamic countries publicized the story. The news then spread
rapidly, prompting protests within days in several countries, targeting varied
Western diplomatic and commercial sites (few of them Danish). Clashes
between protesters and local authorities, including foreign coalition troops in
Afghanistan and Iraq, resulted in perhaps 200 deaths.
Similarly, violent protests started in Egypt on September 11, 2012, after a
video clip from a critical film about the Prophet Mohammed appeared online
dubbed into Egyptian Arabic. The film featured American actors and was
produced by American residents but was known to few other Americans.
Nevertheless, protesters focused on U.S. diplomatic sites: The same day, the
U.S. Embassy in Cairo was invaded; that night, four U.S. personnel were
killed at the mission in Benghazi, Libya; protests spread internationally and
claimed around 50 lives.
Private Liabilities
Unfortunately, we live in an increasingly digital world where severe
censorship and abuses of information live side by side with largely
unrestricted freedoms to slander and defame. Deliberate misinformation is
difficult to prosecute as a crime because of the ease with which the
perpetrators can disguise themselves through social media, the burden of
proof for intent and harm, and the right to freedom of speech. New
technologies have created practical and legal difficulties for the traditional
actors who countered misinformation.
Consequently, some official prosecutors have focused their attention on
easy targets without a clear case for the public good, while ignoring more
socially damaging misinformation and the rampant defamation and slander on
social media. For instance, a British court convicted a man for tweeting that
he should blow up an airport in frustration at the cancellation of his flight, but
in July 2012, his conviction was overturned on appeal, justly, given the
absence of malicious intent or harm.
False allegations can be very costly if made against someone with the
capacity to pursue civil claims. For instance, in November 2012, the British
Broadcasting Corporation (BBC) broadcast an allegation that a senior
politician had been involved in child abuse, which transpired to have been a
case of mistaken identity on the part of the victim. Although the BBC had not
named the politician, he had been named in about 10,000 tweets or retweets.
On top of pursuing legal action against all the people who had spread this
false information on Twitter, the injured politician settled for £185,000 in
damages from the BBC. The BBC’s culpability emerged after revelations that
BBC staff had neglected warnings, including from BBC journalists, that one
of its long-standing television presenters (Jimmy Savile) had abused
children, allegations that were publicized widely only after his death
(October 2011).
Global Trends
In 2012, Freedom House reported, for the period January 2011 through May
2012, that restrictions on Internet freedoms had grown globally and had
become more sophisticated and subtle. Freedom House collected data on
only 47 countries; it claimed to lack data on the other countries, which
included most of Central America, western South America, most of Africa,
and parts of the Middle East and Central Asia, and whose inclusion would
likely make the world look even less free.
Twenty of the 47 countries examined experienced a negative trajectory in
Internet freedom, with Bahrain, Pakistan, and Ethiopia registering the greatest
declines. Four of the 20 in decline were democracies. In 19 countries
examined, new laws or directives were passed that restricted online speech,
violated user privacy, or punished individuals who had posted uncompliant
content. In 26 of the 47 countries, including several democratic states, at
least one person was arrested for content posted online or sent via text
message. In 19 of the 47 countries, at least one person was tortured,
disappeared, beaten, or brutally assaulted as a result of their online posts. In
five countries, an activist or citizen journalist was killed in retribution for
posting information that exposed human rights abuses. In 12 of the 47
countries examined, a new law or directive disproportionately enhanced
surveillance or restricted user anonymity. In Saudi Arabia, Ethiopia,
Uzbekistan, and China, authorities imposed new restrictions after observing
the role that social media played in popular uprisings. From 2010 to 2012,
the use of paid progovernment online commentators spread to 14 of the 47
countries examined. Meanwhile, government critics faced politically motivated cyber
attacks in 19 of the countries covered.
Freedom House rates Internet freedom by country by judging restrictions in
three main dimensions:
• Obstacles to access: Infrastructural and economic barriers to access;
governmental efforts to block specific applications or technologies;
and legal, regulatory, and ownership control over Internet and mobile
phone providers.
• Limits on content: Filtering and blocking of websites; other forms of
censorship and self-censorship; manipulation of content; the diversity
of online news media; and usage of digital media for social and
political activism.
• Violations of user rights: legal protections and restrictions on online
activity; surveillance; privacy; and repercussions for online activity,
such as legal prosecution, imprisonment, physical attacks, or other
forms of harassment.
Freedom House scores each country from 0–100 points and then ranks
each country on a 3-level ordinal scale (free, 0–30 points; partly free,
31–60 points; and not free, 61–100 points). In 2012, 13 countries received
a ranking of “not free” (in order from the least free): Iran, Cuba, China,
Syria, Uzbekistan, Ethiopia, Burma, Vietnam, Saudi Arabia, Bahrain,
Belarus, Pakistan, and Thailand.
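The banding is a simple threshold function; the sketch below expresses it directly (the function name is ours, not Freedom House’s):

```python
# Freedom House's 2012 banding of Internet freedom scores (0-100 points;
# lower scores indicate greater freedom), as described above.
def internet_freedom_rank(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 30:
        return "free"
    elif score <= 60:
        return "partly free"
    else:
        return "not free"

print(internet_freedom_rank(25))   # free
print(internet_freedom_rank(45))   # partly free
print(internet_freedom_rank(75))   # not free
```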
In 2012, Freedom House rated 14 countries as Internet free (in order):
Estonia, United States, Germany, Australia, Hungary, Italy, the Philippines,
Britain, Argentina, South Africa, Brazil, Ukraine, Kenya, and Georgia. In 23
of the 47 countries assessed, freedom advocates scored at least one victory,
sometimes through the courts, resulting in censorship plans being shelved,
harmful legislation being overturned, or jailed activists being released.
Fourteen countries registered a positive trajectory, mostly democracies;
Tunisia and Burma experienced the largest improvements following dramatic
partial democratization.
In March 2013, Reporters Without Borders listed (in order) Finland,
Netherlands, Norway, Luxembourg, Andorra, Denmark, Liechtenstein, New
Zealand, Iceland, and Sweden as the ten freest countries for the world
press. It listed (in order) Eritrea, North Korea, Turkmenistan, Syria,
Somalia, Iran, China, Vietnam, Cuba, and Sudan as the ten least free. It
identified five states (Bahrain, China, Iran, Syria, Vietnam) whose online
surveillance results in serious human rights violations.
Pedagogy Box 14.21 International
Governance of ICTs
United States
Freedom House rated Estonia and the United States respectively as the first
and second freest countries for Internet freedom. U.S. courts have held that
prohibitions against government regulation of speech apply to material
published on the Internet. However, in almost every year from 2001 to 2012,
Freedom House noted concerns about the U.S. government’s surveillance
powers. For instance, under Section 216 of the USA PATRIOT Act, the FBI,
without warrant, can monitor Internet traffic. At the same time, an executive
order permitted the National Security Agency (NSA), without warrant, to
monitor American Internet use. In early 2012, campaigns by civil society and
technology companies helped to halt passage of the Stop Online Piracy Act
(SOPA) and the Protect Intellectual Property Act (PIPA), which they claimed
would have compromised personal privacy.
Since American companies dominate online sites and the software used
online, more restrictive countries clash with American companies and the
U.S. government. Google is the main private target: It provides the world’s
most popular search engine; Google’s Gmail is the world’s most popular free
online e-mail service; and Google provides or hosts popular free information
portals such as Google Maps and YouTube. Yet Google has been
more aggressively investigated and punished under antitrust and data privacy
legislation in Europe than in the United States, suggesting that the United
States is weak on violations of privacy, even while strong on freedoms of
information.
China
China is home to the world’s largest population of Internet users, but also the
largest system of controls, known as China’s Electronic Great Wall, although
these are implemented largely locally and inconsistently at the ISP level.
Users know they are being watched, which results in self-censorship and
evasion, including non-technical evasion, such as exploiting linguistic
ambiguities to confuse the censors. Freedom House (2012) reports that in
2011 alone, Chinese authorities tightened controls on popular domestic
microblogging platforms, pressured key firms to censor political content and
to register their users’ real names, and detained dozens of online activists for
weeks before sentencing several to prison. In March 2013, Reporters
Without Borders reported that China had again pushed Internet service
providers to help to monitor Internet users and stepped up its
countermeasures against anonymization tools.
Russia
Russian authorities were late to censor political opposition on the Internet,
having focused on using social media itself for misinformation, but in January
2011, massive distributed denial-of-service (DDOS) attacks and smear
campaigns were used to discredit online activists. In December 2011, online
tools helped antigovernment protesters to organize huge assemblies, but the
government signaled its intention to further tighten control over Internet
communications.
Iran
After disputed elections in 2009, in which protesters had used social media
to evade restrictions on freedom of assembly and on freedom of speech,
Iranian authorities upgraded content filtering technology, hacked digital
certificates to undermine user privacy, and handed down some of the harshest
sentences in the world for online activities, including the death penalty for
three social media activists.
ViewDNS, a site that monitors servers, estimates that the Iranian
government censors roughly 1 in 3 news sites and 1 in 4 of all sites on the
World Wide Web. All Iranian Internet service providers must buy bandwidth
from a state-owned company and comply with orders to filter out certain
websites, servers, and keywords. Iranian authorities also monitor social
media and prosecute noncompliant users. In April 2011, Iran announced
plans for an Iranian Internet (designated Halal, an Arabic religious term for
permissible), which is closed to the World Wide Web for all but official
users. In January 2012, Iranian authorities announced new regulations on
Internet access, declared the Halal Internet under official test (with a fully
operational target of March 2013), and opened a Halal online search engine
(designated to replace the Google search engine, which Iran accused of
spying). Iran blocked access to Google and Gmail in May 2012 after Google
Maps removed the term Persian Gulf from maps. For 1 week in September
2012, Iran blocked all Google-owned sites (Google, Gmail, YouTube, and
Reader). In February 2013, Iran selectively blocked some foreign servers
until, in mid-March, it blocked all VPNs, effectively including all Google
sites (ViewDNS, 2012).
Syria
The Syrian government controls the licensing of or owns all
telecommunications infrastructure, including the Internet, inside Syria. In
1999, the Syrian Telecommunications Establishment first invited bids for a
national Internet network in Syria, including extensive filtering and
surveillance, according to a document obtained by Reporters Without
Borders (2013). Following revolution in March 2011, Syria has increased its
domestic control of the Internet and targeted online opponents.
Egypt
In January 2011, during mass protests, the regime of President Hosni
Mubarak shut down the Internet across Egypt in order to stifle the protesters’
use of social media. The official effort took a few days to achieve; the result
was a significant stifling of online protest and organization, but also of
economic activity.
In February 2011, the Supreme Council of the Armed Forces (SCAF) took
executive power after the resignation of Mubarak, but mobile phones, the
Internet, and social media remained under vigorous surveillance, bandwidth
speeds were throttled during specific events, SCAF-affiliated commentators
manipulated online discussions, and several online activists were
intimidated, beaten, shot at, or tried in military courts for “insulting the
military power” or “disturbing social peace.”
Pakistan
In the 2000s, mobile phones and other ICTs proliferated in Pakistan and were
readily applied by citizen journalists and activists. Freedom House (2012)
reports that between January 2011 and mid-2012, official actions resulted in
“an alarming deterioration in Internet freedom from the previous year,”
including a ban on encryption and virtual private networks (VPNs), legislation
for a death sentence for transmitting allegedly blasphemous content via text
message, and blocks on all mobile phone networks in Balochistan province
for 1 day. After civil society advocacy campaigns, Pakistani authorities
postponed several other initiatives to increase censorship, including a plan to
filter text messages by keyword and a proposal to develop a nationwide
Internet firewall.
Additional restrictions on Internet freedom emerged in the second half of
2012: a brief block on Twitter, a second freeze on mobile phone networks in
Balochistan, and a new directive to block 15 websites featuring content about
“influential persons.” In September 2012, Pakistan banned the popular
video-sharing website YouTube after clips of the movie “Innocence of
Muslims” sparked protests throughout Pakistan. In June 2013, Pakistan
considered lifting the ban, but only if Google were to install a “proper
filtration system” to remove content Muslims may find offensive. Ahead of
general elections in April 2013, the government prepared sophisticated
Internet surveillance technologies.
Espionage
In a world of increasing cyber espionage, espionage continues in old-
fashioned ways that are easily neglected, such as surreptitiously recording or
eavesdropping on private conversations. Espionage is commonly interpreted
as an official activity directed against foreign targets, but as the sections
above on the sources of attacks and activities suggest, official authorities
continue to gain capacity and incentives for espionage on private actors too.
For instance, in February 2013, a U.S. military lawyer acknowledged that
microphones had been hidden inside fake smoke detectors in rooms used for
meetings between defense counsels and alleged terrorists at the U.S.
detention center at Guantanamo Bay, Cuba. The U.S. military said the
listening system had been installed before defense lawyers started to use the
rooms and was not used to eavesdrop on confidential meetings. The
government subsequently said that it tore out the wiring.
As shown above, any digital media (including telephones, Bluetooth
devices, computers) with a recording or imaging device can be used to
surreptitiously spy. Recording devices can be hidden inside other items, such
as buttons and keys.
Cyber espionage normally involves use of a computer network to access
information. Some attempts to physically or digitally bypass access controls
are detectable by protective software, but as the section on vectors
illustrates, many vectors permit a threat to bypass such controls. Once a
threat has gained the user’s identification and password, it should be able to
log in to a controlled space, such as a personal computer, e-mail account, or
online directory, within which the threat can browse for information.
Snooping and downloading describe such unwanted access within the
controlled space (Yannakogeorges, 2011, p. 259).
Pedagogy Box 14.22 U.S. Official
Definitions of Cyber Espionage
Cyber Sabotage
Cyber sabotage can disrupt private access to the Internet, damage private
computers, and cause damage to infrastructure on a national scale.
As described in the subsections below, cyber sabotage is aimed mainly at
denial of Internet service or sabotage of control systems.
For U.S. DOD (2012b), sabotage is “an act or acts with intent to injure,
interfere with, or obstruct the national defense of a country by willfully
injuring or destroying, or attempting to injure or destroy, any national
defense or war material, premises, or utilities, to include human and
natural resources.”
Denial of Service
A denial of service (DOS) attack aims to disrupt Internet sites, principally by
overloading the servers that provide the information on the site. The attacks
are strengthened by using multiple sources to deliver the malicious traffic, a
technique known as distributed denial of service (DDOS). Typically,
malware is delivered by virus or worm, shutting down servers from the
inside or taking over a network of computers (a “botnet” of “zombie”
computers) so that they send requests for information that overwhelm the
servers. An attacker would use a botnet in most cases, but also could recruit
colleagues or volunteers.
Most DOS attacks are aimed at particular organizations but some have
wider implications. In November 1988, a worm known as “Morris” brought
10% of systems connected to the Internet to a halt. In 2001, the “Code Red”
worm shut down or slowed Internet access for millions of computer users.
Denial-of-service attacks have become easier with increased availability
of Internet bandwidth and technical skills. Cloud-hosting structures give
attackers more processing power and more digital space in which to hide
their activities.
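One common defensive heuristic against such floods is a sliding-window rate limiter, which drops requests from any single source that exceeds a ceiling within a recent window. The sketch below is a minimal illustration, not drawn from the text; the window and ceiling values are assumptions. Note its limitation: because a distributed attack spreads requests across many sources, per-source limits alone are insufficient, which is exactly why attackers favor botnets.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10   # illustrative: how far back to look
    MAX_REQUESTS = 100    # illustrative: per-source ceiling within the window

    recent = defaultdict(deque)   # source IP -> timestamps of recent requests

    def allow_request(source_ip, now=None):
        """Return True to serve the request, False to drop it as a likely flood."""
        now = time.time() if now is None else now
        q = recent[source_ip]
        # Discard timestamps that have fallen outside the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_REQUESTS:
            return False
        q.append(now)
        return True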
Viruses, worms, and Trojan Horses each could inject malicious code into
other software, potentially causing the software to malfunction or to
execute destructive actions, such as deleting data or shutting down the
computer. A root kit is especially stealthy because it modifies the
computer’s operating system or even its kernel (core) (Yannakogeorges,
2011, p. 261). Software tampering or diddling is “making unauthorized
modifications to software stored on a system, including file deletions”
(Denning, 1998, pp. 33–34). A logic bomb is “a form of sabotage in
which a programmer inserts code that causes the program to perform a
destructive action when some triggering event occurs, such as terminating
the programmer’s employment.” (U.S. GAO, 2005b, p. 8)
Providing Security
The sections above, in reviewing the sources, vectors, and activities
associated with attacks on information, have provided much advice on
countering these things. This final section presents advice for higher
managers on providing information security in general and cyber and
communications security in particular.
Information Security
Advice on securing information has been swamped with advice on securing
digital information alone, but information security must have a wider scope
than digital information. Well-established advice separates information
security into “three components.”
Cyber Security
This book does not have space for a full review of the technical provisions
of cyber security, but many technical provisions are described above
following a description of particular threats.
In general, private actors can provide minimal cyber security via the
following defensive measures:
• monitoring insiders for noncompliant communication of information;
• analyzing incoming traffic for potential threats;
• consulting experts on threats, subscribing to documents issued by
hacking supporters, and monitoring online hacker forums;
• automated intrusion-detection systems;
• automated intrusion-prevention systems;
• automated logs of anomalous behaviors or penetrations and regular
audits of these logs;
• firewalls (effectively, rule-based filters of traffic: gates that admit
only information or requesters meeting certain rule-based criteria; see
the sketch after this list);
• antivirus software;
• antispam software; and
• monitoring for security patches.
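To expand the firewall entry in the list above: in essence, a firewall evaluates each connection against an ordered list of rules and falls back to a default action, preferably deny. A minimal sketch, with entirely hypothetical rules:

    # Each rule: (source-address prefix, destination port or None for any, action).
    # Rules are evaluated in order; the first match wins.
    RULES = [
        ("10.0.0.",  22,   "deny"),    # hypothetical: no remote shell from guests
        ("10.0.0.",  443,  "allow"),   # hypothetical: guests may use HTTPS
        ("192.168.", None, "allow"),   # hypothetical: trust the internal subnet
    ]

    def filter_connection(source_ip, dest_port):
        """Return 'allow' or 'deny' for a connection, defaulting to deny."""
        for prefix, port, action in RULES:
            if source_ip.startswith(prefix) and (port is None or port == dest_port):
                return action
        return "deny"   # default deny: anything unmatched is blocked

    print(filter_connection("10.0.0.7", 22))     # deny
    print(filter_connection("192.168.1.5", 80))  # allow
    print(filter_connection("8.8.8.8", 443))     # deny (no rule matches)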
Access Controls
Access controls are the measures intended to permit access into certain
domains or to certain information by the appropriate users but to deny access
to anyone else. Increasingly, cyber security experts advocate the “least
privilege principle”—according to which users are granted only the
permissions they need to do their jobs and nothing more.
Access controls on authorized users normally consist of a system that
demands an authorized user name and password. These data are much more
likely to be compromised at the user end of the system than at the server end.
The servers normally hold the users’ passwords in an encrypted form (a
password hash—a number generated mathematically from the password).
When the user attempts to log in, the server generates a hash of the typed
password that it compares to the stored hash. If they match, the server
permits access. The hash is difficult to crack, but some national threats have
the capacity. Future technologies promise better encryption, but also better
decryption.
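The hash-and-compare login check described above can be sketched in a few lines. This is a minimal illustration, not a production design; it assumes current common practice (a random per-user salt and a deliberately slow hash, here PBKDF2), details the text does not specify:

    import hashlib
    import hmac
    import os

    def new_credential(password):
        """Store only a salt and a slow hash of the password, never the password itself."""
        salt = os.urandom(16)
        stored_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, stored_hash

    def verify(password, salt, stored_hash):
        """Re-hash the typed password and compare it to the stored hash."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        # Constant-time comparison avoids leaking information through timing.
        return hmac.compare_digest(candidate, stored_hash)

    salt, stored = new_credential("a strong passphrase")
    print(verify("a strong passphrase", salt, stored))  # True: hashes match
    print(verify("wrong guess", salt, stored))          # False: access denied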
The requirement to maintain high grade cryptographic security will be
imperative for commercial, defense and security requirements. Potential
developments such as “quantum key distribution” will aim to guarantee
secure communication between users, preventing and also detecting any
information interception attempts. However, the advent of quantum
information processing, before the widespread application of quantum
encryption, may exponentially increase the speed and effectiveness of
attacks on data, meta-data structures, networks and underlying
infrastructures. Development of algorithms, such as Shor’s, will break
crypto keys with a one-way function, and make public key systems
vulnerable to attack, increasing the susceptibility of coded information
to be deciphered. Further challenges will arise if quantum computing
can be realized before 2040; potentially stagnating other developments
in either encryption or processing. (U.K. MOD, 2010a, p. 140)
Most threats are admitted to a secure space by some action on the part of
the user. The user may be noncompliant with a rule (such as do not share your
password) but much of the user’s work activity (such as communicating by e-
mail or attending conferences) implies inherent exposure to threats.
Consequently, most cyber security managers, having implemented controls on
access to or transfer of information, focus their attention on improving the
user’s compliance with secure behavior.
In theory, the advice is mostly simple (such as use only allowed software;
do not download from untrusted sources; keep your security system updated),
but one expert has warned that cyber security relies too heavily on users.
Since most users lack the knowledge to discover bugs on their own, they
rely on patches and updates from software manufacturers. They then
have to know how to fix the problem themselves, and vigilantly keep
tabs on whether or not their information systems are running software
with the latest security patches installed. This focus on the user
protecting his or her own information system by vigilantly patching their
systems is perhaps the greatest pitfall of current cyber security systems.
(Yannakogeorges, 2011, p. 261)
Defense
Systems with more bandwidth, processing power, or on-demand access to a
cloud are less vulnerable to denial of service attacks. Operating systems can
be configured to disable services and applications not required for their
missions. Network and application firewalls can be designed to block
malicious packets, preferably as close to the source as possible, perhaps by
blocking all traffic from known malicious sources. Prior intelligence about
the threats, their domains, and their IP addresses can help to prevent attacks.
System managers should coordinate between Internet service providers,
site hosting providers, and security vendors. Contingency planning between
these stakeholders before an attack can help during management of the
response. Some stakeholders treat an attack as an incident triggering a
business continuity plan or human-caused disaster response plan.
Deterrence
Increasingly, officials are seeking to deter as well as defend, to punish as
well as to prevent, although this is usually an option that must involve a
judicial authority or a government. For instance, on May 31, 2011, U.S.
officials first warned publicly and officially that harmful cyber attacks could
be treated as acts of war, implying military retaliation. On September 18,
2012, U.S. Cyber Command hosted a public conference at which the State
Department’s legal adviser (Harold Koh) stated that the United States had
adopted and shared through the UN ten principles, of which the most
important implications included
SUMMARY
QUESTIONS AND EXERCISES
15
Transport Security
This chapter considers the security of transport. The three main sections
below consider ground transport security, aviation security, and maritime
transport security (including countering maritime piracy).
Scope
Ground transportation is mostly road and railway transportation but includes
pedestrian and animal carriers, which tend to be more important in
developing countries.
Maritime and aerial forms of transportation tend to be more secure
because they are less coincident with malicious actors (at least between
ports or stations) and have more restrictive controls on access (at least at
ports and stations), but are more expensive up-front (even though they are
usually cheaper to operate in the long term). Short-range aviation (primarily
helicopters and small fixed wing aircraft) offers accessibility and speed, but
it is much more expensive to operate and is exposed to short-range ground-
to-air weapons.
Road transportation is commonplace and will remain commonplace
because of private favor for the accessibility and freedom of roads, even
though road transport is expensive (at least operationally, over longer
distances), suffers frequent accidents, and is widely coincident with
malicious actors.
Railways are more efficient and safer than roads, but some authorities
cannot afford the upfront investment, in which case they invest in roads and
bus services, even though these are more operationally costly and harmful in
the long term. Railway lines are often more important economically and
socially at local levels and in larger, less developed countries, where the
railway is the only way to travel long distances or through rough terrain
(short of using slow animals or expensive off-road vehicles or aircraft).
In insecure or underdeveloped areas, road transportation is often the only
means of transportation, after the collapse of the infrastructure required for
aviation and railway alternatives. Consequently, road transportation becomes
more important in unstable or postconflict areas, even though road transport
remains very exposed to threats with simple weapons and skills.
Sometimes, road transportation is the only option for routine
communications, such as in the mountainous areas of Afghanistan and
Pakistan, where the poor weather, thin air, and insurgent threats discourage
use of even helicopters. Indeed, some of these areas are so underdeveloped
as to stop automobiles altogether, leaving pedestrians and equines (primarily
donkeys) as the only reliable routine means of communications between
bases and the most remote official and military outposts.
On June 15, 1991, militants attacked two trains near Ludhiana, India, with
firearms, killing nearly 80 people.
On July 25, 1995, a device exploded in a commuter railway station in
Paris, killing eight and injuring 80. On August 26, a device was defused
on the tracks of a high-speed railway line near Lyon. On October 6, a
device exploded inside a Paris underground railway station, wounding
13. On October 17, a device exploded on a commuter train in Paris,
wounding 29.
On July 24, 1996, four bombs planted by the Liberation Tamil Tigers
on a commuter train in Dehiwala station, Colombo, Sri Lanka, killed 64
and injured more than 400 people.
On September 10, 2002, a train from Howrah to Delhi, India, was
derailed on the bridge over the Dhave River in Bihar, probably by a local
Maoist group, killing more than 130. On March 13, 2003, a bomb
exploded on a train as it pulled into Mulund railway station, India, killing
10 people and injuring 70.
In 2003, Iyman Faris, a naturalized U.S. citizen from Pakistan, was
apprehended in New York after plotting, with al-Qaida sponsorship, to
bring down the Brooklyn Bridge and derail a train. In 2004, Shahawar
Matin Siraj, a Pakistani immigrant, and James Elshafay, an American with
Egyptian and Irish immigrant parents, were arrested after an informer
encouraged and recorded their plot to bomb a subway train station in
Manhattan.
On March 11, 2004, within 3 minutes, most of a dozen improvised
explosive devices detonated across four commuter trains, inside or just
outside a single commuter train station in Madrid, Spain, killing 191
people and injuring more than 2,000. At least four Spanish Jihadis were
involved; more than 3 weeks later, during a police raid on their residence,
these four men blew themselves up, killing a policeman. They
were at least inspired by al-Qaida; they may have been assisted by other
Jihadis who escaped.
On July 7, 2005, suicide bombers detonated themselves on each of
three underground trains in London, within seconds of each other, killing
39 people and the three bombers. Almost 1 hour later, a fourth bomber
detonated on a bus, probably after an electrical failure prevented his
device from exploding on a train along with the others. He killed himself
and 13 others. Two weeks later, two Somalis, one Ethiopian, and one
Eritrean-born British citizen attempted to copy the 7/7 attacks, but their
devices failed to explode. Like the bombers in Madrid, the British
bombers were all first- or second-generation immigrants, Muslims (one
was a convert), some with probable terrorist training abroad. The
Spanish bombers used dynamite procured illegitimately from miners,
while the British bombers used liquid explosives produced from
hydrogen peroxide.
On July 28, 2005, an explosion on an express train, leaving Jaunpur,
Uttar Pradesh, India, for Delhi, killed 13 and injured more than 50.
On July 11, 2006, seven bombs within 11 minutes across seven trains
in Mumbai, India, planted by Lashkar-e-Taiba (an Islamist terrorist group
based in Pakistan), killed 209 and injured more than 700 people.
On November 20, 2006, an explosion on a train between New
Jalpaiguri and Haldibari in West Bengal, India, killed five.
On February 18, 2007, bombs detonated on the Samjhauta Express
Train, soon after leaving Delhi in India for Lahore in Pakistan, killing 68
and injuring more than 50 people. The main perpetrator was probably
Lashkar-e-Taiba.
In 2006, Lebanese authorities arrested Assem Hammoud on evidence
gathered in cooperation with the Federal Bureau of Investigation (FBI)
that he was plotting with Pakistani terrorists for suicide attacks on trains
between New Jersey and New York.
In 2009, Najibullah Zazi, a childhood immigrant to the United States
from Afghanistan, and two high-school friends (one another immigrant
from Afghanistan, the other from Bosnia) were arrested close to
implementing a long-planned al-Qaida-sponsored plot to blow
themselves up on subway trains in New York.
On November 27, 2009, a high-speed train was derailed by a bomb
near the town of Bologoye on its way from Moscow to Saint Petersburg,
causing 27 deaths and about 100 injuries.
On April 22, 2013, Canadian authorities arrested a Tunisian immigrant
and an ethnic Palestinian immigrant for an alleged plot, sponsored by al-
Qaida, to derail a train between Toronto and New York.
The program allocated grants to truckers worth $4.8 million for the
first 2 fiscal years (2005–2006), $11.6 million in 2007, $25.5 million in
2008, and $7 million in 2009, before termination in 2010.
Meanwhile, critics of the government’s focus noted that road traffic
accidents pose a much greater risk than terrorism and that their long
decline was leveling off. Critics also noted that the infrastructure was
aging (the greatest surge in road building was in the 1960s) with little
funding for replacement. Infrastructure failures are very rare, although the
risk seems to be increasing. For instance, in 2007, a bridge carrying the
Interstate 35W highway across the Mississippi River in Minneapolis,
Minnesota, collapsed, killing 13 and injuring around 100.
Navigation
Good navigation saves time in transit and thus reduces exposure to the risks
in the system and reduces wear to the system. Navigation is also important to
avoiding and escaping particular threats. Users of the transport system should
be advised how to avoid natural hazards. In unstable or high crime areas,
drivers and passengers should be trained to evade malicious roadblocks,
hijackers, and other threats. The best routes of escape and the places to
gather in an emergency should be researched and agreed in advance of travel.
Personnel should have access to suitable maps marked with the agreed
bases, other safe areas (such as friendly embassies), escape routes, and
rendezvous locations. Compasses are useful acquisitions for each person and
automobile. Where the budget allows, each vehicle could be acquired with
an electronic navigation system (a Global Positioning System receiver
computes its location from signals sent by earth-orbiting satellites; an
inertial guidance system, using motion and rotation sensors, plots movements
based on the vehicle’s attitude and speed), although personnel should also
be trained to read a paper map in case the system fails.
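The inertial principle mentioned above can be illustrated by dead reckoning: integrating heading and speed over time to maintain a position estimate without external signals. A minimal sketch on a flat-plane approximation, with hypothetical numbers:

    import math

    def dead_reckon(x, y, heading_deg, speed_mps, dt_s):
        """Advance an (x, y) position in meters given heading, speed, and elapsed time."""
        heading = math.radians(heading_deg)   # 0 = north, 90 = east
        x += speed_mps * dt_s * math.sin(heading)
        y += speed_mps * dt_s * math.cos(heading)
        return x, y

    # Hypothetical leg: 60 seconds due east at 15 m/s.
    x, y = dead_reckon(0.0, 0.0, 90.0, 15.0, 60.0)
    print(round(x), round(y))   # 900 0 -- about 900 m east of the start point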
Communications
Vehicles should be equipped with radio or telephone communications so that
passengers can communicate with emergency services or a base in case of
any emergency while in transit. Vehicles can be equipped with tracking
technology in case a vehicle is hijacked or the passengers otherwise lose
communications (trackers are simple and cheap enough to be widely used to
track vehicles in commercial operations). Some trackers can be configured to
communicate with base if they sense that the vehicle has been involved in an
accident or the driver has been away from the vehicle outside of the
programmed schedule. Passengers too can be equipped with trackers—
usually in their clothing or mobile phones.
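The tracker alerting described above amounts to a simple condition over the sensor inputs and the programmed schedule. A minimal sketch; the schedule window and argument names are hypothetical:

    from datetime import datetime, time

    # Hypothetical programmed schedule: the driver may be away 12:00-13:00.
    AWAY_WINDOW = (time(12, 0), time(13, 0))

    def should_alert(crash_detected, driver_present, now):
        """Return True if the tracker should signal the base."""
        if crash_detected:
            return True
        scheduled = AWAY_WINDOW[0] <= now.time() <= AWAY_WINDOW[1]
        return (not driver_present) and (not scheduled)

    print(should_alert(False, False, datetime(2013, 5, 1, 9, 30)))   # True: away off-schedule
    print(should_alert(False, False, datetime(2013, 5, 1, 12, 30)))  # False: scheduled stop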
Vehicle Survivability
Typically, vehicle manufacturers and users must fulfill some obligations for
the safety of vehicles in terms of their reliability and the passenger’s
survivability during an accident. The vehicle’s survivability under malicious
attack is a dramatically more challenging requirement. The subsections
below explain why the requirement has increased, how to improve resistance
to kinetic attacks, how to improve blast resistance, how to control access to
vehicles, the balance between overt survivability and stealth, and the
personal aid equipment that should be carried.
Requirement
The demand for more survivable vehicles has risen dramatically in response
to increased terrorism and insurgency. For instance, prior to the terrorist
bombings of U.S. Embassies in Kenya and Tanzania in 1998, the U.S.
diplomatic service provided around 50 armored vehicles for chiefs of
mission at critical and high-threat posts. Thereafter, the service prescribed at
least one armored car for every post. By 2009, the service had acquired
more than 3,600 armored vehicles worldwide, including 246 vehicles for
chiefs of mission (U.S. Government Accountability Office [GAO], 2009, pp.
13, 24). Meanwhile, as insurgencies in Afghanistan and Iraq grew in quantity
and quality, operators there realized requirements for more survivable
vehicles of all types, from the smallest liaison vehicles to large “force
protection” vehicles and logistical vehicles. The period of most rapid
acquisition of armored vehicles was in 2007.
The North Atlantic Treaty Organization (NATO) long ago agreed on
standards of protection for military vehicles that are widely used to define
the survivability of all vehicles (see Table 15.1). The standard (STANAG
4569) specifies five protection levels:
• most available vehicles do not meet Level 1;
• most of the military armored vehicles (including wheeled and tracked
armored personnel carriers) that were acquired through the Cold War do
not surpass Level 1;
• most armored vehicles fall within Level 2;
• a few of the larger wheeled vehicles (normally six- or eight-wheeled)
fall within Level 3, including the mine-resistant ambush-protected
vehicles that were widely acquired in the 2000s;
• light tanks and infantry fighting vehicles lie within Level 4; and
• only main battle tanks surpass Level 5.
No armor is proof against all threats and uncomfortable trade-offs must be
made between protection, mobility, and expense.
Blast Resistance
Vehicles can be designed and constructed to be dramatically more survivable
against blast, which is typically produced by chemical explosives hidden on
or in the ground. This blast resistance was the main capability offered by a
class of vehicles known as “mine-resistant ambush-protected” (MRAP)
vehicles in the U.S. military and “Heavy Protected Patrol Vehicles” in the
British military, which were urgently required in Afghanistan and Iraq from
the mid-2000s. From 2005 to 2009 alone, the U.S. military urgently ordered
more than 16,000 MRAPs. The U.S. Army’s National Ground Intelligence
Center’s Anti-Armor Incident Database suggests that an MRAP vehicle
reduced interior deaths, compared to an armored Humvee, by between 9
times (Afghanistan) and 14 times (Iraq) (based on data for the average
number of troops killed per explosive attack on each vehicle, 2005 to 2011).
However, MRAPs are more than four times more expensive than the
vehicles they replaced, about twice as heavy, larger, slower, less mobile, and more
burdensome to sustain. Their size meant that often they could not fit in
confined urban areas or on narrow roads, while their weight often caused
roads or bridges to collapse. Their height contributed to higher rates of roll-
overs (such as when roads collapsed and a vehicle tumbled into a ravine).
Often they were confined to good roads (where insurgents could more easily
target them). In 2009, the U.S. military required, especially for Afghanistan,
another 6,600 vehicles of a smaller more mobile class of blast resistant
vehicle, designated MRAP all-terrain vehicles (M-ATVs). Also in 2009, the
British required a similar class that they called Light Protected Patrol
Vehicle. By mid-2012, around 27,000 MRAPs and M-ATVs had been
produced to urgent orders, of which 23,000 had been deployed to
Afghanistan and Iraq.
In practice, most operators and situations require all classes of vehicle:
MRAPs should patrol and escort on the good roads and in the spacious urban
areas, but M-ATVs are required for poorer terrain. However, MRAPs and
M-ATVs each remain imperfect trade-offs; M-ATVs proved insufficiently
survivable for some roles, so more than 6,000 of them were upgraded to be
more resistant to blast from beneath.
Areas facing blast should be made from tougher materials and can be
filled with energy-absorbing materials or constructed to collapse gracefully
(although these materials tend to reduce interior space). Higher ground
clearances increase the distance between ground-based blast and the
vehicle’s interior, although tall vehicles tend to roll more easily and to be more
difficult to hide. The bottom armor of the vehicle should be v-shaped so as to
deflect blast to the sides and should contain the automotive parts so that they
do not separate as secondary missiles. Monocoque hulls (where the same
structure bears all loads and attachments) eliminate some of the potential
secondary missiles associated with a conventional chassis. Wheel units
should be sacrificial, meaning that they separate easily under blast without
disintegrating further, taking energy away from the vehicle interior without
producing further secondary missiles. The passenger compartments should
be protected as compartments separate from the automotive and engine
compartments. Interior passengers should be seated on energy-
absorbing or collapsible materials, or suspended from the roof, to reduce the
energy transmitted from below into the passenger’s body. Foot rests and foot
pedals also should be energy absorbing (otherwise they would transmit
energy that could shatter the legs) without separating as secondary missiles.
Access Controls
Vehicles need apertures for human ingress, egress, and visibility, but
apertures increase the interior’s exposure to sudden ingress of projectiles,
human attackers, and thieves. Vehicles should be secured from unauthorized
access by specifying locks on all doors and hatches, and windows
constructed from a puncture-resistant material. Door and hatch hinges should
be designed and constructed to be resistant to tools. In hot environments or
prolonged duties, crews will tend to leave doors and hatches open for
ventilation, where they can be surprised, so an air-conditioning system
should be specified. Rules on closing and locking hatches and leaving at
least one guard with a vehicle should be specified and enforced.
When civilian vehicles are converted to armored versions, some minor
upgrades may be forgotten. For instance, on February 15, 2011, two U.S.
Immigration and Customs Enforcement agents were shot (one was killed)
inside an armored civilian vehicle in northeastern Mexico after the driver
was forced off the road by armed threats (probably robbers targeting an
expensive vehicle). Unfortunately, the vehicle was configured to unlock the
doors automatically when the driver put the transmission in “park” (a
typical safety feature in ordinary cars), which allowed the threats to open
a door; during the struggle to close and relock the door, the window also was
lowered, through which the threats fired some bullets. Once the door and
window were secured, the vehicle survived all further bullets (around 90),
but the harm to the two agents was already done.
Operators face a choice between complying with external requests to stop
and ignoring such requests in case they disguise malicious intent as
official duties or requests for assistance. For instance, on August 24, 2012, in
Mexico City, two U.S. agents (probably from the U.S. intelligence
community) were wounded by some of around 30 bullets fired by Mexican
federal police during a chase after the Americans refused to stop at a
checkpoint, probably influenced by the event in 2011. They were driving in a ruggedized,
armored vehicle, but its rear wheel and some of its apertures did not survive
the bullets. A Mexican passenger was unharmed.
Stealth
Operators face a choice between armored, ruggedized vehicles and vehicles
that do not attract as much attention. Some operators have very contrasting
preferences in the trade-off between visible deterrence or defense and
stealth, with some operators insisting on travelling everywhere alone in
randomly hailed taxis, while others insist on travelling nowhere without
visibly armed escorts and armored vehicles. Some operators like to hire
local vehicles that resemble the average local vehicle and to remove any
branding from their vehicles, while others prefer to procure more robust and
armored vehicles, even though they stand out from most other vehicles.
Commercial interests, home funder requirements, and local laws may force
operators to display their branding and ride in specialized vehicles,
whatever the cost in stealth.
Mobility
Survivable vehicles are normally acquired with run-flat tires: pneumatic
tires with solid or rigid cores that will continue to run for dozens of miles
after a puncture. The pneumatic tire may be reinforced with a
tear- and puncture-resistant material. Given a sudden change in threat level,
pneumatic tires can be replaced with solid rubber tires, which cannot be
punctured (although they can be chipped), or filled partly with water, which
helps to dampen chemical blast, although their extra weight and reduced
flexibility transfer more vibration and wear to the vehicle.
Armoring the vehicle and adding equipment implies an added load, which
implies a need for upgraded running gear, both to carry the load and to
permit running over rougher ground and inferior roads in case the vehicle
needs to escape threats on better roads. The U.S. President’s limousines offer excellent armor
protection around a voluminous passenger compartment, but they do not offer
good off-road capabilities. On May 23, 2011, the President’s spare
limousine for his official visit to Ireland grounded on a small hump in the
gateway leaving the U.S. Embassy in Dublin and was temporarily abandoned
in front of crowds of spectators and journalists. Most civilian armored
vehicles are based on chassis designed for off-road use, while military
armored vehicles are based on more specialized platforms.
Still, procurers must trade expense and mobility against survivability, so
survivable vehicles tend to be very expensive with short life cycles. For
instance, as of October 2009, 914 (32%) of U.S. diplomatic armored
vehicles were in Iraq, at a procurement cost of about $173,000 with a life
cycle of just about 3 years due to Iraq’s difficult terrain (U.S. GAO, 2009).
Increased survivability implies increased risks associated with accidents
and reduced mobility. Increased armor and equipment on and in the vehicle
implies reduced internal space, which implies more heat stress,
biomechanical stress, and acceleration injuries. Reduced mobility implies
that the vehicle is restricted to the best terrain, helping threats to target the
vehicle. Increased protection also implies more separation between the
passengers and locals on the ground, thereby alienating locals and
interrupting opportunities for local engagement and intelligence.
Aviation Security
This section concerns civilian aviation security. The sections below explain
the scope of civilian aviation, aviation accidents, sovereign threats, aviation
terrorism, and the provision of aviation security.
Scope
Civilian aviation covers commercial transportation of cargo by air,
commercial carriage of passengers by air, privately owned and operated
aircraft, and all associated infrastructure, such as airfields and service and
support facilities.
About 28,000 flights take off from the United States per day, accounting
for half of global commercial air traffic. In a year, commercial flights
carried about 600 million people and 10 million tons of air freight within
the United States or between the United States and another country.
In 2011, DHS counted as aviation infrastructure: 19,576 general
airports (including heliports), 211,450 general aviation aircraft, 599
airports certified to serve commercial flights, including 459 federalized
commercial airports (of which Guam’s is the most remote from the
continental United States).
At the 459 federalized airports, 43,000 Transportation Security
Officers and 450 Explosives Detection Dogs from the U.S. TSA work at
more than 700 security checkpoints and 7,000 baggage screening points.
In 2006 (the last year for which numbers are available), the TSA
screened 708,400,522 passengers on domestic flights and international
flights coming into the United States. This averages out to over 1.9
million passengers per day (data source: TSA).
Accidents
Air accidents make up a specialized subject across engineering, industrial
psychology, and policy science, for which this book has insufficient space,
but the risks of air accidents should be acknowledged here as low. Compare
the section above on road traffic accidents: Air safety is rigorously regulated
and inspected, whereas individual car owners and drivers effectively
regulate their own safety, outside of infrequent and comparatively superficial
independent inspections. Consequently, an average aircraft flight is much less
likely to cause a fatality than an average car journey. According to data from
the U.S. Centers for Disease Control, fatalities compute at about 0.00001 per
flight (roughly 1 death per 100,000 flights). Yet the fatalities of air travel
seem large because an aircraft typically carries more passengers than a car,
so a catastrophic failure in an aircraft tends to kill more people per failure.
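The arithmetic behind such a per-flight rate is worth making explicit. The sketch below uses the flight count from the boxed statistics above; the annual fatality figure is hypothetical, chosen only to show how a rate near 0.00001 per flight arises:

    flights_per_day = 28_000                  # from the boxed U.S. statistics above
    flights_per_year = flights_per_day * 365  # about 10.2 million flights

    deaths_per_year = 100   # hypothetical, for illustration only

    rate_per_flight = deaths_per_year / flights_per_year
    print(f"{rate_per_flight:.7f} deaths per flight")   # ~0.0000098, about 0.00001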
Aviation Terrorism
Passengers use ground transport more frequently, and most cargo is carried in
ships, so rationally terrorism would be more cost-effective if it targeted
ground or maritime transport, yet terrorists like to target passenger airliners.
Attacks on passenger airliners offer catastrophic direct effects, great human
harm (one airliner could carry 850 passengers), and major indirect economic
effects. Additionally, airliners are mostly Western-produced, -owned, -
operated, and -used. Airliners are symbols of globalization and material
development, which some terrorists, particularly Jihadi terrorists, oppose.
Cargo
The security of air cargo is addressed below via its scope, the threats, and
the controls.
Scope
Global air freight is worth about $100 billion per year. Due to increased
security after 9/11, air freight costs rose about 15% from 9/11 to January
2002 (according to the World Bank).
Security measures have always focused more on passenger than freight
aircraft because terrorists have attacked more passenger than cargo planes,
even though rationally they could more efficiently damage the economy by
shutting down air freight. (See above for reasons why terrorists prefer to
attack passenger aviation.)
The security of air cargo is an issue for passenger aviation too, since most
passenger aircraft also carry cargo (usually smaller, faster mail), while bulk
items travel in dedicated cargo planes. In the United States, about 80% of air
cargo is carried by cargo-only domestic flights, 20% by passenger flights.
About 3.5 million metric tons per year travels as cargo on U.S. domestic and
international passenger flights.
Threats
The most frequent illegal activities relating to air cargo relate to smuggling
and trafficking, including banned drugs, exotic and protected animals and
plants, banned animal products, firearms, and gems. Sometimes, smuggling
exposes how easily large and animated items can be carried as air cargo
without discovery. For instance, in 2003, a young man sealed himself into a
box that was carried as air freight from New York City via Niagara Falls and
Fort Wayne, Indiana, to his parents’ home in Dallas, through the hands of
several commercial haulers, undetected.
Of more concern than private stowaways are hidden explosives that could
cause a catastrophe that destroys an aircraft or the storage area, kills
personnel or (if an aircraft is destroyed in the air) people on the ground, and
disrupts air transport, with huge economic consequences. A small device,
smaller than a briefcase, would be small enough to hide inside the typical
large freight containers and still threaten the aircraft catastrophically,
although the blast would be mitigated by surrounding cargo. The larger
containers on dedicated cargo planes are of concern for the carriage of
weapons of mass destruction that would be too heavy or too easily detected
to be loaded as passenger luggage on a passenger flight.
In recent years, terrorists have attempted to mail devices that were
designed (probably using a timed detonator) to explode in mid-air over the
destination country. In October 2010, after a tip-off derived from a
multinational intelligence operation directed against al-Qaeda in the Arabian
Peninsula and American exile Anwar Al-Awlaki, screeners at East Midlands
airport in England and at Dubai airport discovered explosives hidden inside printer
toner cartridges on UPS and FedEx flights respectively, both mailed from
Sanaa, Yemen, to addresses in the United States, and configured to detonate
over the U.S. east coast. On October 30, 2010, Britain banned all air cargo
from Yemen and Somalia. On the same day, the U.S. DHS banned all air
cargo from Yemen. Later (November 9), it banned all air cargo from Somalia
and all “high-risk” cargo on passenger planes.
In October and November 2010, a Greek anarchist group mailed bombs to
various embassies and foreign leaders, including two that were intercepted at
Athens airport before being loaded as air freight.
In March 2011, Istanbul airport discovered a dummy bomb inside a
wedding cake box that had arrived on a UPS flight from London. British
police arrested a Turkish man on suspicion of a hoax. On June 17, 2011, the
British Department for Transport banned UPS from screening air cargo at
some facilities in Britain until it could meet British security requirements.
Controls
Since 9/11, all cargo is supposed to be screened before loading on to
passenger aircraft within or inbound to the United States (this is the same
rule as for passenger baggage). Cargo on nonpassenger aircraft faced lighter
controls for several years after 9/11 so that most air cargo was practically
not inspected during transport. Effective October 2003, the U.S. Customs and
Border Protection agency required electronic submission of the manifest, if
the plane originates abroad, at least 8 hours before an air courier boards or
12 hours before any air cargo is loaded abroad. The 9/11 Commission Act of
2007 required the TSA to screen 50% of cargo on cargo-only flights by
February 2009 and 100% by August 2010. The TSA screens all packages from
Afghanistan, Algeria, Iraq, Lebanon, Libya, Nigeria, Pakistan, Saudi Arabia,
Somalia, and Yemen.
Businesses generally oppose more security because of the costs and the
delays to commercial flows. Many commodities and raw materials must
travel quicker or more sensitively than routine inspections would allow.
Consequently, officials have focused on acquiring technologies that allow
quicker, nonintrusive inspections or intelligence that enables more targeted
inspections. In 2009, the TSA piloted (at Houston) a Pulsed Fast Neutron
Scanner (which can differentiate materials at the molecular level), but the
unit cost $8 million, and the funding covered just a few months of operating costs.
Luggage
Before 9/11, much checked luggage (luggage intended for the hold, separate
from the passenger cabin) was not inspected or screened at all, although higher-
risk airlines (such as the Israeli and Jordanian national airlines) screened all
luggage for explosives and temporarily passed all luggage through a pressure
chamber before loading on to aircraft (because some explosive devices had
been improvised with detonators triggered by changes in pressure as the
aircraft climbed into the air).
After 9/11, the United States ordered a rate of 100% inspection of anything
loaded on to passenger aircraft, including checked luggage, involving at least
x-ray screening for explosives. Initially, the promise was 100% manual
inspection, but that promise fell away given the burden. Information on the
true rate of manual inspection remains guarded.
Metallic screening is less important for checked luggage than for carry-on
baggage, because the passenger has no access to the hold, so any metallic
weapons hidden there would be an issue of trafficking rather than hijacking.
Liquids Screening
Terrorists have planned to use liquid explosives, to be prepared on the flight
using materials carried aboard without alerting the screeners. According to
the plot intercepted in August 2006, the main component would have been
hydrogen peroxide, hidden inside commercially available bottles, whose
tops remained sealed because the terrorists had replaced the contents using
syringes stuck through the plastic sides. A detonator would have been
disguised as the battery inside a disposable camera. Another disposable
camera could have served as the electrical ignition source. Intelligence
during the planning stage led to arrests that prevented the plot of August 2006
from being finalized, but, under the regulations of the time, the materials
probably would have passed the access controls.
The initial solution was to ban liquids entirely from passing through the
access controls (although passengers could purchase more from the secure
area of the airport). This led to long lines as baggage and persons were
inspected manually, and some farcical confrontations (such as intrusive
inspections of colostomy bags, breast milk, and hand creams), before
screening technology was adjusted to better detect liquids, agents adapted
their procedures, and the public became more familiar with the new
regulations. Soon the rules were adjusted to allow liquids in small
containers, although cynics have pointed out that enough terrorists travelling
with enough small containers could carry enough materials to make the same
explosive device that had been planned in August 2006.
In 2009, the TSA procured 500 bottled-liquid scanners in a $22 million
contract with a commercial supplier. It deployed more than 600 of the
scanners to airports nationwide in 2011 and 1,000 by 2013. These scanners
are used to screen medically required liquids (data source: TSA).
Clothing Screening
Since 9/11, authorities have demanded more screening of passengers and
their clothing by x-ray and evolved systems, but are opposed by passengers
who prefer to remain covered, on personal, cultural, or religious grounds.
Thick or outer garments are usually passed through Explosive Detection
Systems—conveyer-fed machines, often described inaccurately as x-ray
machines but using evolved technology in order to differentiate explosive
materials. However, they are unlikely to differentiate small amounts, as could
be distributed thinly within the lining of heavy clothing, or trace amounts, as
left behind when someone handles explosives. Some suppliers have offered
hand-held electronic devices for detecting the chemical traces, but these are
ineffective unless practically touching the person. From 2004 to June 2006,
the U.S. TSA acquired 116 full-body explosives sniffers (“puffers”) at 37
airports, despite poor tests of effectiveness. They were deleted because of
poor detection and availability rates, after a sunk cost of more than $30
million.
Since 2009, the TSA’s systems for detecting explosives have included
table-top machines (Explosives Trace Detection machines) for detecting
explosive residues on swabs. In February 2010, TSA announced that these
machines would be deployed nationwide. These machines are used mainly
for random or extra inspections of clothing worn by the passenger.
Explosives Detection Dogs are the best sensors of explosives, although
they are hazardous to some people and have short periodic work cycles, so
could not screen lines of people efficiently or effectively. They are used
mostly for random patrols on the land side.
Body Screening
Body imagers produce an electronic image of the person’s body underneath
their clothes and can be configured to reveal items hidden between the body
and clothing. They are quicker than a manual inspection (about 30 seconds to
generate the scan and inspect the image, compared to 2 minutes for a pat-
down, so one imager can process roughly four times as many passengers per
hour), but are expensive and violate common expectations of privacy and
health security.
Body imagers come in two main technologies: backscatter x-ray and
millimeter wave. Backscatter units are less expensive but still costly (about
$150,000 per unit at the time of first acquisition). They use a flat
source/detector, so the target must be scanned from at least two sides (the
system looks like a wrap-around booth inside of which the passenger stands
facing one side). Many passengers are understandably reluctant to subject
themselves to a backscatter x-ray, having been told to minimize their
exposure to x-rays except for medical purposes. Additionally, health
professionals have disputed official claims that the energy emitted by a scan
is trivial. Meanwhile, reports have emerged that backscatter images do not
adequately penetrate thick or heavy clothing; some entrepreneurs have
offered clothing to shield the body from the energy, casting doubt on the
imagers’ effectiveness at their main mission.
Millimeter wave systems produce 360-degree images, so are quicker, and
their images are more revealing, but they are more expensive, their health
effects are less certain, and their revealing images are of more concern for
privacy advocates. Authorities claim that software is used to obscure
genitalia, but obfuscation of genitalia would obscure explosives hidden in
underpants. Operators generally keep the images and the agent hidden inside
a closed booth and promise not to record any images. However, these
measures do not resolve all the ethical and legal issues. For instance, British
officials admitted that child pornography laws prevented scans of people
under 18 years of age.
Backscatter imagers have been available commercially since 1992. They
have been deployed in some U.S. prisons since 2000 and in Iraq since 2003,
but at no airports before 2007. The United States trialled one at Phoenix
airport in February 2007, then deployed 40 at 18 airports before 2010. The
United States also donated four to Nigeria in summer 2008. Britain trialled
one at London Heathrow airport in 2007. By then, officials had already come
to prefer millimeter wave systems. The Netherlands trialled three
millimeter wave systems at Schiphol airport in 2007. Canada trialled one in
fiscal year 2008. Britain trialled some at Manchester from December 2009.
Body imagers of all types were deployed slowly and restrictively because
of privacy, health, and cost issues. In 2009, the GAO faulted the TSA for
poor cost-benefit analysis: The agency’s plan to double the number of body
scanners in coming years would require more personnel to run and maintain
them—an expense of as much as $2.4 billion. Until 2010, all of the deployed
machines were used for secondary inspections only, as a voluntary
alternative to a manual inspection, which itself was occasional.
After the new controls on outer clothing and liquids, terrorists planned to
hide explosives in underpants (as worn by Umar Farouk Abdulmutallab in
December 2009). He passed screeners and boarded an aircraft to the United
States but failed to detonate catastrophically (although he burnt himself),
probably because damp had degraded the explosive.
Within days of this attack, the U.S., British, and Dutch governments required scans of all U.S.-bound passengers or a manual inspection of the outer body (a pat-down). Within one year, more than 400 body imaging machines had been deployed at 70 of the 450 airports in the United States. Today, some of these major airports require all passengers to pass through imaging machines or opt out in favor of a manual inspection, although cynics noted that the requirement was sometimes abandoned during heavy flows of passengers.
After years of more access controls and delays at airports, passengers
rebelled most against body imagers and intrusive pat-downs. Certainly some
of the inspections were farcical or troubling, such as an agent inspecting a
baby’s diaper or making contact with an adult’s genitalia during a pat-down.
In November 2010, private citizens launched “We Won’t Fly,” essentially an
online campaign against excessive access controls, including a National Opt-
Out Day on Thanksgiving, 2010, although few passengers opted out on
America’s busiest travel day. Polls showed that the public disliked the new
procedures but also thought them necessary. Meanwhile, in November 2010
and repeatedly in subsequent months, U.S. Representative Ron Paul
(Republican from Texas) introduced a bill, the American Traveler Dignity
Act, to hold officials accountable for unnecessary screenings.
Although officials would not admit that procedures can change due to public pressure rather than changes in threat, in fact security is an evolving balance among commercial, popular, and official requirements. Universal body imagers and pat-downs probably represented the high tide of controls on access to passenger aviation. From January 2011, the TSA tested new millimeter wave software for 6 months at three airports, including Reagan National, on the promise of fewer violations of privacy with the same detection of weapons. On July 20, 2011, the TSA announced that the new software would be installed on 241 millimeter wave units at 41 airports.
Information on the full distribution, use, and effectiveness of units remains
guarded.
In-Flight Security
On 9/11, hijackers burst into cockpits and overpowered flight crews. After
9/11, national and commercial regulations specified that cockpit doors were to be locked from the inside during flight (previously, pilots on long flights were in the habit of allowing other staff to visit the cockpit with refreshments and of inviting select passengers to view the cockpit). Some cockpit crew were armed and trained in self-defense. Some cabin crew were trained in self-defense, although not armed.
The Federal Air Marshal Program deploys officers with concealed
weapons on commercial aircraft in case of hijackings. At the time of 9/11,
perhaps 33 air marshals were active, according to press reports, primarily
on international flights. The number was expanded rapidly after 9/11 but
remains secret—perhaps in the thousands. The TSA, which has administered the marshals since 2005, admits that only about 1% of the 28,000 daily flights have an air marshal aboard.
Passengers have proven to be the most effective guardians of cabin security.
Before 9/11, officials and the industry advised passengers not to confront
hijackers in case they retaliated, but after 9/11, that conventional wisdom
was reversed. On 9/11, the fourth plane to crash did so after passengers
attempted to take control of the cockpit, having heard reports from
colleagues, friends, and family on the ground that the other three hijacked
planes had been flown into buildings. Passengers overpowered Richard Reid
in December 2001 and Umar Farouk Abdulmutallab in December 2009 after
they tried to detonate their explosives.
Small Arms
Small arms fired from the ground could critically damage an aircraft, but only when the aircraft is at very low altitude; otherwise their bullets are not energetic enough, unless the discharge is from within a pressurized compartment during flight.
Rocket-Propelled Grenades
Rocket-propelled grenades are as available and portable as small arms, but they have similarly short range and are not accurate enough to attack aircraft, although their self-destruct timers could be shortened so that the grenade would explode at a predictable altitude near the target. This altitude is below most flight paths, but in theory could threaten any aircraft as it takes off or lands. Helicopters at low altitude and slow speed are most exposed. Such threats are most
profound in peacekeeping, counterinsurgency, and counterterrorism
operations, which rely heavily on helicopters for logistics, transport,
surveillance, and fire support. In 1993, Somali militia reconfigured their
rocket-propelled grenades to explode at the low altitude used by U.S.
military helicopters in support of ground operations. On one day in October
1993, they brought down two U.S. UH-60 Blackhawk helicopters that were
providing support to U.S. special operations forces on the ground in
Mogadishu. On August 6, 2011, a rocket-propelled grenade struck the aft rotor of a U.S. Chinook (a twin-rotor transport helicopter) while it was transporting special operations forces west of Kabul, Afghanistan, killing all 38 people on board.
Cannons
Projectiles of less than 15 mm caliber are not energetic enough to harm
aircraft at normal flying altitudes. Automatic cannons (15–40 mm) fire
projectiles with sufficient energy and explosive content at an automated rate
to be catastrophically destructive at altitudes up to several thousand feet, just
enough to threaten light aircraft at cruising altitudes. These are specialized military weapons and are not man-portable, but many have fallen into the hands of malicious actors, and some can be transported by ordinary pick-up trucks.
MANPADS
Scope
More than 40 civilian aircraft have been hit by MANPAD missiles from
1970 through 2011, resulting in 28 crashes and more than 800 fatalities. All
occurred in conflict zones. Almost all targets were cargo aircraft, delivering
freight to peacekeepers, counter-insurgent forces, or unstable authorities. The
count is an underestimate, because some smaller aircraft disappear without a
full explanation and some authorities may be unwilling to admit such a loss
during a counter-terrorist or counter-insurgency campaign.
In November 2002, al-Qaida’s agents attempted to use two Soviet-produced SA-7 MANPADS to shoot down an Israeli commercial passenger aircraft departing Kenya. The attack probably failed because the range was too short for the missiles’ sensors to lock on to the target, although rumors persist that the aircraft was equipped with an unadmitted antimissile system.
Since then, al-Qaida seems to have considered antiaircraft weapons against very important persons. According to documents captured during the U.S. operation to kill him in May 2011, Osama bin Laden ordered his network in Afghanistan to attack aircraft carrying U.S. President Barack Obama or General David H. Petraeus (then commander of the International Security Assistance Force) into Afghanistan.
Countering MANPADS
Maritime Security
This section reviews maritime security. The subsections below describe its
scope, port security, cargo security, maritime terrorism, and maritime piracy.
Scope
Maritime risks include potential theft of cargo, damage to cargo, sabotage of
vessels, sabotage of ports and related infrastructure, smuggling and
trafficking, accidental release of hazardous materials, accidental collisions,
illegal immigration, maritime terrorism, and maritime piracy.
Any of these risks has direct commercial and economic implications; potential returns include a temporary shutdown of global logistics and thence of national economies. Some of these risks have implications for society and politics at the national level, including slow-onset risks, such as potential harm to individuals and societies from illegal drugs. Others are rapid-onset risks, such as potential terrorist attacks via shipped weapons or personnel.
The Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation (SUA), agreed in Rome on March 10, 1988, defines unlawful acts against ships. The companion Protocol for the Suppression of Unlawful Acts against the Safety of Fixed Platforms Located on the Continental Shelf, agreed the same day, specifies corresponding offenses against fixed platforms.
Port Security
Ports as small as village harbors handle commercial trade of one sort or
another, at least fish or tourists, but most concern is expressed over busy
commercial ports with capacity to handle standard shipping containers and to
service container ships and large passenger ships.
The busiest ports are generally in the northern hemisphere: in East Asia, Southeast Asia, the eastern coast of South Asia, the southern Middle East, Egypt, Greece, Italy, Spain, Germany, the eastern and western coasts of the United States, and Central America.
Interruptions to these major ports could have national and international
implications. A 10-day labor lockout in 2002 at the Port of Los
Angeles/Long Beach cost the U.S. economy $1 billion per day (Blumenthal,
2005, p. 12). Some observers claim that a single terrorist attack on a prime
container port could trigger a global recession (Flynn & Kirkpatrick, 2004).
Cargo Security
Scope
Ships carry 99.5% of transoceanic trade. States with long coastal borders
tend to depend most on oceanic trade. For instance, Britain, Japan, and South
Korea each import or export by ship more than 90% of their trade by value
or 95% by weight.
More than 90% of global cargo moves in shipping containers. At any time,
12 million to 15 million containers are in use. In 2011, the equivalent of
more than 300 million containers were handled around the world.
Approximately 10.7 million containers arrived in U.S. ports that year (U.S.
GAO, 2012, p. 1).
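These figures can be reconciled with a rough calculation (an illustrative assumption, not from the source: “handled” counts each port move, so one container is counted at every loading, transshipment, and discharge):
\[
\frac{300 \text{ million handlings/year}}{12\text{–}15 \text{ million containers in use}} \approx 20\text{–}25 \text{ handlings per container per year}
\]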
Cargoes are of concern because they can be stolen, hijacked, held to ransom, or used for smuggling or trafficking, and they could carry hazardous materials. In 2003, the Organization for Economic Cooperation and Development estimated worldwide cargo theft at $30 billion to $50 billion per year. Vessels and hazardous cargoes could be used as weapons or weapon delivery systems, harming directly at the local level and generating cascading economic consequences.
Safe Traders
The U.S. Customs-Trade Partnership Against Terrorism (C-TPAT) came into
existence in November 2001; it was formalized into law with the SAFE Port
Act in October 2006. It is administered by the CBP agency within the DHS. It
invites businesses (such as importers, carriers, brokers, port authorities) to
voluntarily ensure compliance, internally and within their supply chains, with
U.S. security standards, in return for which they can expect fewer
inspections. The program has no regulatory authority, so members’ maintenance of standards is self-regulated.
In 2005, the World Customs Organization (WCO) established the Framework of Standards to Secure and Facilitate Global Trade (SAFE Framework), effectively extending C-TPAT beyond the sphere of U.S.-bound trade. In 2006, the terms and conditions of Authorized Economic Operator (AEO) status were codified. For firms involved in international trade, AEO status is the counterpart to C-TPAT certification; C-TPAT certification automatically affords the holder AEO status. As of 2011, more than 10,000 companies had enrolled in C-TPAT.
Manifest Rules
In October 2003, the United States first implemented the 24-Hour Advance Manifest Rule, which requires all sea carriers (except bulk carriers and approved break bulk cargo) to submit specified information about the cargo to the U.S. CBP through the Sea Automated Manifest System 24 hours before loading cargo intended for a U.S. port. Risk assessments are produced by a rule-based system known as the Automated Targeting System at the National Targeting Center in Virginia. The U.S. CBP can order the carrier not to load the cargo if it is assessed as too risky, pending a local inspection.
Foreign ships must send information about the cargo, passengers, crew, and voyage to the U.S. Coast Guard 96 hours before arrival in a U.S. port. If the Automated Targeting System rates the vessel as high risk, the Coast Guard can be ordered to board the vessel before it enters port. The Coast Guard can also inspect vessels randomly.
The SAFE Port Act of 2006 requires that importers electronically file 10 additional data elements (manufacturer; seller; consolidator; buyer name and address; ship-to name and address; container stuffing location; importer record number; consignee record number; country of origin of goods; and the Commodity Harmonized Tariff Schedule number) with the CBP no less than 24 hours prior to the lading of containers at a foreign port of exit. This requirement is known as the “10 Plus 2 Program” or just the “24-hour rule.”
In 2010, the EU announced its own 24-Hour Advance Manifest Rule, effective from 2011. It demands 24 hours’ notice before cargo is loaded aboard any vessel that will enter the EU across deep seas, 2 hours before short sea shipments arrive in an EU port, or 4 hours before break bulk cargo arrives in an EU port.
Inspections at Port
Cargo security authorities can inspect containers afloat or ashore randomly
but mostly respond to intelligence, tip-offs, and automated inspections. Most
inspections are conducted at the exit from the port. For instance, the U.S.
CBP deploys large “nonintrusive” x-ray inspection systems and nuclear
“radiation portal monitors,” through which a truck can drive. For “intrusive” inspections, fiber-optic cameras can be pushed through apertures into the container without opening its door. Most intrusively, a container can be unloaded and opened for a full inspection with handheld detection systems, explosives detection dogs, and manual searches.
Inspections at port are heavily biased toward nuclear radiation. U.S.
capacity to detect nuclear and radiological material at the ports of entry has
improved dramatically in recent years. The GAO (2012) agreed with the
DHS that “the likelihood of terrorists smuggling a WMD into the United
States in cargo containers is low, [but] the nation’s vulnerability to this
activity and the consequences of such an attack—such as billions of losses in
US revenue and halts in manufacturing production—are potentially high” (p.
1). In 2006, the DHS and the Department of Energy launched the Secure Freight Initiative (SFI) to counter nuclear and radiological terrorism through shipped containers. All U.S. ports and 75 foreign ports (as of 2013) have U.S.-supplied radiation portal monitors in order to screen containers destined for U.S. shores. Honduras and Pakistan joined the SFI immediately, followed by Britain, Hong Kong, and Singapore.
Maritime Terrorism
The subsections below summarize maritime terrorism attacks and terrorist
flows by sea.
Maritime Terrorism Attacks
Maritime terrorism is rare but is potentially catastrophic to the international
economy and has encouraged wide-ranging and expensive international legal
and material responses. A useful estimate of maritime terrorist risks to the
United States is summarized in Table 15.2.
Most terrorist smuggling in these areas does not directly lead to a maritime terrorist attack but could move weapons, persons, or money that enable other terrorist attacks.
Maritime Piracy
Maritime piracy is much riskier than maritime terrorism, because maritime
piracy is much more frequent and imposes routine costs. The subsections
below describe the scope of maritime piracy, pirate operations in practice,
the frequency of maritime piracy over time, the geographical distribution of
piracy, the costs of piracy, and counter-piracy.
Scope
Maritime piracy is any attempt to board a vessel in order to steal or extort
for profit. The UN effectively separates maritime theft from extortion:
The term “piracy” encompasses two distinct sorts of offences: the first
is robbery or hijacking, where the target of the attack is to steal a
maritime vessel or its cargo; the second is kidnapping, where the vessel
and crew are threatened until a ransom is paid. (UN Office of Drugs and
Crime, 2010, p. 193)
Pirate Operations
Pirates generally operate in full view of foreign shipping, including navies, while searching for targets. Their small boats are difficult to identify on the open ocean and difficult to distinguish from fishermen, and the pirates are not actually committing any crime until they attack.
Pirates generally use small fast boats (skiffs; each carrying half a dozen
men) to attack ships within 50 nautical miles of shore; they generally need a
larger boat for operating further off shore. A mother ship normally tows a
fast boat that would carry the attackers. Despite the term mother ship, it is not
a large vessel, just a small boat carrying a dozen or two dozen men, often
powered by small engines and sails, indistinguishable from fishing boats
until the attack starts. Sometimes mother ships tow more than one fast boat or cooperate with each other. Multiple fast boats will confuse the target’s crew, whose visibility and room for evasion are limited. The attackers are armed
with ubiquitous Soviet-manufactured automatic firearms and rocket-
propelled grenades. An attack is normally completed within 30 minutes.
Pirates are often controlled from shore. Since the 1990s, pirates have used
satellite telephones to communicate from ship to shore. They also use
commercially available navigation equipment that can direct them anywhere
given a target coordinate and information about current position from the
Global Positioning System. The coincidence of pirates with valuable vessels in more remote seas suggests that some are tipped off by employees with access to the vessel’s coordinates (UN Office of Drugs and Crime [UNODC], 2010, p. 198).
Most successful piracy ends in theft and resale of the cargo, the crew’s
possessions, the vessel’s portable items, or sometimes the vessel itself (in
the case of small craft). West African pirates target fuel cargo for illegal sale
through the developed infrastructure of Nigeria.
Off the Horn of Africa, which lacks Nigeria’s infrastructure, most pirates
aim at holding a large ship, cargo, and crew for ransom. Pirates target vessels that can be held to ransom for large amounts of money. These tend to be
large container and tanker ships; small yachts suggest wealthy owners who
could be held for ransom, although they are more likely to outpace pirate
mother ships, given sufficient warning. Shippers can negotiate with the
hijackers via the hijacked vessel’s communications systems or the pirates’
satellite telephones. Ransoms are normally paid in cash and delivered
physically to the hijackers by an intermediary. Sometimes the ransom is
dropped by parachute from an aircraft. Rarely, the pirates will accept
payment to a trusted third party. The pirates normally honor agreements to
release their prizes for ransom, but they usually release a vessel, cargo, and
crew only after stripping them of anything valuable and portable. The vessel and cargo may also be damaged during the attack, the subsequent thievery, or long periods under hijack without proper maintenance.
Costs
From 2003 to 2008, the global economic cost of piracy was $1 billion to $16
billion per year. From 2009 to 2012, it was somewhere between $7 billion
and $12 billion per year. Most of these costs relate to controlling the risks
rather than the direct costs of pirate attacks.
In controlling the risks, shippers pay additional costs, such as insurance,
armed guards, antiboarding materials, defensive weapons, extra fuel costs
due to higher speeds and evasive routes in riskier waters, and the extra
operational costs of longer journeys to avoid riskier waters.
The cost of ransoms is difficult to calculate because shippers and pirates
tend to underreport. In 2010, the reported average and median ransoms were
around $4 million to $5 million, with a high of $9.5 million. By 2012, the
average and median were about the same, but the high had reached $12
million. Ransoms totaled around $135 million in each of 2011 and 2012.
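As a rough consistency check (an illustrative calculation, not from the source), the totals and averages imply the approximate number of ransoms paid:
\[
\frac{\$135 \text{ million}}{\$4.5 \text{ million per ransom}} \approx 30 \text{ ransoms per year}
\]
which is broadly consistent with the 45 vessels hijacked with hostages in 2011 and 28 in 2012 (reported below), since not every hijacking ends in a paid ransom.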
Delivering the ransom and recovering the vessel is costly in itself. In some
cases shippers have procured a light aircraft to drop the ransom on the ship
or near the pirates’ base. The ship must be recovered, cleaned, and repaired.
The cargo may be spoiled. In the worst case, all are lost. For instance, on July 8,
2013, the Malaysian-flagged and -owned M/V ALBEDO, which in
November 2010 had been pirated with 15 crewmen aboard, sank at anchor
off the coast of Haradhere, Somalia.
The human costs are confined to the few crew who are unfortunately killed
or detained. Fortunately, these rates have declined in recent years but are still
terrible for those directly affected. In 2011, 15 crew were killed, 714 crew
were taken hostage afloat on 45 vessels, and 15 were kidnapped (taken
ashore). In 2012, four were killed, 484 taken hostage on 28 vessels, and 25
kidnapped. The first 6 months of 2013 give an annualized rate of two killed,
160 taken hostage on 12 vessels, and 56 kidnapped. If the crew are lucky
enough to survive their abduction, they likely will need rest, treatment, and
compensation, although the flexibility of crew contracts often leaves the
shipper with few obligations (data source: International Maritime Bureau).
Frequency
Piracy is an underreported problem because of shippers’ desire to hide vulnerability and costs from stakeholders. Worldwide frequency of piracy
increased in the early 1990s, mostly due to increased sources from China and
Indonesia. In the late 1990s, austerity, competition, and automation
encouraged ship owners to reduce crews and defenses. Piracy peaked in
2000 and remained high in that decade (around 350 to 450 events per year,
according to the International Maritime Organization). Official reactions to
9/11 encouraged states to divert resources away from counter-piracy to
counter-terrorism. Many states declined into further instability during that
decade. Pirates found their environments more permissive and also found
weapons and equipment more accessible (Chalk, 2009). Global incidents peaked at 445 in 2010, a level last seen in 2003, although still short of the record of 469 in 2000. The frequency then declined as official and commercial
focus returned. The International Maritime Bureau reported 439 vessels
attacked and 45 hijacked in 2011, 297 attacked and 28 hijacked in 2012, and
an annualized rate of 270 attacked and 12 hijacked given the first half of
2013.
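The annualized rates are simple doublings of the half-year counts (an illustrative reconstruction; doubling assumes a constant rate and ignores seasonality, such as monsoon lulls off Somalia):
\[
135 \text{ attacks} \times 2 = 270, \qquad 6 \text{ hijackings} \times 2 = 12
\]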
In the 1990s, sources surged most rapidly from Indonesia—sources surged
in Bangladesh, India, and Malaysia too. Piracy dropped by half in 2005,
thanks to international responses in South East Asia and South Asia, but from
2008 to 2009, it surged again, mostly due to increasing piracy from Somalia,
a persistent failed state. Sources surged in Nigeria too (data source:
International Maritime Bureau).
Geographical Distribution
Shipping is not evenly distributed across the world’s oceans. Most ships follow predictable routes due to the predictability of markets, ports, and coastlines. Thus, shipping routes offer at least 11 bottlenecks where traffic concentrates, and piracy has concentrated in a few areas:
1. Measured over the last couple of decades, the most concentrated piracy
has occurred in the Gulf of Aden and off the Horn of Africa. About
10% of shipping passes through these seas, although some shippers
have chosen the less risky but slower route around the Cape of Good
Hope between the Indian and Atlantic Oceans. According to the
International Maritime Bureau, the number of pirate attacks off the
Horn of Africa doubled from 111 in 2008 to 217 in 2009 and peaked in
2010, but fell to 237 in 2011 and to 75 in 2012, thanks to sustained
counter-piracy. Nevertheless, Somali pirates still held eight ships and
127 hostages at the end of 2012. Somalia is the main national source of
the pirates in these seas.
2. About 10% of shipping passes the Gulf of Guinea off West Africa.
Piracy there is less frequent than off the Horn of Africa, but more
successful now, thanks to international attention on Somalia. In 2012,
West African pirates attacked ships 62 times and threatened 966
sailors, compared to 851 in Somali waters. West African pirates
hijacked 10 ships and took 207 new hostages, of whom five were killed, according to the International Maritime Bureau. Nigeria is the
largest and most populous state in West Africa and the main national
source of pirates in the region.
3. In the early 2000s, most piracy risks were concentrated in the seas
between Vietnam, Indonesia, and Malaysia—particularly the Malacca
Straits between Malaysia and Indonesia, where about 50% of
shipping passes. In 2012, 81 pirate attacks were reported in South-
East Asia, 31 in the rest of the Far East.
4. The Caribbean Sea and the waters off Panama form a hazardous and poorly policed area rife with short-range piracy.
5. The Bay of Bengal is another poorly policed area, opening up on the
Indian Ocean. Bangladesh is the main national source here.
Pirates from one of the world’s poorest countries (Somalia) are holding
to ransom ships from some of the richest, despite patrols by the world’s
most powerful navies. Almost all piracy in the Gulf of Aden and off the
Horn of Africa originates in Somalia, a failing state since the 1980s.
Central government finally collapsed in 1991, leaving an independent
state of Somaliland to the north, an effectively autonomous province of
Puntland, and a failed state of Somalia in the south-west, where Somalia’s
capital and largest port (Mogadishu) is located. In 2012, international
forces secured Mogadishu and drove south, but most of the nominal
territory of Somalia remains insecure.
Somali pirates numbered about 1,400 as of 2009. They are
concentrated in Puntland and south-central Mudug. The lowest ranked
pirates are easily replaced with recruits from the majority destitute
population, which includes 40,000 internally displaced Somalis. A pirate
earns somewhere from $6,000 to $10,000 for each $1 million in ransom
paid. About 30% of ransoms go to the pirates, 10% to local militia, 10%
to local elders and officials, and 20% to the financier. The period of
hijack ranged from 6 days to 6 months with an average around 2 months;
ransoms totaled somewhere between $50 million and $100 million. From
2008 to 2009, insurance premiums for vessels in these waters jumped
from $20,000 to $150,000.
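The per-pirate earnings follow from the stated shares under an assumption about crew size (an illustrative reconciliation, not from the source): if the pirates’ 30% share of a $1 million ransom is divided among roughly 30 to 50 attackers and guards, each receives the stated $6,000 to $10,000:
\[
0.30 \times \$1{,}000{,}000 = \$300{,}000; \qquad \frac{\$300{,}000}{50} = \$6{,}000, \quad \frac{\$300{,}000}{30} = \$10{,}000
\]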
In 2004, the International Maritime Bureau warned all vessels to travel
further than 50 nautical miles off the Somali coast. In 2005, it raised the
specification to 100 nautical miles. By 2006, some Somali pirates were
operating as far as 350 miles from Somalia, into the Red Sea and the
Indian Ocean. From 2007 to 2008, piracy shifted from mainly southern
Somali waters to the Gulf of Aden. By 2009, some Somali pirates were
attacking ships more than 1,000 nautical miles from Somalia.
Some shippers accepted the risks of operating closer to shore in return
for faster journeys, while the World Food Program continued to ship
30,000–40,000 metric tons of food aid every month into the Horn of
Africa (by late 2008, 43% of Somalis were dependent on food aid, of
which 95% arrived by sea). In late 2008, the World Food Program
required naval escorts from European Union or Canadian forces, while
some Somali pirates consented to honor humanitarian aid (such cargoes would garner lower ransoms anyway) (UNODC, 2010).
Since December 2008, the European Union Naval Force Somalia (EUNAVFOR) and the Combined Maritime Forces’ Combined Task Force 151 have been the main international antipiracy forces off Somalia, but they patrol more than one million square miles of ocean.
Somali pirates now operate in a total sea space of approximately 2.5
million square nautical miles (about the size of the continental United
States). Nevertheless, relative to prior enforcement, naval impact was
great: up to August 2009, they seized or destroyed 40 pirate vessels and
rendered 235 suspected pirates for prosecution. The Indian Navy extended its patrols
from the Indian Ocean into the Gulf of Aden. Since January 2009, the
Chinese Navy has maintained at least two frigates in the Gulf of Aden. By
2012, up to 30 vessels from 22 navies were patrolling the Gulf of Aden.
These naval forces operate with fewer restrictions on the use of force
than their predecessors. In November 2008, an Indian warship in the Gulf
of Aden destroyed a pirate mother ship. In April 2009, French marines
rescued four French hostages from their yacht and detained three pirates,
although one hostage was killed. On April 12, 2009, U.S. Navy personnel
shot to death three pirates and freed the captain of the Maersk Alabama
from a lifeboat under tow. In December 2009, an Indian navy helicopter
with marines helped to deter hijackers from boarding a Norwegian ship.
In January 2012, U.S. forces freed a Danish hostage.
Meanwhile, states have agreed international responses beyond naval
force. In December 2008, the UN Security Council (Resolution 1851)
established the international Contact Group on Piracy off the Coast of
Somalia, chaired by the United States in 2013 and the EU in 2014.
Also in December 2008, the U.S. National Security Council issued the
Partnership and Action Plan for Countering Piracy off the Horn of Africa
and established the Counter-Piracy Steering Group, co-led by the
Departments of State and Defense, with representatives from the
Departments of Justice, Homeland Security, and Treasury and the U.S.
Maritime Administration and the U.S. Agency for International
Development. The U.S. Partnership and Action Plan has included a
Maritime Security Sector Reform framework.
The Contact Group now boasts 62 states and 31 international
organizations and maritime trade associations as participants. Working
Group 1 (chaired by Britain) promotes international military coordination
and the development of regional capacity for maritime security. Working
Group 2 (Denmark) works on legal issues. Working Group 3 (Korea)
helps to develop commercial shipping’s awareness and protections.
Working Group 4 (Egypt) works on public awareness and support for
counter-piracy. Working Group 5 (Italy) coordinates the countering of
pirate financing. The Contact Group claims a “marked reduction” in
piracy: successfully pirated ships off Somalia fell from 47 in 2009 and
2010 to 25 in 2011, and five in 2012.
Local stabilization and capacity building are solutions to the root
causes or enabling conditions of piracy in Somalia. On April 23, 2009, a
European donor conference raised $160 million for security sector reform
in Somalia. In February 2012, Britain hosted an international conference
on the future of Somalia, which reiterated that force alone could not solve
the problem and advocated more support of local communities. In August
2012, central authorities in Mogadishu adopted a new provisional
constitution, legislature, and president, and, with international military
support, started to expand their control outside the capital.
Counter-Piracy
Counter-measures include stricter laws and law enforcement, naval
enforcement, defensive options for ships, and activities to counter the wider
pirate networks, as explained in the subsections below.
Legal Responses
International norms and laws have proscribed maritime piracy for a long
time, including the Paris Declaration of 1856, the Geneva Convention of
1958, the UN Convention on the Law of the Sea of 1982, and the Convention
for the Suppression of Unlawful Acts Against the Safety of Maritime
Navigation (SUA) of 1988, which allows any state to detain, extradite, and
prosecute maritime terrorists and pirates; 150 countries are party to SUA.
While the norms and laws are strong on paper, they are difficult to enforce
in practice. International law constrains an outside state’s right to intervene.
Mackubin Owens (2009) has argued that “a sovereign state has the right to
strike the territory of another if that state is not able to curtail the activities of
latrunculi” (pirates and other outlaws), but no state has exercised this right
in decades.
Pirates are based in unstable or weakly governed areas where the judicial
system tends to be weak. For instance, international authorities still regard
Somalia as unable to try pirates properly. Outside states can prosecute
pirates, but democracies, which have been most engaged in countering piracy
at source, have honored human rights legislation or the detainee’s claims of
asylum to the extent that most pirates were released after arrest without any
indictment. For instance, for many years, European naval forces were advised by home governments that rendition to states around the Indian Ocean, each of which had an unreliable human rights record, would violate European human rights law.
Even if the judiciary is strong, practical difficulties remain. Pirates are
difficult to prosecute unless caught in the act. Witnesses are difficult to bring
to court: Sailors are keen to return to work, particularly because most are not
compensated for their time as hostages or in court; most sailors are from
developing countries and spend most of their time at sea without a mailing
address. Given the diverse nationalities of sailors and pirates, the court often
requires as many translators as witnesses.
In the late 2000s, outside states started to act more responsibly. In late
2008, France started to send pirates for prosecution in France. In March
2009, Kenya agreed to try pirates in return for help from the European Union
with judicial reform. In September 2010, international naval forces handed
over to Kenya nine Somali citizens who had hijacked a vessel, MV Magellan
Star, in the Gulf of Aden. In June 2013, a Kenyan court sentenced each of them to 5 years in prison.
In April 2009, U.S. naval forces shot to death all but one of the pirates still holding the captain of the U.S.-flagged Maersk Alabama; U.S. authorities rendered the surviving pirate to New York, where in February 2011 he was sentenced to 33 years in prison. By March 2012,
the United States had prosecuted 28 Somali pirates. The United States, like
many states, maintains that the flag state should prosecute the pirate but has
arranged for Somali pirates to serve out their sentences in Somalia, thanks to
international military support of a new provisional constitution and
government in Somalia in 2012.
During the February 2012 London Conference on Piracy and Somalia,
Britain pledged more than $1 million and founding staff from the British
Serious Organized Crime Agency to establish a Regional Anti-Piracy
Prosecutions and Intelligence Co-ordination Centre (RAPPICC) in the
Seychelles. The temporary offices opened on June 1, 2012, and the permanent building in January 2013; the center includes a fusion center for coordinating international judicial information and enforcement and a 20-person detention
facility for conducting interviews. By 2013, more than 1,000 pirates were in
detention in 20 countries, of whom most had been or would be convicted.
Naval Enforcement
Ships are supposed to travel through national waters patrolled by national
coast guards or international shipping lanes patrolled by multinational naval
forces. International naval counter-piracy surged in 2009 off Somalia in
response to new international forums and national commitments, but even
there, the area of operation is vast and the forces are small. The Gulf of Aden
is a well-patrolled channel, although highly exposed to pirates from the Indian Ocean and the Horn of Africa. The narrow channel through the Straits of Hormuz
is easiest to patrol, although the threats there are more military and terrorist
than pirate. The Straits of Malacca have a well-patrolled channel and are
tightly contained by Malaysian and Indonesian land, but exposed at either end
to traffic from the Indian Ocean and South China Sea. Other oceans with
pirates are barely patrolled at all.
Ship Defenses
Requirements
The reality is that international naval forces simply might not be there to
respond. The problem of piracy is one that can’t simply be solved by
national governments. Therefore, we have also supported industry’s use
of additional measures to ensure their security—such as the employment
of armed security teams. To date, not a single ship with Privately
Contracted Armed Security Personnel aboard has been pirated. Not a
single one . . . At the State Department, we have encouraged countries to
permit commercial vessels to carry armed teams. However, we do note
that this is a new area, in which some practices, procedures, and
regulations are still being developed. We are working through the
Contact Group and the International Maritime Organization or IMO on
these issues. For instance, we have advised that armed security teams
be placed under the full command of the captain of the ship. The captain
then is in control of the situation and is the one to authorize the use of
any force. Last September [2011], we were encouraged to see language
adopted by the IMO that revised the guidance to both flag States and
ship operators and owners to establish the ship’s master as being in
command of these teams. (Andrew J. Shapiro, Assistant Secretary, Bureau of Political-Military Affairs, prepared remarks to the
U.S. Chamber of Commerce, Washington, DC, March 13, 2012)
SUMMARY
This chapter has
• explained how to improve ground transport security via
the infrastructure,
navigation,
communications,
vehicle survivability, including resistance to kinetic projectiles,
resistance to blast, access controls, stealth, and personal aid
equipment,
mobility, and
escorts and guards,
• described the scope of civilian air transport security,
• noted the risks of aviation accidents,
• reviewed sovereign threats to civilian aviation,
• described aviation terrorism,
• explained how to improve aviation security via
cargo screening,
passenger luggage screening,
human access controls,
metallic screening,
footwear screening,
liquids screening,
clothing screening,
body screening,
intelligence,
in-flight security, and
countering antiaircraft weapons,
• described the scope of maritime security,
• described port security,
• described cargo security, including
safe traders,
manifest rules,
container security,
smart boxes, and
inspections at ports,
• reviewed maritime terrorist attacks,
• reviewed maritime terrorist flows,
• defined maritime piracy,
• explained how piracy works in practice,
• assessed the costs of piracy,
• reviewed the frequency of piracy,
• explained the geographical distribution of piracy, and
• explained how to counter piracy, principally by
legal responses,
naval enforcement,
ship defenses, and
countering the wider networks.
QUESTIONS AND EXERCISES
16
Personal Security
Crime
Personal security is often defined as security from crimes or criminal-like
behaviors, particularly violence. For instance, the U.K. Ministry of Defence (MOD) (2009) defines personal security as “that part of human security
which ensures protection of an individual from persecution, intimidation,
reprisals and other forms of systematic violence” (p. 6).
The sections below describe criminal threats, how to assess a particular threat, how to manage such threats, how to train the person to avoid them, how to train the person to deter and defend against them, and close protection for the person.
Criminal Threats
Humans are hazards to each other because they could harm each other, even
accidentally. Human hazards and threats are described in more detail in
Chapter 4. This section is concerned with criminal threats to a particular
person. A minority of humans deserve special attention as criminal threats—
humans who have criminal intent and capability. The Humanitarian Practice
Network (2010) identifies several human threats to the person, as described
in the subsections below: crowds; thieves; detainers, kidnappers, and
hostage takers; and sexual exploiters and aggressors.
Crowds
Crowds, mobs, and looters may start out with no intent against a particular
person, but groups tend to become more reckless with size and could turn on
a passer-by, someone who disturbs their malicious activities, or someone
who blocks their path. At the personal level, such threats are controlled by
avoiding their area or preventing their access to your area.
Intended: The intended criminal had the conscious intent before the
crime. A suggested path to intended violence starts with the same
grievances and imaginings (ideation) of acting vengefully as would
be experienced by the howler, but the intended criminal researches
the target, plans the attack, prepares the attack, and executes the
attack.
Impromptu: The impromptu criminal had some grievance, some
intent to harm and some ideas about how to harm but had not
prepared the crime before encountering the opportunity. An
impromptu violent criminal is a criminal who planned some
nonviolent harm, such as covert theft, but encountered the
opportunity to thieve more through violence or reacted violently to
the defender.
• Situational inhibitors: The situational inhibitors are the situational
factors that inhibit the crime.
For instance, someone who is happily employed, married, and
parenting seems to have more to lose than someone who is recently
separated from their job, spouse, or children.
• Psychology: An individual’s psychology can inhibit or enable the crime.
For instance, former employees are more intimate with the site of
employment or the employees than would be an outsider, so they are
more likely to know how to break into a site or when to attack
employees at their most vulnerable moment.
Similarly, a recently separated spouse or parent is much more
knowledgeable about how to harm their spouse or children than
would be the average person. (Calhoun & Weston, 2012, Chapter 1)
Managing Criminal Hazards and Threats
A simple process for managing human hazards and threats would follow at least four steps. Having assessed the hazard or threat, security managers face several choices of response.
Close Protection
Some officials receive personal protection (close protection) from guards.
For instance, the U.S. Secret Service protects U.S. executive personnel and
visiting executives. The U.S. Diplomatic Security Service protects the Secretary of
State, foreign dignitaries visiting the United States, senior U.S. diplomats
overseas, and U.S. athletes in major competitions.
Sometimes private individuals are granted official protection against
certain threats. They can hire close protection from commercial providers or
can employ guards directly. Private security proliferated in the 2000s, but
some providers raised new risks, such as underperformance by guards or
service providers who were not as competent or loyal as promised. Close
protection also can stimulate local jealousies and opposition, particularly to
pushy, trigger-happy, or culturally insensitive guards. Close protection has become increasingly expensive, due to high demand, high turnover, high
casualties, and increasing legal liabilities raised by local victims or former
guards who had suffered injuries or stress during service. Official authorities
funded most of the growth in private security contractors in Iraq and
Afghanistan in the 2000s, but they have since developed preferences for
more internally controlled security—primarily military and police personnel.
Close protection involves mostly guarding the target person, escorting the
person, and guarding the person’s residences, offices, and means of
transportation, and sometimes the person’s wider family or social network.
The activities of close protection extend beyond guarding the person in the
moment; they include research on the hazards and threats, surveying sites
(such as residences and offices), acquiring sites and vehicles, reconnoitering
routes, and liaising with local authorities, although the latter may be
unwilling or untrustworthy. Close protection tends to be most complicated
when the person must travel or meet with the public.
The person faces trade-offs between close protection and stealth
(prominent guards may deter but also attract attention), between close
protection and external access (guards keep threats away but also discourage
potential contributors), and between close protection and operational
performance (guards may interrupt the person’s work). For instance, the U.S.
Ambassador to Libya (Chris Stevens) in 2012 had developed a reputation for
proactive relations with local stakeholders, but the Accountability Review
Board (December 18, 2012) retrospectively criticized his protections around
the time of his death on September 11, 2012.
The Board found that Ambassador Stevens made the decision to travel
to [the U.S. Mission in] Benghazi [from the U.S. Embassy in Tripoli]
independently of Washington, per standard practice. Timing for his trip
was driven in part by commitments in Tripoli, as well as a staffing gap
between principal officers in Benghazi. Plans for the Ambassador’s trip
provided for minimal close protection security support and were not
shared thoroughly with the Embassy’s country team, who were not fully
aware of planned movements off compound. The Ambassador did not
see a direct threat of an attack of this nature and scale on the US
Mission in the overall negative trendline of security incidents from
spring to summer 2012. His status as the leading US government
advocate on Libya policy, and his expertise on Benghazi in particular,
caused Washington to give unusual deference to his judgments.
Injuries and Accidents
This section covers personal injuries and accidents. The subsections below
review the scope, violent injuries, work-related (occupational) accidents,
fire and smoke hazards, and safety from animals.
Scope
Personal accidents and injuries are caused by trips, falls, collisions,
acceleration injuries, sharp cuts, crushes, burns, electrocutions, drownings,
and poisonings. The causes of injuries include external malicious actors, self-harm, and accidents. Accidents can be caused by
somebody else’s carelessness, the victim’s own carelessness, faulty systems,
or the victim’s unfortunate coincidence with some hazard, like an unstable
pavement.
An average accident or injury, such as a sprained ankle, may be of little long-term consequence, but such events are very frequent (much more likely than crimes), and they can increase other risks, including crime and disease. The
cumulative interaction between ill-health, violence, and accident is
illustrated by this notional cycle: An unstable region is a less controlled
environment; as controls decline, crimes and reckless behavior proliferate; if
a person suffers a crime, they are more likely to suffer stress; as a person
suffers stress, they are more likely to have an accident; while injured, the
person is more likely to become ill, suffer another accident, or become a
victim of crime. As a result of any of these events, the organization suffers
the costs of caring for the person or of lost work. The person’s carers may be
exposed to disease, stress-related violence, or other trauma, which increase
the chance that the carers suffer stress, an accident, or crime. The family and
friends of each of these victims also are affected. The cumulative effects of
all these events can disable a mission or cause such concern among friends
and family or at higher levels that the mission is terminated.
In any employment, even in war zones, people are more likely to be
harmed by accident than by malicious intent. Accident rates rise in unstable
areas as public regulations and enforcement decline, judicial systems
collapse, governments are corrupted, people behave more recklessly,
become more stressed (stress is associated with cognitive inattention and
physiological weakness), and operate more dangerous equipment (including
weapons). The British military has admitted that more than half of its
casualties during operations in Afghanistan from 2001 to 2009 were caused
by human error.
For 2008 (the last year for which it estimated deaths globally), the World Health Organization (WHO) estimated that less than 9% (5 million) of global deaths (57 million) were caused by external causes of injury, such as road traffic accidents, crime, or combat. Normal
industrial and urban pollutants kill far more people through
noncommunicable diseases such as lung cancer than through accidents.
Some injury rates seem to be decreasing but injury severity is increasing.
The Global Burden of Disease Study (2012) found that the DALYs
(disability-adjusted life years) lost to injuries as a proportion of all DALYs
increased from 10% in 1990 to 11% in 2010, even though the overall
numbers of DALYs remained practically the same. The explanations include
the increased frequency and destructiveness of human-made systems that can
harm, such as weapons and road vehicles.
Violent Injuries
According to the WHO (2009, 2012a), intentionally violent injuries caused
1.6 million deaths in 2004: 51% of these were by suicide, 37% by
interpersonal violence, and 11% in wars and other mass conflicts.
The chance of violent injury or fatality is much higher in war than in
civilian life (the chance of psychological harm is even higher). Of U.S.
military personnel deployed to Afghanistan or Iraq from 2001 to 2011, 2%
were wounded and 0.25% killed (data source: DOD). These might seem like
low rates, but consider that 0.01% of Americans were killed by guns in
America in 2010, mostly suicides (returning soldiers are much more likely than civilians to commit suicide). The rate of Americans killed by others in firearm
crimes within America was 0.003% in 2010 (data source: U.S. Centers for
Disease Control).
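Comparing these rates requires a common time basis (a rough, illustrative calculation, not from the source): the 2% wounded and 0.25% killed accumulated over a decade of deployments, whereas the civilian rates are annual. Spread evenly,
\[
\frac{0.25\%}{10 \text{ years}} = 0.025\% \text{ per year}, \qquad \frac{0.025\%}{0.003\%} \approx 8
\]
so deployed personnel faced a fatality rate roughly eight times the annual rate of Americans killed by others in firearm crimes. This comparison is crude: it ignores differences in exposure, demographics, and the varying lengths of deployments.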
Work-Related Accidents
Of accidental deaths, road traffic accidents account for the most; these are described in Chapter 15 (transport security). Occupational (work-
related) accidents and injuries are more frequent but normally less injurious.
Globally, occupational accidents and injuries kill about 350,000 people
per year and disable or reduce the life expectancy of many more but account
for just 1.7% of global DALYs.
The U.S. Department of Labor received reports for 2011 of nearly 3
million nonfatal workplace injuries and illnesses from private employers
alone, a rate of 3.5 cases per 100 equivalent full-time workers. More than
half of cases affected the employee’s duties or time at work. Almost all of
the cases (95%) were nonfatal injuries, not illnesses.
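The reported rate implies the size of the covered workforce (a rough check, assuming the case count and the rate describe the same population):
\[
\frac{3{,}000{,}000 \text{ cases}}{3.5 \text{ cases per } 100 \text{ workers}} \approx 86 \text{ million full-time-equivalent workers}
\]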
Employees bear the burden of these accidents if they are not granted paid
time off work or compensation. Employers bear the burden when they must replace employees lost to accidents, pay compensation, or pay for insurance against the risks. Often, laws and regulations or the tort system oblige employers to bear these burdens.
Psychological Health
Psychological health tends to be more difficult to assess and to provide for than physiological health. As described above, organizations and individuals
share an interest in not exposing psychologically dysfunctional people to
situations that exacerbate psychological conditions. Ideally, those with
psychological ill-health should not be employed in such situations, but often
regulations prevent an employer from discovering such ill-health until it
manifests during employment. Stress is the main situational cause of ill-
health and is subject to control. The subsections below describe stress and
how to manage it.
Stress
Operational psychological health issues are likely to arise from or be
exacerbated by stress during operations. Stress is useful where it helps the
person to focus on a task or avoid a risk, but severe or routine stress is
debilitating, in two main ways:
• Acute stress disorders are quick to develop, within hours or days, and
are often described as emotional breakdowns.
• Posttraumatic stress disorders (PTSDs) are delayed, sometimes by years,
often because the victim is focused on the immediate mission or on
more direct risks.
Managing Stress
Managing stress is a responsibility shared between the individual,
organization, and society. Generally, managing stress involves pacing the person’s exposure to stress (so that the person can recover between exposures), training awareness of stress, training relaxation and other stress
relief, encouraging opportunities for social support, encouraging social
cohesion, encouraging positive leadership, and intervening with medical help
in extreme cases (Newsome, 2007, Chapter 4). The Humanitarian Practice
Network (2010) admits “the management of stress” as “a dimension of
security management. Like security more generally, managing stress is both
an individual and an organizational responsibility” (p. 129).
In effect, personal security trades controls on the stressors against acceptance of the stress. Some external management of stress can be counter-productive.
For instance, preparing personnel for some stressors may make them more
sensitive to them. Similarly, mental health care providers who attempt to
encourage personnel to externalize their stress often draw attention to stress
and encourage personnel to be symptomatic. Additionally, some related
predeployment activities can introduce new stresses, such as preparing a
will or leaving instructions for family and friends in case of death.
“When you’re out on the front lines representing America today, you
would be foolish to feel too secure. On the other hand, we know that’s
part of the package and we know that our government . . . is making a
sincere effort to provide the security for personnel first and facilities
second.” (President of the American Foreign Service Association, Susan
Johnson, in answer to a question about how secure diplomatic personnel
feel overseas, posed after the death of four US personnel during the attack
on the US diplomatic outpost in Benghazi, Libya, 11 September 2012, in:
Joe Davidson, 24 September 2012, “Foreign Service workers know risks
come with job,” Washington Post).
SUMMARY
QUESTIONS AND EXERCISES
1. What are the differences between criminal threats to the person and
criminal threats to sites?
2. What could motivate a criminal interest in a particular person?
3. What differentiates a threatening person from a hazard?
4. What are your options for managing a criminal hazard within your
organization?
5. What are your options for managing a criminal threat within your
organization?
6. What minimum knowledge and skills should you expect when being
trained for a hazardous environment?
7. What are the extra negative risks associated with the acquisition of
close protection?
8. When would you expect accident rates to rise?
9. Why are accident, illness, and stress rates correlated?
10. Why would you expect increased frequency of fire in an unstable area?
11. Under what circumstances does someone become more exposed to
smoke?
12. What are the personal risks associated with feral animals?
13. What control on the risks from smoke would increase the risks from
feral animals?
14. What should a manager assess about a person’s health before allocation
to a particular task or operation?
15. How can stress be managed during an operation?
References
Index
Capability, 74–75
assessing, 75
countering the acquisition of, 188–189
defining, 74
gap, 86 (box)
Capacity, 19–22
defining, 20
distribution of, 21–22
fungibility of, 20–21
security and, trading, 21
See also Security
Cargo
aviation security for, 325–327
maritime security for, 338–340
Catastrophe, 118–119
Censorship and controls, on freedom of information, 292–294
Chance, 25–26
Charities, 197
Chung, Dongfan, 265
Civil protection, 190
Clarke, Richard, 333
Closed-circuit television (CCTV), 252 (box)
Close protection, 364
Cloud computing, 286–287
Cognitive availability, 158–159
Communicating risk, 208–214
alternative ways, 210–214
requirement, 208–210
Communications, surveillance of, 252–253
Complex risks, 200
Confidence, 101
Consequence, 114 (box)
Consequence management, 191
Container security initiative, 339–340
Containment, 60
Containment areas, 247
Contingencies, range of, 36–37
Contingency, 115
Contingency planning, 190–191
Continuity, 194
Continuous assessment, 62
Contractors, 198
Contributor(s), 63–64
categorizing, 64–77
Control effectiveness map, 213
Controlled tolerable risks, 178–179
Controls, 173–179
controlled tolerable risks, 178–179
defining, 173, 174–176 (table)
establishing tolerable risks, 173, 176–177
incompatible tolerability, 179
intolerability versus practicality, 178
See also Access controls; Strategies
Cost and benefit, 114 (box)
Counter-piracy, 349–353
Counter-surveillance, 250–251
Crime, and personal security, 357–364
assessing hazards and threats, 359–361
avoiding hazards and threats, 362
close protection, 364
criminal threats, 357–359
deterrence and defense, 363
managing hazards and threats, 361
Crime risk assessment, 43 (box)
Crime victimization, 83 (box)
Criminal risks, ACTION process for assessing, 149 (box)
Criminals, profit-oriented, 264–265
Criminal threats, 357–359
crowds, 358
detainers, kidnappers, and hostage takers, 358
estimating, 71 (box)
sexual exploiters and aggressors, 359
thieves and robbers, 358
Criminal violence, and military posttraumatic stress disorder (PTSD),
369 (box)
Crisis, 117
Cultures, 135–136
assessing, 136
developing, 135, 136
See also Processes; Structures
Cyber attacks, sources of, 261–271
external threats, 268–269
insider threats, 265–266
nation-states, 269–270
profit-oriented criminals, 264–265
Cyber crime, 288 (box)
Cyber sabotage, 298–299
Cyber security, 301–302. See also Information security
Cyber space, 260–261
Data
managing, 274
privacy, regulation of, 275–276 (box)
Delphi surveys, 46
Denial of service (DOS) attacks, 298–299
Deter, 188
Disability-adjusted life years (DALYs), 127–128
Disaster, 118–119
Disease
avoidable, 164
causes and sources of, 187 (box)
Disruption, 116
Distributed denial of service (DDOS), 299
Distrust, 161–162
Diversification, 184, 198
Domain and cross-domain expertise, trading, 138–139
Dynamic risk assessment, 40, 43–44 (box)
Take, 195–196
Target, 81–91
defining, 81
exposure, 87–91
identifying, 82–84
vulnerability, 84–87
Target-standard gap analysis, 87
Target-threat gap analysis, 86
Tebbutt, David, 343 (box)
Tebbutt, Judith, 343 (box)
Technological hazards, definitions of, 78 (box)
Technological risks, categories of, 77
Telephone communications, surveillance of, 282–283 (box)
Territorial/geopolitical insecurity, 131
Terrorism, 168–169
compensation for, 129 (box)
U.S. definitions of, 298 (box)
Terrorist attacks, on transportation, 312
at sea, 341 (box), 342–343
aviation, 323–325 (box)
railways and trains, 312–313 (box)
Thin the risk, 198–199
Threat(s), 55–56
activation of, 57–58
analysis, 233 (box)
assessing, 60–62, 263 (box)
categorizing, 64–77
defining, 55–57
emerging, 56
exposure, 88
external, 268–269
identifying, 61–62
insider, 265–268
level of, matrix for, 70 (table)
specific, 53
See also Hazard(s)
Time horizons and periods, 109
Tolerability
defining, 153–154
incompatible, 179
See also Sensitivity
Tolerable risks
controlled, 178–179
establishing, 173, 176–177
Tolerate, 184
Tort system, 197
Total economic losses, 122
Transfer, 196–198
Transport security, 309–355
aviation, 321–334
ground transport, 309–321
maritime security, 334–353
Treat (and sometimes terminate), 184–195
Trojan horse, 274 (box)
Turn, 195