Network Security
Credit Hours: 3
Prerequisite: Basic knowledge of computer networking and security
Instructor
Mudassar Hussain
1
Contents
Networking Overview
Risks
Threats
Exposures
Vulnerabilities
Consequences
What is Network Security?
Designing for security
Designing secure networks
Separation mechanisms
Airgaps
Firewalls
Routers
Bridges
2
Networking Overview
3
Networking Basics
What is Internetworking?
“The art and science of connecting individual local-
area networks (LANs) to create wide-area networks
(WANs) and connecting WANs to form even larger
WANs. Internetworking can be extremely complex
because it generally involves connecting networks
that use different protocols. Internetworking is
accomplished with routers, switches, bridges, and
gateways”
4
Networking Basics (cont…)
5
OSI Model
7
TCP/IP Model
8
OSI vs TCP/IP
9
What is Risk?
Risk = Threat x Exposure x Vulnerability x Consequence
Threat: probability of an attack
Vulnerability: probability of an exploitable vulnerability
Exposure: probability a vulnerability is exposed to attack
Consequence: cost of a successful attack
Risk can be defined in any number of ways. The most common one
is probability of failure times cost of consequence. For this topic,
failure is a successful attack, and that probability can be broken
down further. Network security is about reducing risk and is
motivated by the fact that networked systems typically have greater
exposure and greater threats than non-networked systems. We’ll
look at some of the risk factors, and some of the mechanisms for
reducing risk.
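As a rough illustration of how the formula reads as an expected-loss calculation, here is a minimal sketch; the probabilities and the cost are made-up numbers, not figures from the course.

# Hypothetical numbers, purely to illustrate Risk = Threat x Exposure x Vulnerability x Consequence
threat = 0.30          # probability that someone attempts an attack this year
exposure = 0.50        # probability that the vulnerable component is reachable by that attacker
vulnerability = 0.20   # probability that an exploitable vulnerability exists
consequence = 100_000  # cost of a successful attack, e.g. in dollars

risk = threat * exposure * vulnerability * consequence
print(f"Expected annual loss: {risk:.0f}")   # 3000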
10
Threats
Networking changes the attacker’s risk analysis
More networked systems: more profitable targets
Benefit of an attack increases
Networking makes the attacker less visible
Reduced risk of capture
Networking increases pool of potential attackers
Networking itself mainly leads to increases in exposure, not in threats, but does have
consequences for threats as well. Attackers also perform risk analysis. They determine
if the potential gain of an attack is worth the cost and risk of being caught. By
attacking networked systems, they reduce the risk of being caught, and as more
systems become networked, there are more and more profitable targets out there. One
might say that increased consequences beget increased threats. Networked systems are
also exposed to a greater pool of attackers, which we’ll classify as an increased threat,
not an increased exposure. As the pool of attackers increases, the probability that one
will be motivated to perform an attack also increases.
Network security addresses threats by increasing the risk to the attacker.
11
Exposure
Non-networked systems becoming more networked
Systems become accessible to more attackers
Convergence on IP technology
Attackers have better understanding of the systems
Mobility, wireless and multi-access technology increasing
Easier to access devices than before
Networking increases exposure considerably. The increase in wireless technology
is making networks and devices more accessible to attackers. Where an attacker
previously might need to physically access a network, now a good antenna and
amplifier can be enough. Mobility and multi-access technologies also increase
exposure, as these devices are designed to silently connect to various networks.
An attacker can exploit this by tricking the device into connecting to a network the
attacker controls. Convergence on IP technology, e.g. in the telephone network
infrastructure and in control system networks, means that attackers now know a lot
more about the systems being used than they did before. For example, in process control
networks, one might previously have used proprietary protocols in HDLC
encapsulation over a serial line, but now moved to DNP3 over IP carried on an
unencrypted radio link.
Network security is traditionally all about reducing exposure.
12
Vulnerabilities
Weakness or flaw in software, hardware, or organizational
processes, which can be compromised by a threat.
Systems becoming more complex
Complexity breeds vulnerabilities
Non-networked systems becoming networked
No security focus in these systems
Security awareness is increasing
Modern software is often more secure than old software
Users are starting to understand that security is important
Networking in itself doesn’t really change the vulnerabilities a lot, but networking tends to encourage
complex interacting systems, and complexity is certainly a source of trouble. Another serious issue is that
non-networked systems are becoming networked, and these systems have rarely had any security to speak of.
Ideally, they would be secured before becoming networked, but that is not always the case. Even worse,
sometimes these systems become networked by accident. However, we’re also seeing some positive trends in
this area. Security awareness is increasing, and people are producing more secure systems today than they
were not long ago. Increased use of standard components means that efforts to reduce vulnerabilities can be
focused on these components. Unfortunately, it also means that vulnerabilities in popular standard
components can have an enormous impact. Network security doesn’t really do a lot for vulnerabilities. Here,
we need to look to secure programming techniques, and good system and network administration practices.
13
Consequences
Networked systems are increasingly critical
Even properties of networked systems can be
critical
Networked systems can be used to launch attacks
It is common for attackers to use systems they take
control over to attack other systems
Business continuity
Survivability
High availability
14
Consequences (cont….)
Networking in itself doesn’t change the consequences of an attack all that
much. However, systems with large consequences are becoming
networked, and networked systems are becoming more and more critical.
For example, in 1996, a company wouldn’t be affected adversely by their
website being offline for a few days. For several years now, businesses have
considered their websites a business-critical resource. More recently, the
position of a business website in major search engines has started to be
considered a critical issue as well. Thus, several years ago the
consequences of taking a website down were minor. Now, they are huge.
A few years ago, planting fake content on a website would be an
annoyance. Today, it can adversely affect search engine rankings. Another
potential consequence is that networked systems can be used to launch
attacks. It is common for attackers to use systems they take control over
to attack other systems. Aside from the obvious embarrassment this
causes, there can be legal repercussions. Network security doesn’t do a lot
to reduce consequences. Here, one has to look to other disciplines. Some
keywords would include ”business continuity”, ”survivability”, and ”high
availability”.
15
Some Attacks
Automated attacks
Worms and viruses
Extremely common
Targeted attacks
Aimed at specific targets
Performed with specific aim
Uncommon
Fallout
Side-effects of other attacks
16
Types of Attackers
Curious attackers: The original motivation for breaking into
systems is curiosity.
Ideological attackers: The ideological attacker uses attacks
on information systems to further a cause, e.g. by defacing websites of
businesses or governments they disagree with.
For-profit attackers: Make money from breaking into
systems, targeting systems that have value to them or their clients.
Corporate attackers: These are attackers with significant
resources backing them, who attack specific targets for specific
purposes.
Terrorists
Nation states
Insiders
17
Purposes of attacks
Break into systems
To steal information
To manipulate information
To use resources
Take control of systems
To perform new attacks
To manipulate systems
Disrupt service
To extort target
To discredit target
To facilitate other attack
18
Purposes of attacks(cont…)
There are lots of reasons for people to attack
information systems. The classic attack is the break-
in, usually followed by taking control of the system.
This is done for a variety of reasons: to access or
manipulate the information in the system; or to use
resources of the system (perhaps to launch new
attacks, to send spam, to host websites, commit fraud,
etc). Another common type of attack is the denial-of-
service attack, which can be used for extortion, to
discredit e.g., an online business or drive customers to
a competitor, or to facilitate attack techniques, such as
spoofing. The underlying reason for most serious
attacks is money, in one way or another.
19
Summary
Risk = Threat x Exposure x Vulnerability x Consequence
Evaluate threats, exposure, vulnerability,
consequence
Low-probability events difficult
Lack of information an obstacle
Reducing risk
The reason that risk is such an important concept is that it tells you what you need to do,
and should do, to protect a system. Not in detail of course, but in the abstract. A good
risk analysis will show what the threats are, what the exposure is, what the
vulnerabilities are and what the consequences are. Ideally, one would want to produce
a monetary value on risk, because then it becomes easy to determine if an effort to
reduce risk is really worth it. In some industries, this is exactly how it’s done.
However, this approach may require placing a monetary value on the environment or
on human life, which is not only a loaded topic – many people feel that it is immoral
to place a monetary value on human life – but also very difficult to do in a rational
way.
20
Network Security?
First of all, security awareness is very important.
Prevent security problems
Policies and organization to support security
Secure network design
Secure protocols
21
Network Security (Cont….)
Network security goes hand-in-hand with system security:
even if your network security is great, you need to make
sure that accounting, auditing, monitoring, access control,
and all that works on the systems too.
In order to do any of this, you need security awareness.
You need to know about risk. You need to know why
security is important. You need to know what’s happening
in the world, how the security situation is changing. You
probably have a bit of this already, or you wouldn’t be
here. But that’s not enough. When attempting to secure a
system or network, everyone involved needs to have
security awareness, or there will be problems.
22
Designing for Security
Prerequisites
Risk and security awareness
Accepted security policy
Network design
Multi-layered defense strategy
Principle of least privilege
System design
Strong access control
Strong software security
Accounting and auditing
23
Designing for Security (cont…)
Designing for security is the ultimate in prevention. If you don’t
design for security, you’ll probably be spending a lot of time
trying to patch systems and networks to bolt security onto what is
a fundamentally insecure design. This is often the case with
legacy systems that have been in place since before security
was a concern. The problem with this is that it’s very difficult to
plug all the holes, and if you don’t get them all, someone will
eventually find the open ones. Before even starting to design for
security, we need risk and security awareness, and we need an
accepted security policy. The security policy states the goals of
the design, and is based on the risk assessment. It must have
wide acceptance, or there is a large risk that it will be ignored or
circumvented by users. You also need to make sure that the
network isn’t the only thing designed for security: each and every
system also needs to be secure.
24
Designing secure networks
Network segmentation
Different zones for different functions
Contains threats to specific resources
Perimeter defense
Protects the borders between network zones
Network containment
Limits network to known extent
25
Designing secure networks (cont…)
When designing a secure network, there are any number of things to keep in
mind. Here are some of the big ones.
By network segmentation, I mean creating a multi-layered security
architecture by dividing the network into different parts, with barriers between
them. The reason for doing this is to reduce exposure by limiting network
access to those people and systems that need it. If the entire network is a
single segment, then everyone can access everything. By dividing the
networks into pieces, we limit the domain that an individual system can
access. Often, the result is a shell-like structure, where the innermost
network is the least accessible, and the further towards the perimeter we
get, the more easily accessible the network is.
By perimeter defense, I mean the barriers between segments. These protect
the network from attacks from the outside. Typically, perimeter defense will
consist of a firewall and an intrusion detection system.
Finally, by network containment, I mean limiting the network to a known
extent. Surprisingly often, networks reach to where nobody thought they
reached. This is true for wired networks, but doubly so for wireless networks.
26
Network segmentation
27
Network Segmentation (cont…)
This is a typical network design. This particular design is for a business
that has a networked control system, such as a power grid, or a process
control system. There are four layers here: the outer DMZ, which is
accessible from the Internet, and provides public services, the office
LAN, which is the main business network for the company, and is not
accessible from the Internet, the inner DMZ, which is accessible from
the office LAN and the SCADA LAN, and the SCADA LAN, which is not
accessible from the other networks at all.
This design implements multi-layer security through the various
perimeters, and implements the principle of least privilege: internet
access is limited to those systems that provide services to the Internet;
the office LAN can get data from the SCADA LAN only through
services on the inner DMZ, and so forth.
28
Separation mechanisms
Airgaps
Physically disconnected network segments
No integration between networks
Firewalls
Devices that can block disallowed traffic
Tunable integration between networks
Routers
Devices that forward traffic between networks
Not for segmenting networks for security
Bridges
Some people say you can use bridges to segment networks
You can’t in any useful way for security
29
Separation mechanisms (cont…)
So, how do we separate network segments?
There are several ways. The strongest form of separation is an airgap:
simply don’t connect the networks. In my experience, Ethernet packets
have a really hard time jumping from one cable to the next, so this is pretty
secure. The most common mechanism is a firewall, which is essentially a
router that has rules for which traffic it allows through.
In some businesses, where security hasn’t penetrated yet, and in some
places where people simply don’t know any better, you may see
recommendations to separate networks with routers or even switches.
That’s complete poppycock.
Routers and switches are built to connect, not separate networks, and are
not useful for security.
30
Airgaps
No physical connection
No traffic can flow
Complete security!
31
Airgaps (cont…)
The ideal separator is the airgap. But in reality, airgaps
often don’t work. The main reason is that we generally
need to transfer data to and from airgapped networks,
and if we can transfer data, it is possible to attack the
network. Perhaps not easily, but it is possible. And if we
transfer data regularly, chances are that we’ve
implemented a convenient way to do so, and that makes
attacking the network easier.
For most applications, ensuring that the airgap works is
simply too difficult for it to be reliable.
32
Does that airgap really exist?
Airgaps don’t always exist
Temporary connections
Misconfigurations
Dual-homed systems
Why?
Poorly understood policy
Design does not support
Business needs
33
Does airgap really exist? (cont…)
Airgaps can be missing from reality even if they exist in design.
In fact, most networks that are thought to be airgapped aren’t.
The problem? Other security mechanisms are designed with the
airgap in mind.
Examples
Temporary connections for software updates, system installation,
maintenance, emergencies don’t always get taken down, and
temporary can be enough for an intruder.
Misconfigurations of e.g. switches where a ”virtual” airgap is
created by partitioning the switch or using VLANs can create a
connection.
Dual-homed hosts, e.g. database or IT support servers often
exist.
34
Laptops defeat airgaps
35
Laptops defeat airgaps (cont…)
Here’s a classic example of how airgaps can be
defeated through simple carelessness. An operator
uses a laptop at home, or at a cafe to access the
Internet, and gets infected with a worm. The
operator then moves to the protected network, and
connects. The worm begins to spread on the
protected network. In essence, the laptop is a time-
lapse network connection. This scenario is not
unusual, and there have been instances where
exactly this happened.
36
Good network management defeats
airgaps
Management LAN
Restricted LAN used for network device management
Must be (and usually is) well protected
Does it extend to the protected network?
Virtual switches and virtual LANs
One physical device implements several networks
As long as the device is secure, the networks are separate
Is this used to implement a separated network?
37
Good network management defeats
airgaps (cont…)
Surprisingly, good network management practices work against
airgapped networks. Network managers like having the entire
network at their fingertips, and often do so by using virtual LANs.
These are LANs that are logically disconnected, but may run on
the same wires and equipment. Network managers also like
having a management LAN from which they can reach all the
network equipment. The management LAN is typically a virtual
LAN, and is usually only accessible from a small number of
places. Nevertheless, this LAN essentially connects all networks
with each other – an attacker could instruct routers to start
sending data over it – and if any ”airgapped” networks use
equipment accessible from the management LAN, they are
connected, in a way, to all the other networks.
38
Airgap conclusions
Yes, airgaps offer excellent separation
But in practice they are hard to maintain, and rarely hold up
39
Firewalls
A firewall is a computer that can act as a router,
and can filter traffic passing through it based on a
set of rules.
It restricts traffic flows from inside to outside
It restricts traffic flows from outside to inside
Used to
Enforce network security policy
Enforce network partitioning
Abused as
An excuse not to secure the inside
40
Firewalls (Cont…)
Firewalls are routers with rules that specify what
traffic they let through. Firewalls are used to
enforce network security policy – who may
speak to whom, using what protocols, and what
applications – which means they enforce
network partitioning. Firewalls are also often
abused as an excuse for shoddy security on the
inside: the typical reasoning is that if you have a
firewall, then everything behind it is safe. Well,
that’s not true, and we’ve known that it’s not true
for a very long time.
41
Firewall rules
Traffic criteria
Source and destination address, source and destination port,
protocol, physical interface, rate…
Typically not application-level information
Action to take
Allow traffic to pass
Drop traffic without notification
Reject traffic with notification to source
Policy
Actions for traffic that does not match any criteria
42
Firewall Rules (Cont…)
Firewalls contain a set of rules that say what to do with
traffic. A rule usually consists of a set of criteria, such as
source address or destination port, and an action to take if
the criteria match. Typically, criteria are things that can
be seen in network or transport layer headers – not
things at the application layer. There are exceptions: you
can build a firewall that operates on the application layer,
but it will be slower and require more resources than one
that doesn’t. Finally, any firewall needs to decide what to
do with traffic that doesn’t match any criteria. This is
called a policy.
43
Firewall Rule Example
44
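A minimal sketch of how such rules combine traffic criteria, actions and a default policy; the rules, addresses and ports below are hypothetical, not taken from the slide.

# Hypothetical rule set: each rule pairs traffic criteria with an action;
# the policy applies to traffic that matches no rule.
RULES = [
    {"proto": "tcp", "dst": "192.0.2.10", "dport": 80,   "action": "ALLOW"},   # DMZ web server
    {"proto": "tcp", "dst": "192.0.2.11", "dport": 25,   "action": "ALLOW"},   # DMZ mail server
    {"proto": "any", "dst": "10.0.0.5",   "dport": None, "action": "REJECT"},  # reject with notification
]
POLICY = "DROP"   # drop without notification

def decide(packet):
    for rule in RULES:
        if (rule["proto"] in ("any", packet["proto"])
                and rule["dst"] in ("any", packet["dst"])
                and rule["dport"] in (None, packet["dport"])):
            return rule["action"]
    return POLICY

print(decide({"proto": "tcp", "dst": "192.0.2.10", "dport": 80}))   # ALLOW
print(decide({"proto": "udp", "dst": "192.0.2.10", "dport": 53}))   # DROP (policy)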
Network address translation
Rewrite addresses on packets through the
firewall/NAT box
Addresses on inside rewritten to address on outside
Allow hosts with private addresses to access outside networks
Prevents direct connections to NATed systems
45
NAT (cont…)
Network address translation is a technique that allows you to use private
address space and still get limited connectivity with the Internet. The theory
is that you can connect to stuff, but stuff can’t connect to you. Many people
consider that a security feature: if the hacker can’t connect to you, the
hacker can’t attack you.
To some extent this is true, but every time a host connects out, it is at risk.
It could be attacked by malicious websites or other services, and once
compromised, it can initiate attacks behind the NAT gateway,
compromising nearby systems. The other problem is that it is occasionally
possible to penetrate NAT gateways. Getting arbitrary connections through
is generally hard today (not always impossible), but it certainly is possible
for an attacker to connect to a compromised host without the connection
being initiated from the inside.
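A minimal sketch of the idea: outbound connections create entries in a translation table, and unsolicited inbound packets find no entry and are dropped. The addresses and ports are hypothetical.

# Hypothetical NAT gateway state: public port -> (private address, private port)
translations = {}
next_public_port = 40000
PUBLIC_ADDRESS = "203.0.113.1"

def outbound(private_ip, private_port):
    """An outgoing connection gets its source rewritten to the gateway's public address."""
    global next_public_port
    public_port = next_public_port
    next_public_port += 1
    translations[public_port] = (private_ip, private_port)
    return (PUBLIC_ADDRESS, public_port)

def inbound(public_port):
    """Inbound packets are forwarded only if they match an existing translation."""
    return translations.get(public_port)   # None means drop: nobody can connect in uninvited

src = outbound("192.168.1.20", 51000)      # a host on the inside connects out
print(inbound(src[1]))                      # reply traffic matches the entry and is forwarded
print(inbound(12345))                       # unsolicited inbound traffic: None, dropped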
46
NAT Penetration
47
NAT Penetration (cont…)
So, you have a firewall. It does NAT for all the hosts behind it. That way an attacker
can’t connect to the hosts. Not only that, but you’ve limited outgoing connections to
HTTP to specific sites. You don’t even allow DNS to go out. That goes through a
corporate proxy server. So even if a host is compromised, it’s of no use to the
attacker. Right? They can’t connect to it. It can’t connect to them. Wrong. And if you
haven’t limited outgoing connections, VERY wrong. But even if you have, you do have
a channel out, via DNS. Your clients can send queries, which are resolved by the
server. Now, that means you can send data to a client if the client initiates contact.
And DNS allows you to send arbitrary data by using TXT records or by encoding in
other kinds of records. This much is straightforward, but what about connecting to a
client on the inside? You need a couple of things. First, you need a DNS server that
can reach the inside of the network.
This will typically be the corporate resolver. Second, you need to trick someone or
something on the corporate network to make a legitimate DNS query for e.g. a PTR
record (to map an IP address to a name). This could be the web server, the IDS or
even a human. Third, you need the internal address of the owned client. Finally, you
need to control a name server.
Let’s say you get the corporate DNS to initiate a query for 17.189.236.130.in-
addr.arpa, and you control that DNS zone. When the query reaches your server, you
respond with a CNAME containing the information you want to send to your owned
host. The information is a name in another zone you control. When the query reaches
you, you reply with a delegation to the owned host. The resolver, which can reach that
computer, will forward the question there, and you’ve just succeeded.
48
Some firewall concerns
Only as good as its configuration
Studies have shown that firewalls are very often misconfigured
Typical IT testing is not particularly thorough
Firewall weaknesses
Firewalls provide little protection from insiders
Firewall failure can lead to network failure
Firewalls may have vulnerabilities that intruders can exploit
49
Some firewall concerns (Cont…)
Firewalls are good tools, there’s no doubt about that. But it’s
important to realize that a firewall is only as good as its
configuration, and studies have shown that many firewalls are
misconfigured in some way. One reason for this is that typical IT
testing practices are lax, and that many firewalls are set up without
giving proper thought to the security policy beforehand.
50
Firewall conclusions
A very useful security tool
They can provide perimeter security
They can implement security policy
You should probably have one (or more)
Management required
Careful design and configuration
Careful management and monitoring
One link in the security chain
Firewalls are not an excuse for weak links elsewhere
51
Trust relationships
trust n.
1. reliance on the integrity, strength, ability, surety,
etc., of a person or thing; confidence.
2. confident expectation of something; hope.
Trust relationships bridge gaps in our systems
Trusted systems are given additional access or
rights
Trusted systems may provide data we rely on
Trust relationships are potential vulnerabilities
What if the trusted system misbehaves?
52
Trust relationships (cont…)
When we segment networks, we still need to allow
some communication between them, and within a
segment, systems will rely on each other, or
share sensitive information. These are all
examples of trust relationships. Essentially, trust
relationships are what allow systems to
collaborate.
The problem is that by their nature, trust
relationships can be exploited to attack a system.
If a system A trusts B to behave in a particular
way, and B doesn’t, this can have adverse effects
on A.
53
Examples of trust relationships
Use of a DNS server
Trust the DNS server to convert name to correct address
Trust the DNS server not to send malformed data back
Use of a directory server for authentication
Trust directory server to provide correct authentication
data
Use of shared passwords
Trust others (systems or users) not to divulge the secret
Firewall rules
Trusted systems may communicate with each other
54
Trust relationships: remote access
scenario
55
Trust relationships: remote access
scenario (cont…)
Here’s a scenario from the real world. In this case, we have a company that runs a control
system using the MODBUS protocol. The SCADA LAN is used for this. MODBUS is an entirely
insecure protocol. The SCADA LAN is separated from the business LAN by a firewall, and the
business LAN from the Internet with another firewall. We allow a software developer to access
a workstation on the business LAN from a Windows XP computer in his (or her) home, via a
virtual private network. The VPN is correctly configured so that it will never connect to the
wrong network. The firewall allows users on the business workstation to connect to the
engineering workstation using PC Anywhere.
There are lots of trust relationships in this scenario. The devices on the SCADA LAN trust the
engineering workstation to send only valid commands. The engineering workstation trusts the
business workstation. The business workstation trusts (to some extent) the Windows machine
in the developer’s home. That machine trusts the ISP’s DNS server and the ADSL modem. So
what’s the problem?
Well, what if the ISP’s DNS or the ADSL modem were compromised to direct the Windows
machine to a website with malicious code when the user browses the Internet? The malicious
code sits on the Windows machine until the VPN is set up, whereupon it attacks the business
workstation (and any other systems on the business LAN it can reach). It also watches for PC
Anywhere connections and exploits them to compromise other computers. Here, the chain of
trust relationships could allow an attacker entry to a very well protected, and sensitive, system.
56
Trust Relationships
57
Trust Relationships (cont…)
This is an example of a fairly typical architecture. Here there are 40
workstations that all have the same administrative password (the blue
cluster). All are connected to each other.
There are three file servers. Two are connected to the workstations, so
the workstations trust those file servers. Compromising a file server will
compromise all workstations. The third file server serves two database
servers. The database servers trust the file server, and the file server
has the same administrative password as the two database servers do.
One of the database servers is connected to a cluster of web servers.
The web servers share an administrative password, and have the ability
to connect to the database server using a privileged account.
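A minimal sketch of the chain-of-trust point, loosely modelled on this architecture (the host names and edges are illustrative): treat "A trusts B" as an edge and see what is endangered when one host is compromised.

# "A trusts B" means compromising B endangers A. Edges loosely follow the slide:
# workstations trust two file servers; the third file server and the database
# servers share an admin password; one database trusts the web servers.
trusts = {
    "workstations": {"fileserver1", "fileserver2"},
    "db1": {"fileserver3", "webservers"},
    "db2": {"fileserver3"},
    "fileserver3": {"db1", "db2"},
    "webservers": set(),
}

def endangered_by(compromised):
    """All hosts whose direct or transitive trust in the compromised host exposes them."""
    exposed, frontier = set(), {compromised}
    while frontier:
        frontier = {a for a, deps in trusts.items() if deps & frontier} - exposed
        exposed |= frontier
    return exposed

print(endangered_by("webservers"))   # db1, fileserver3 and db2 -- the chain reaches far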
58
Trust relationship conclusions
A necessary risk
Trust relationships are a business necessity
Trust relationships lead to exposure
The question is whether the exposure is motivated
by the needs of the business or not
What to do
Map existing trust relationships
Eliminate trust relationships that do not serve a
business need
Evaluate exposure caused by trust relationships
Mitigate those leading to unacceptable exposure
59
Backdoors: bypassing the gaps
A hardware or software based secret entrance into a
computer system that bypasses security controls
Back doors allow intruders to bypass perimeter
security
The firewall will not help – it has been bypassed
The network IDS may detect nothing – it is designed
to examine
traffic coming through the front door
Back doors may bypass access control and
application security
60
Backdoors (cont…)
The traditional back door is something in the
authentication software that lets people through.
Backdoors are a real problem, as they create a
path into the network that may bypass perimeter
controls. They may even bypass application
security and access control.
For this presentation, we take a somewhat broader
view of what constitutes a backdoor.
61
Backdoor examples
HP telephone support account (telesup/no password)
Installed at financial institution w/o company knowledge
Secure Shell software modified to accept any
password
Installed by intruder on academic system
3Com CoreBuilder hard-coded password
(debug/synnet)
Installed by manufacturer for use by support staff
Lexar PBX hard-coded password
Customers required to maintain dialup line
62
NOT-so-Secret Backdoors
63
NOT-so-Secret Backdoors (cont…)
Here’s an example from the wild world of control systems. We have
a computer controlling a number of remote terminal units, which
in turn control (or listen to) controllers. The computer is
accessible through a dial-in modem, placed there for vendor
support. The computer communicates using a radio link (running
various protocols in IP) with the control system LAN. This exact
architecture is present in many substations around the world.
The problem is that the modem provides a back door into the
system. Even if there is a password (which there usually isn’t), it
can be guessed, and that lets an attacker onto a network without
going through any kind of perimeter protection.
64
NOT-so-Secret Backdoors (Wireless)
65
NOT-so-Secret Backdoors (Wireless)
Here’s a more complex example. Think of the
SCADA LAN as any protected LAN. It could be
for e.g. financial transactions, control systems,
system management, or pretty much anything.
In this example, there are lots of entry points to the
network, and none of them protected by the
firewalls.
66
Back door conclusions
Back doors are very common in complex systems
Things get forgotten or misplaced, and mistakes are
made
Back doors are very dangerous
They break the assumptions security is based on
Sometimes you do need them
But be aware of the risks they pose!
67
Wireless Networks
68
IEEE 802.11 a,b,g,n,ac,ax
Operates in the ISM Band at different data rates
Infrastructure vs Adhoc networking
69
IEEE 802.11 a,b,g,n,ac,ax
Integrated security feature (WEP/WPA/RSN)
70
IEEE 802.11 a,b,g,n,ac,ax (cont…)
There are a number of related IEEE 802.11 standards. All operate in the ISM bands
(Industrial Scientific Medical).
The original standard, defined in 1997, operated in the C-ISM band, around 2.4GHz.
Two separate PHYs were specified: one with an FHSS (frequency hopping) PHY and
one using a DSSS (direct sequence) PHY. The original standard provided for a
throughput of up to 2Mbit/s. The next standard, defined in 1999, was 802.11a, which
operates in the S-ISM band around 5 GHz. This standard allows transmission rates of
up to 54Mbit/s, and uses an OFDM PHY. Products based on 802.11a were not
released until 2000 but are fairly common and affordable today. The third standard
was 802.11b, which uses a HR/DSSS PHY operating in the 2.4Ghz band. The most
recent standard is IEEE 802.11g, which uses an OFDM PHY operating around
2.4GHz, providing throughput of up to 54GBit/s. The next step in speed is 802.11n
that promises 200+Mbit/s over the air performance. IEEE 802.11 supports two modes
of operation: independent (or ad hoc) and infrastructure networks.
Security was a top concern when IEEE 802.11 was defined. The standard includes an
optional protocol called WEP that is designed to provide the same level of security as
a wired network.
71
Infrastructure Network
72
Infrastructure Network (cont…)
The previous slide shows the basic concepts in an IEEE 802.11 network:
BSS, ESS, APs, DS, Associations and stations.
AP means Access Point, and is the bridge between the wireless network and the distribution
system. The access point is responsible for relaying traffic from the
distribution network to stations that are associated with it, for managing
associations and reassociations, for relaying traffic from stations to the
distribution network and frequently for authenticating stations on the network.
BSS means Basic Service Set, and is the service provided by a single access
point.
ESS means Extended Service Set, and is the service provided by several
access points connected by a distribution system. Stations can roam
between the BSSes that make up the ESS.
The distribution system is a network connecting access points. The
distribution system is used to relay traffic from stations within one BSS to
stations within another BSS. It is also used to relay traffic to and from external
networks. The distribution system is also used by the access points to
communicate with each other, e.g. when a station moves from one BSS to
another within the same ESS.
An association is a logical connection between a station and an access point.
When signing on to the network, a station will associate with an access point
within range. Traffic between the station and hosts outside the same BSS will
go through the access point the station is associated with.
73
Adhoc Mode
74
WiFi Security Standards
WEP (Wired Equivalent Privacy)
First security standard – broken by design
Today: 5-10 minutes to break into a WEP network
WPA (WiFi Protected Access)
Probably mostly secure by design – may be vulnerable to DoS
Problems with deployment using pre-shared keys
RSN/WPA2/IEEE 802.11i
Probably mostly secure by design – new everything
Still has issues with forged management messages
75
WiFi Security Standards (cont…)
The WiFi security standards have evolved significantly. From WEP, which
was supposed to provide security equivalent to a wired network, we’ve
moved to WPA, which is based on WEP but actually does provide some
security, and then to WPA2 (RSN, IEEE 802.11i), which provides fairly strong
security.
Throughout all this, one thing has remained constant: the management
frames, which are used to control the network, are not protected in any
way – they can be forged, manipulated, anything.
The supposed reason for this is that what you can do by messing with
management frames can be done by messing with the radio. The problem
with that is that messing with the radio is harder, and easier to detect. I
think the real reason is that the industry has painted itself into a corner of
backwards compatibility: if the protocol changed drastically, old
equipment would stop working.
76
WEP (Wired Equivalent Privacy)
77
WEP (Cont…)
Wired Equivalent Privacy was supposed to provide the same level of
security that wired networks have. With WEP, frames on the network
started with a frame header containing information such as the source and
destination address of a frame and an Initialization vector (IV), which
contained the 24 first bits of a 64 bit long RC4 key. The remaining 40 bits
are a secret, shared between the access point and station (in fact, each
station can have four keys per access point, but in practice, only one is
used). The data and an integrity check vector, which is simply a CRC32
over the data, are encrypted using RC4.
This doesn’t look too bad. Encryption using a pretty good algorithm
protects the data from prying eyes. The key length is a bit short, but fixing
that shouldn’t be a huge problem. There’s an encrypted checksum to
protect the data from being tampered with. And with the IV, not every
frame will be encrypted with the same key, which is also a good thing.
As it happens, WEP is utterly broken. We’re going to look at some of the
ways to break it, since they’re instructive.
78
WEP (Cont…)
79
WEP (cont…)
The most serious problem is that RC4 has something called weak keys. If a frame is encrypted with a
weak key, part of the key is known and the first byte of the frame is known, that gives the attacker some
information on they entire key. This isn’t a huge problem if it’s hard to recognize when a weak key is used
or if the first byte is unknown. Unfortunately for WEP, the first 24 bits of a key, the bits that are transmitted
in clear, give the attacker enough information to attack weak keys. Furthermore, the first byte of a frame is
almost always the same, so the attacker knows the first byte in the keystream as well. The attack scales
linearly with respect to key size, so using longer keys won’t help. Even if the key can’t be recovered, it is
possible to recover every possible keystream. Since each frame uses up to 1500 bytes of data, and there
are 2^24 distinct key streams, the total storage requirements for all possible keystreams is about 24GB,
which isn’t large by current standards. It is possible for an attacker to recover keystreams (or partial
keystreams) by looking for frames with known contents. Note that an active attacker can generate such
frames at will by transmitting data to a station. Even when this is not possible (such as when the AP
requires WEP), an attacker can exploit a vulnerability in the protocol to recover all keystreams. This attack
is called inductive chosen plaintext, and does require some time to launch. It doesn’t always work since it
depends on the ability to send very short frames, and some APs drop short frames, but it can work, and is
an interesting case of using the protocol to break itself.
There are other problems as well. Standard WEP authentication is susceptible to replay attacks. The AP
sends a challenge to the STA, which encrypts and returns it. The problem is that this allows the attacker to
extract the keystream used for authentication. If the attacker wants to authenticate, he can simply
combine that keystream with any new challenge.
The integrity protection feature is also utterly broken. The checksum and encryption both are linear
functions. This makes it possible for an attacker to alter a message and ICV without knowing the
keystream; flipping a bit in the message results in a predictable flip of bits in the ICV, which are reflected
as bit flips in the encrypted frame. So doing things like changing the destination address of encrypted TCP
frames can be done without even decrypting them!
Finally, WEP does nothing to prevent denial of service attacks. This is intentional, but nonetheless
somewhat annoying.
80
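A hedged sketch of the bit-flipping weakness just described. A random byte string stands in for the RC4 keystream, the message and the modification are made up, and the ICV fix-up relies on the linearity of CRC-32: crc(p xor d) = crc(p) xor crc(d) xor crc(zeros of the same length).

import os
import zlib

def wep_like_encrypt(keystream, plaintext):
    # WEP appends a CRC-32 ICV to the data and XORs the result with the RC4 keystream.
    icv = zlib.crc32(plaintext).to_bytes(4, "little")
    return bytes(k ^ p for k, p in zip(keystream, plaintext + icv))

plaintext = b"PAY 100 EUR TO ALICE"
keystream = os.urandom(len(plaintext) + 4)          # stand-in for the real RC4 keystream
ciphertext = wep_like_encrypt(keystream, plaintext)

# The attacker knows neither the key nor the keystream. They pick the change they
# want (bytes 15-19: ALICE -> MALLO) and fix up the ICV using the CRC-32 linearity.
delta = bytearray(len(plaintext))
delta[15:20] = bytes(a ^ b for a, b in zip(b"ALICE", b"MALLO"))
icv_delta = zlib.crc32(bytes(delta)) ^ zlib.crc32(bytes(len(delta)))
forged = bytes(c ^ d for c, d in zip(ciphertext, bytes(delta) + icv_delta.to_bytes(4, "little")))

# The receiver decrypts the forged frame and the ICV still checks out.
decrypted = bytes(k ^ c for k, c in zip(keystream, forged))
data, icv = decrypted[:-4], decrypted[-4:]
assert icv == zlib.crc32(data).to_bytes(4, "little")
print(data)                                          # b'PAY 100 EUR TO MALLO'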
Wi-Fi Protected Access (WPA)
Flaws in WEP known since January 2001 - flaws include weak encryption
(keys no longer than 40 bits), static encryption keys, lack of key
distribution method.
In April 2003, the Wi-Fi Alliance introduced an interoperable security
protocol known as WiFi Protected Access (WPA).
WPA was designed to be a replacement for WEP networks without
requiring hardware replacements.
WPA provides stronger data encryption (weak in WEP) and user
authentication (largely missing in WEP).
WPA Security Enhancements
WPA includes Temporal Key Integrity Protocol (TKIP) and 802.1x
mechanisms.
The combination of these two mechanisms provides dynamic key
encryption and mutual authentication
TKIP adds the following strengths to WEP:
Per-packet key construction and distribution:
WPA automatically generates a new unique encryption key periodically for
each client. This avoids the same key staying in use for weeks or months as
they do with WEP.
Message integrity code: guard against forgery attacks.
48-bit initialization vectors, use one-way hash function instead of XOR
WPA
Longer RC4 keys
Avoid weak keys
Hide keys better
New integrity check
Replay protection
802.1x authentication
83
802.11i
Uses AES, not RC4
Longer keys
New integrity check
Replay protection
802.1x authentication
84
WPA and 802.11i
WPA is an interim standard for secure wireless LANs that is beginning
to become popular. It can be implemented on WEP hardware, which
means there is an upgrade path for legacy equipment. WPA fixes a
number of problems with WEP. It has longer keys, so exhaustive key
search is no longer possible. It attempts to avoid weak keys by
combining the IV, other public material and the secret key in a smarter
way. It hides information about the keys being used by using a key
mixing function. It includes an all-new integrity check and
countermeasures to ensure that something like the inductive plaintext
attack will fail. It mandates 802.1x port-based authentication.
802.11i is the new security standard. Unlike WPA, it will require
hardware upgrades. One of the most important features is that RC4 has
been replaced by AES, which should be a much more secure cipher.
AES is run in “counter mode” to transform it from a block cipher to a
stream cipher. 802.11i also mandates new integrity checks and 802.1x
port-based authentication. All in all, WPA and 802.11i both look like
really good solutions. The one thing that’s still a problem is that
management frames are unprotected. This enables attackers to do nasty
things by spoofing them.
85
802.1x Authentication
86
802.1x Authentication (Cont…)
The 802.1x standard was originally designed for wired networks and remote access systems,
but the model is useful for wireless networks as well. The fundamental concept is the port. This
is not a physical port, but a logical connection in the authenticator system, which in the case of
wireless networks is usually the access point. At the outset, when a device wants to
communicate, it is connected to a port that only gives access to authentication services. The
device is identified by its MAC address. At this point, the device, called a supplicant, can
communicate with the authenticator, which it does using EAP messages (EAP is the Extensible
Authentication Protocol). EAP is nice because it doesn’t mandate a particular method of
authenticating. It’s possible to plug pretty much any kind of authentication in to the system. The
authenticator in turn communicates with an authentication server, which is typically a RADIUS
server (but could be something entirely different). Once authentication is complete, the
authenticator switches the supplicant over to a port that gives access to the services the
authenticator offers (typically access to the DS).
802.1x is pretty good, and with a suitable EAP method, is secure, but it does have a problem.
The problem is that there is no tight coupling between the supplicant and the authenticator; it is
possible to get them out of sync. An attacker can, once the supplicant is authenticated, forge a
disassociate message to the supplicant and then, using the supplicant’s MAC address,
communicate with the authenticator. The authorized supplicant has already authenticated, so
the port is open. This attack is only feasible if data communications are unencrypted, so 802.1x
authentication should always be combined with data encryption. Some implementations of
802.1x are subject to authentication method downgrading. This is an attack in which the
attacker convinces the authenticator or supplicant to use an authentication method that isn’t
very secure. For example, an attacker spoofing an access point could advertise only
authentication methods that don’t provide mutual authentication. This is permitted by 802.1x,
and a supplicant that isn’t configured to refuse to authenticate under these circumstances might
authenticate to the attacker’s fake AP.
87
WPA has problems
WPA-PSK (pre-shared key)
Dictionary attack on PSK
First four frames of authentication sent in the clear
Many WPA passwords are weak
WPA-PSK cracking
coWPAtty: 30 keys/sec
With precomputed tables: 18000 keys/sec
DoS possible
88
WPA has problems (cont…)
WPA also has problems. In particular, the initial EAP exchange is sent
in the clear, and using this information an attacker can launch a
dictionary attack against the WPA password. Since many users have
weak passwords, this attack is often successful. Tools exist that make
the job simple. This attack is often not considered very serious, because
good passwords defeat it, and even with bad passwords, it takes a
considerable amount of time to test for them. However, recently
precomputed password tables have been published that make the job
much faster. These tables have all passwords up to length N computed
for the R most common SSIDs (WPA-PSK seeds the hash with the
SSID). The MIC countermeasures have an obvious potential for DoS. If
you manage to create two invalid MICs within a minute, everyone gets
kicked off the AP. Supposedly this is very difficult, but there is a tool
that does it. The tool has not been released for undisclosed reasons, but
you can expect that the method will become widely known soon.
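A hedged sketch of the dictionary-attack idea. The key derivation (PBKDF2-HMAC-SHA1 over the passphrase and SSID, 4096 iterations) is the standard WPA-PSK derivation and explains why precomputed tables are built per SSID; the direct comparison of master keys below is a simplification, since a real attack checks candidates against the MICs in a captured four-way handshake.

import hashlib

def wpa_psk(passphrase, ssid):
    # The pairwise master key is derived from the passphrase and the SSID.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(candidates, ssid, target_pmk):
    for password in candidates:
        if wpa_psk(password, ssid) == target_pmk:
            return password
    return None

# Illustrative use: a weak password falls to a small wordlist.
pmk = wpa_psk("letmein123", "CorpWiFi")
print(dictionary_attack(["password", "letmein123", "hunter2"], "CorpWiFi", pmk))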
89
WPA2
In July 2004, the IEEE approved the full IEEE 802.11i specification,
which was quickly followed by a new interoperability testing
certification from the WiFi Alliance known as WPA2.
Strong encryption and authentication for infrastructure and ad-hoc
networks (WPA1 is limited to infrastructure networks)
Use AES instead of RC4 for encryption
WPA2 certification has become mandatory for all new equipment
certified by the Wi-Fi Alliance, ensuring that any reasonably modern
hardware will support both WPA1 and WPA2.
Security in IP Level Protocols
ICMP
Error and control messages
Information requests
Not designed for security
Inappropriate control messages
Source quench
Redirect
Inappropriate requests
Timestamp request
Netmask request
91
ICMP (cont…)
ICMP was not designed with security in mind, and as a result,
there are ICMP messages that are inappropriate today. Many
systems support these, but in most, they are ignored by
default.
For example, the source quench message tells the recipient to
slow down sending. If this is obeyed, source quench messages
could be used to launch an effective, low-bandwidth DoS
attack. The redirect message tells a system to send traffic destined for a
particular system via a specified router. Again, a bad idea, as this
allows an attacker to directly manipulate the routing tables on
the target, in order to launch a DoS, MITM or other attack.
Even information requests can be inappropriate, as they reveal
information about systems and networks that outsiders have no
business asking for.
92
IP Fragmentation
Fragmentation is needed when the datagram is larger than
the path MTU. Fragmentation is an integral part of IP.
Identification
Which IP datagram is this?
Flags
Are there fragments?
May I fragment you?
Fragmentation offset
Which fragment is this?
93
IP Fragmentation (Cont…)
Fragmentation becomes necessary when a datagram exceeds the MTU of the path between
sender and receiver. In the IP header, fragmentation is supported by the identification, flags
and fragmentation offset fields.
The identification field is set by the sender, or when set to zero by the sender, set to a different
value by the entity that fragments the datagram. The flags control fragmentation. The
fragmentation offset indicates where in the datagram the fragment belongs. To reassemble a
fragment, the entity doing the job has to be able to recognize whether or not a datagram has
been fragmented, which fragments belong together and in what order. It also has to be able
to determine when all fragments have been reassembled.
The identification field together with the source address, destination address, and protocol fields
identifies each datagram. All fragments in a datagram share the same identification value. It’s
important not to re-use the same identification value for more than one ”live” datagram at a
time, or you may end up piecing together fragments from different datagrams at reassembly.
The flags field contains two flags (the high bit is reserved). The middle bit is called ”DF” and
means ”don’t fragment”. If a datagram with this flag set exceeds the MTU of a link, the
network has to signal a failure. It may not fragment the datagram and send it on. It has to
drop the datagram and respond with an ICMP fragmentation needed message. The low bit is
the ”more fragments” bit. When set, it indicates that the packet is not the last fragment of the
datagram.
The fragmentation offset indicates where in the datagram the fragment belongs. It is an offset
(counted in units of 8 octets) from the start of the datagram. Here’s an important question:
why use an offset and not just number the fragments? Consider the following facts: packets
can be sent down multiple paths; packets can be duplicated; the smallest MTU may not be
on the first link.
94
Fragments reassembly example
95
Fragments reassembly (Cont…)
This is a simplified example of reassembly. I’ve assumed that the ID
field contains all the identification necessary, and I’ve ignored the
details of how the header of the datagram is handled. Reassembly
is pretty straightforward. The receiver has a buffer in which the
datagram is reassembled. When a packet arrives, it is placed in
the buffer. If it is not fragmented, the datagram is ready and is
delivered to the next processing step.
If the datagram, however, is a fragment, it is placed at the
appropriate position in the buffer. The header is taken from the
first fragment.
Fragments can arrive in any order. That poses no problem for
reassembly.
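A minimal sketch of the reassembly procedure just described, under the same simplifying assumptions (a single datagram, identification handled elsewhere, header handling ignored).

# Hedged sketch; a real stack keys the buffer on (source, destination, protocol, identification).
def reassemble(fragments):
    """fragments: iterable of dicts with 'offset' (in 8-octet units),
    'more_fragments' (bool) and 'data' (bytes), arriving in any order."""
    buffer = bytearray()
    total_len = None
    for frag in fragments:
        start = frag["offset"] * 8              # offsets are counted in units of 8 octets
        end = start + len(frag["data"])
        if end > len(buffer):
            buffer.extend(bytes(end - len(buffer)))
        buffer[start:end] = frag["data"]
        if not frag["more_fragments"]:          # the last fragment fixes the total length
            total_len = end
    # Note: no check for holes or for the 65535-byte limit -- which is exactly what
    # naive implementations got wrong (see the ping of death below).
    return bytes(buffer) if total_len == len(buffer) else None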
96
Ping of death
The ping of death is a form of denial-of-service (DoS)
attack that occurs when an attacker crashes, destabilizes, or
freezes computers or services by targeting them with
oversized data packets.
Create IP datagram larger than 65535 bytes
Some IP implementations would crash during reassembly
97
Ping of death
Fragment 1
Size: 65500 bytes
Offset: 0
Fragment 2
Size: 2048 bytes
Offset: 65500
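Working through the numbers above (a minimal sketch):

MAX_DATAGRAM = 65535                     # total length is a 16-bit field
frag2_offset, frag2_size = 65500, 2048
reassembled = frag2_offset + frag2_size  # 67548
print(reassembled > MAX_DATAGRAM)        # True: overflows a buffer sized for a legal datagram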
Lessons learned? Don’t just specify what the correct behavior is, but also
analyze abuse modes/attacks, and specify how to respond.
98
Securing the network layer: IPSec
ESP – Encapsulating Security Payload
Confidentiality
AH – Authentication Header
Authentication
Integrity
IPComp – IP Compression
Data compression before ESP
IKE – Internet Key Exchange
Secret key exchange
99
Securing the network layer: IPSec
Network layer security involves securing communications at the network
layer. At this layer, the concern is end-to-end communications between
hosts. Typically, communication at this layer will travel over multiple
networks, some of them public and some of them private.
The concerns at this layer are usually:
* Authentication – is the communicating party the correct communicating party?
* Confidentiality
* Integrity
Today, network layer security is typically implemented using IPSec, but
there are alternatives. IPSec is based on two different core protocols,
called ESP and AH. ESP provides confidentiality, while AH provides
integrity and authentication. The two can be combined. There are also
two optional protocols, IPComp and IKE. IPComp is used to compress
the IP payload prior to encryption. This is useful since encrypted data does not
compress well at lower layers (as in PPP). IKE is a protocol
for exchanging encryption keys between hosts.
In short, what you get with IPSec is the ability to set up a secure channel
between two hosts. The IP packets will be routed as usual, but the
payload can be protected.
100
ESP
101
ESP (cont…)
The ESP protocol provides encryption and
authentication. The sender may choose one or both
services, but at least one is required. The protocol
adds a header and a trailer to the data payload. The
header contains two fields: the SPI, which is used by
the destination to determine how to process the
packet and a sequence number, which can be used
for replay protection. The trailer consists of optional
padding (up to 255 bytes), which is used to align the
trailer and to expand the payload to a block size
suitable for encryption. Finally, authentication data
may be appended.
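A hedged sketch of the wire layout just described; the pad-length and next-header bytes in the trailer come from RFC 4303, and encryption and the optional ICV are omitted. The default values are illustrative.

import struct

def build_esp(spi, seq, payload, block_size=16, next_header=4):
    """Lay out an ESP packet before encryption: header (SPI, sequence number),
    payload, then trailer (padding, pad length, next header)."""
    header = struct.pack("!II", spi, seq)            # 32-bit SPI, 32-bit sequence number
    pad_len = (-(len(payload) + 2)) % block_size     # align payload + trailer to the cipher block size
    padding = bytes(range(1, pad_len + 1))           # default padding is 1, 2, 3, ...
    trailer = padding + struct.pack("!BB", pad_len, next_header)
    return header + payload + trailer                # authentication data (ICV) would follow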
102
AH
Authenticated
Version
Internet Header Length
Total Length
Identification
Protocol
Source/Destination Address
Not authenticated
Type of Service (TOS)
Flags
Fragment Offset
Time to Live (TTL)
Header Checksum
103
AH (cont…)
104
AH (cont…)
AH provides authentication of the payload and parts of the
IP header (specifically the non-mutable parts). AH adds a
header to the payload, consisting of the payload length, a
reserved field, the SPI, sequence number for replay
protection and variable-length authentication data. The
payload follows the AH header. Unlike ESP there is no
trailer.
AH is appropriate when authentication of the surrounding
IP datagram is required. In other cases, ESP is more
appropriate.
105
IPSec modes
IPSec can operate in two modes: transport mode and tunnel mode. Transport
mode is for communication between peers. In this mode, each peer applies AH
and ESP, and the data remains encrypted until it reaches the other peer.
106
IPSec modes (cont…)
In tunnel mode, existing IP datagrams are encapsulated in new IPv4 or IPv6
headers. Peers communicate using normal IP, but a gateway applies IPSec and
sends them to the peer gateway, where the IPSec information is stripped.
Tunnel mode is designed for implementing VPNs. The idea is that if two sites
need to communicate securely over a public network, they can set up an IPSec
tunnel between the sites and use tunnel mode. The hosts at each site will be
able to communicate, and any IP datagrams that traverse the public network
will be protected by the IPSec gateways.
107
IPSec modes (cont…)
It is worth noting that IPSec in transport mode gives
away more information to an attacker than tunnel
mode does. In tunnel mode, it is likely that a number
of hosts are behind each gateway, but an attacker
can’t distinguish what host is responsible for what
traffic. In contrast, transport mode exposes the source
and destination addresses, allowing the attacker to
perform rudimentary traffic analysis.
108
ESP/AH
109
ESP/AH (Cont…)
ESP provides encryption and integrity protection for a datagram.
In transport mode it provides protection for the payload only.
The IP headers are not protected at all. They can obviously not
be encrypted, but there is no integrity protection either. In tunnel
mode, ESP is applied to the entire original IP datagram, so the
whole thing is protected. The tunnel IP header is not.
AH provides integrity protection for the datagram and parts of the
IP header. In transport mode, it protects static fields in the IP
header as well as the payload. In tunnel mode, it protects parts
of the tunnel IP header as well as the entire original datagram.
Note that it is possible to apply both ESP and AH to the same
datagram.
110
Authentication and Encryption
111
Authentication and Encryption (Cont…)
There are two ways to get both authentication and confidentiality:
nested ESP in AH or authenticated ESP. In the former case, ESP
is first applied to the original datagram, causing the payload to
be encrypted. Then AH is applied to the ESP datagram, adding
authentication of the IP header, ESP header and encrypted
payload.
The other option is to use authenticated ESP. In this case, a trailer
is added to the datagram containing authentication information.
Since this is ESP, the IP header is not authenticated; only the
ESP header and payload are authenticated.
112
Security associations
SA
One-way relationship between IPSec hosts
Determines processing for sender and decoding for
destination
SA Database (SAD)
Parameters of each SA
Security Parameter Index (SPI)
Used by destination to select correct SA for pkt
SPI + destination address + protocol identifies SA
SA Bundle
SAs to apply together
e.g. ESP in AH
113
SA(cont…)
A security association is a relationship between two hosts that allows one to send IPSec-
protected packets to the other. Note that the security association is unidirectional –
for bidirectional communication over IPSec to take place, there must be two SAs, one
in each direction. The security association determines how packets are processed at the
sender and recipient. This includes what protocol to use, what mode, and what
parameters to use for the protocol (e.g. what encryption algorithm, what key and so
on). Security associations are stored in the Security Association Database (SAD), and
are uniquely identified by a number called the SPI (Security Parameter Index), the
destination address of packets to process and the protocol to use (ESP/AH).
Security associations can be bundled to form SA bundles, which are SAs to apply
together. For example, an SA bundle might specify application of IPComp first,
followed by ESP and then AH, for compression, encryption and authentication of the
same packet.
At the destination, processing for each packet is determined by finding the SA with the
same destination address as the packet, the same protocol (ESP or AH) as the packet,
and the same SPI as the one in the ESP or AH header.
At the sender, processing is determined by the Security Policy Database. The security
policy database is also used at the destination to ensure that traffic has been adequately
protected.
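As a rough sketch of the destination-side lookup described above, the SAD can be thought of as a table keyed on (SPI, destination address, protocol). The class and field names below are invented for illustration; they are not from any particular IPSec implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SAKey:
    spi: int       # Security Parameter Index from the ESP/AH header
    dst: str       # destination address of the protected packets
    proto: str     # "ESP" or "AH"

@dataclass
class SecurityAssociation:
    mode: str      # "transport" or "tunnel"
    algo: str      # e.g. "AES-CBC" (illustrative)
    key: bytes

sad: dict = {}     # the Security Association Database

def lookup_sa(spi: int, dst: str, proto: str) -> Optional[SecurityAssociation]:
    # The triple uniquely identifies the SA to use for an inbound packet.
    return sad.get(SAKey(spi, dst, proto))

# Example: register an inbound ESP SA and look it up.
sad[SAKey(0x1001, "192.0.2.10", "ESP")] = SecurityAssociation(
    "tunnel", "AES-CBC", b"\x00" * 16)
print(lookup_sa(0x1001, "192.0.2.10", "ESP"))
```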
114
Internet key exchange (IKE)
115
IKE (cont…)
IPSec depends on security associations and the security policy database. For static associations, such as permanent VPN
tunnels, this is enough, but if we want to use IPSec for dynamic associations, something more is required. For
example, a company might want on-demand VPN tunnels for laptops on the road, or we might want to run IPSec
on wireless clients, regardless of what WLAN we happen to be on. Part of the solution is a protocol for
negotiating security associations and security policies, and for IPSec, IKE fills that function. IKE isn’t a complete
solution – it doesn’t solve the problem of identifying peers, and it doesn’t solve the problem of discovering an
IPSec gateway. There are other approaches available, of which OE, Opportunistic Encryption, is perhaps one of
the simplest. It uses secure DNS to distribute keys. The advantage of this is that DNS is widely deployed. The
disadvantage is that the security of an SA depends on the security of DNS.
Today, IKE version 1 is deployed. IKEv1 is a flexible but complex protocol, and it requires extensive exchanges of
messages to set up security associations. As a result, IKEv2 was developed, and it is considerably less complex, and
a lot faster. We'll discuss IKE from a high level, so a lot is applicable to both IKEv1 and IKEv2. When we get into
details, we’ll focus on IKEv2 since it is simpler and probably will replace IKEv1 eventually. Both IKEv1 and IKEv2
follow the same overall structure. The first step is a clear text exchange, including Diffie-Hellman key exchange,
where the peers set up a SA for IKE itself. All subsequent messages are encrypted. The second set of messages are
for mutual authentication. IKE supports a number of authentication methods; in IKEv1, it was possible to use pre-
shared keys or certificates; IKEv2 supports these methods as well as the Extensible Authentication Protocol (EAP),
which is used in 802.1x authentication and in many other situations. Once the IKE SA is set up and the peers
authenticated, IKE can set up an SA for IPSec. In IKEv2, several child SAs can be created, speeding up the process
of creating multiple SAs between the same two peers.
IKE is typically implemented as a user-space process that can update the SPD and SAD based on the IKE exchange.
116
IPSec Summary
117
Transport layer protocols
TCP three-way handshake
SYN Flooding
Attacker floods target with TCP connection attempts
Effect
Target allocates resources for each connection
Resources are allocated until attempt times out
118
TCP handshaking (cont…)
The SYN flood has become a real classic, and it's a problem that's easy to design into a
protocol. The principle is simple. When a connection attempt using TCP is made (the
target receives a SYN segment), resources are allocated for that connection. Those
resources remain allocated until the connection terminates or times out. Although the
amount of resources is typically small, it isn’t negligible. A SYN flood is where an
attacker initiates as many connection attempts as possible to the target, ideally with the
source addresses set to some innocent address that doesn’t respond at all, or simply to
random addresses on the internet. For each connection, the target allocates a small
amount of memory, but because there are so many attempts, and connections time out
slowly, the aggregate slowly approaches all available memory (other resources may also
be consumed). At that point, the target will no longer accept any connections, legitimate
or not. A target that is under a SYN flood attack becomes unresponsive, and may even
crash.
The reason SYN floods work is that TCP requires the recipient of a connection attempt to
allocate resources to keep track of the connection. Had that not been necessary, SYN
floods wouldn’t work very well. Interestingly enough, although SYN floods do generate
a lot of traffic, it is usually not enough to consume all available bandwidth.
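A toy model of the state that makes SYN floods possible: a finite backlog of half-open connections that only expire after a timeout. The limit and timeout values below are arbitrary, chosen just for illustration.

```python
import time
from collections import deque

BACKLOG = 128          # illustrative limit on half-open connections
SYN_TIMEOUT = 60.0     # seconds before an unanswered SYN entry is dropped

half_open = deque()    # (client_address, arrival_time) per pending SYN

def on_syn(client_address: str) -> bool:
    """Return True if the SYN is accepted, False if the backlog is full."""
    now = time.time()
    # Expire stale half-open entries first.
    while half_open and now - half_open[0][1] > SYN_TIMEOUT:
        half_open.popleft()
    if len(half_open) >= BACKLOG:
        return False   # under a flood, legitimate clients are refused too
    half_open.append((client_address, now))
    return True
```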
119
Preventing SYN floods
120
Preventing SYN floods (cont…)
To really understand the problem and the solution it’s necessary to understand how TCP, and most other protocols,
establish connections. Connections are established using three or four way handshakes. First, the client sends a
connection request to the server. The server, if willing to accept the connection, replies with an acknowledgment.
The client then acknowledges that, and the connection is established. A three-way handshake is the minimum, since the
first and second packets may contain parameters for the connection that the other party must acknowledge. In a
four way handshake, there is one additional acknowledgment. Typically, data can also be included in the third (and
fourth) packets. The problem is that the server, upon receiving the second packet from the client, must verify that
the acknowledgment belongs to a connection the server has accepted, and isn’t a stray packet or something more
sinister. To do this, TCP implementations maintain a connection queue, where all accepted connections are parked
until the second client packet is processed. This is the queue that consumes resources during a SYN flood attack.
To prevent this type of problem, the queue needs to be eliminated, and this implies that the server must verify the
connection some other way. The solution is to introduce a cookie into the process. A cookie is a number computed
from parameters of the first packet and information only the server has. This is sent to the client in the first
response, and the client returns it, unchanged in its second packet. The server, upon receiving the second packet
can check that the cookie matches the parameters of the connection. If it does, the client must have received it
from the server as a result of an accepted connection attempt, because only the server knows the secret to
computing the cookie.
Unfortunately, TCP isn’t prepared for cookies. Furthermore, TCP requires the server to keep track not only of
addresses and ports, but also the MSS, maximum segment size, a parameter that limits the size of TCP packets. To
introduce cookies into TCP, serious hacking had to take place. TCP exchanges numbers called initial sequence
numbers, and the cookie can be encoded into the difference between the client's ISN and the server's (the client
ISN can be recovered from either of the first two packets sent to the server and the server ISN can be recovered
from the server’s first and the client’s second packets). In my opinion this is a monstrously ugly hack, while also
being perversely elegant. It gets the job done, sort of.
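A much simplified sketch of the idea, assuming a SHA-256 based cookie; real SYN cookies (e.g. in Linux) also encode a coarse timestamp and an MSS index into specific bits of the ISN, which this sketch glosses over.

```python
import hashlib
import os
import time

SECRET = os.urandom(16)      # known only to the server, never sent

def syn_cookie(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               client_isn: int) -> int:
    """Derive the server ISN from connection parameters plus a secret,
    so no per-connection state is needed until the final ACK arrives."""
    t = int(time.time()) >> 6    # coarse timestamp limits cookie lifetime
    msg = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{client_isn}|{t}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    offset = int.from_bytes(digest[:4], "big")
    return (client_isn + offset) & 0xFFFFFFFF   # cookie hidden in the ISN delta

def cookie_valid(server_isn: int, src_ip: str, dst_ip: str,
                 src_port: int, dst_port: int, client_isn: int) -> bool:
    # Recompute from the values echoed back in the client's ACK; a real
    # implementation would also try the previous timestamp window.
    return server_isn == syn_cookie(src_ip, dst_ip, src_port, dst_port,
                                    client_isn)
```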
121
Preventing SYN floods (cont…)
TCP SYN cookies are probably appropriate when a TCP server
detects a SYN flood, but they are decidedly inappropriate
during normal operations. SYN cookies prevent the use of
important TCP options since the options aren’t encoded in
the cookie or remembered at the server. Certain options,
such as large windows, are very important for performance.
However, if the choices are between succumbing to a SYN
flood or degrading performance for some users, then
degraded performance is probably the best choice. Other
protocols include cookies from the start. For example, SCTP,
a fairly new transport protocol, uses a cookie-based four-way
handshake to establish connections. Since cookies are part of
the original design, they don’t cause problems the way they
do with TCP.
122
Blind connection spoofing
123
Blind Connection Spoofing (Cont…)
Blind attacks in general are where an attacker can’t directly see the results of the attack. In the case of connection spoofing,
the attacker wants to connect to a system, impersonating a trusted source. The problem is that the target will direct
return traffic to the trusted source, not to the attacker, so the attacker can’t see important parts of the communication,
such as the initial sequence number the target specifies. And without the initial sequence number, the attacker can’t
complete the connection.
An interesting point here is that the use of cookie-based connection establishment can prevent this type of attack
completely. The whole idea behind cookies is that they are unpredictable, and if they are, an attacker can’t complete the
connection blind. The problem, of course, is that “unpredictable” in theory isn't always so unpredictable in practice.
The procedure of the attack is as follows. In preparation for the attack, the attacker disables communication between the
target and the victim. A simple DoS attack usually does the job nicely. The attacker first probes the target a few times
to gain enough information to predict the pattern of initial sequence numbers. This typically requires sending about 80
packets. The attacker then sends a spoofed SYN segment (connection attempt) to the target, impersonating the victim.
The target responds with a SYN/ACK segment to the victim. The SYN/ACK segment includes the initial sequence
number chosen by the target, and the attacker needs this information to get any data accepted by the target. In this
case, the attacker can’t see the information, and must rely on guessing. The attacker next spoofs an ACK frame to the
target, acknowledging the guessed sequence number. If the guess is correct, the connection will be established. Of
course the attacker won’t know, since the attacker never sees any replies. The attacker continues on, sending data to the
target, possibly never acknowledging any responses.
So, how to guess ISNs? They’re 32 bit random numbers, right? No. In fact, in many implementations they’re not
particularly random. It may be difficult to predict the precise ISN at any given time, but it may be possible to predict a
set of numbers that has a high probability of containing the correct ISN. One method of doing this is based on phase-
space analysis.
124
Blind connection spoofing Prevention
Unpredictable ISN
Protocol Changes
Prevention of connection spoofing is actually pretty simple. Since it all hinges on the attacker being able
to predict the sequence number, making the sequence number unpredictable will prevent the
attack. The basic problem is that sequence numbers are used for several things at once. The primary
use is to ensure that data within a connection is all received and in the correct order. Sequence
numbers are the basis of reliable data transfer. But they are also expected to be chosen to prevent
segments from old connections from disturbing new ones, and to prevent segments that have been
modified (accidentally) in transit from disturbing the new connection. They are also used to validate
that an arriving segment is “acceptable”. The latter two functions (robustness and security) can be
combined, because they amount to the same thing, but there is no reason to combine them with the
former.
There are two ways to prevent ISN guessing, and both can be applied together.
The first is to ensure that ISNs are random. Knowing a series of past ISNs should not help the attacker
predict future ISNs. The second is to make the ISN dependent on the connection itself, as well as
random data. Every TCP connection is defined by the source address, destination address, source
port and destination port. By using these four together with a random number to create the ISN,
every unique connection will get its own series of ISNs. Thus, the information an attacker probes in the first step is less useful in predicting the ISN used for the spoofed connection.
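The standard remedy (in the spirit of RFC 6528) is to offset a clock-driven sequence by a keyed hash of the connection four-tuple. A minimal sketch, with SHA-256 standing in for whatever keyed function an implementation actually uses:

```python
import hashlib
import os
import time

ISN_SECRET = os.urandom(16)   # regenerated at boot, never disclosed

def initial_sequence_number(src_ip: str, src_port: int,
                            dst_ip: str, dst_port: int) -> int:
    # The clock component keeps ISNs advancing over time...
    clock = int(time.monotonic() * 250_000) & 0xFFFFFFFF
    # ...while the keyed hash gives every four-tuple its own unpredictable
    # offset, so probing one connection says little about another.
    msg = f"{src_ip},{src_port},{dst_ip},{dst_port}".encode()
    offset = int.from_bytes(
        hashlib.sha256(ISN_SECRET + msg).digest()[:4], "big")
    return (clock + offset) & 0xFFFFFFFF
```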
125
Blind connection spoofing Prevention
(Cont…)
Finally, new protocols should be designed with this issue in mind. Information
used for security shouldn’t be used for other things as well. The risk of creating
dependencies that weaken the protocol is simply too large. Of course, even
recognizing the security issues can be hard. For example, SCTP includes
sequence numbers, but doesn’t use them for security. Instead, SCTP exchanges
a verification tag at the start of a connection, which is included in all SCTP
packets. The verification tag can be completely random (unlike TCP ISNs), and
cannot be predicted by an attacker (at least in theory). SCTP packets are only
accepted for processing if the verification tag is correct. SCTP sequence
numbers are only used to implement reliable data transfer.
Other protocols have similar problems. For example, DNS uses a random number
to identify queries (responses are matched to queries using this number since
DNS doesn’t use a connection-oriented transport). It has been recognized for
some time that these random numbers need to be cryptographically secure
since an attacker who can blindly spoof DNS query responses (i.e. without
seeing the queries), can misdirect network traffic in interesting ways. We’ll talk
about DNS in a little bit.
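For example, generating query IDs from a cryptographically secure source is a one-line fix in most languages; the sketch below contrasts it with the kind of predictable counter that made older resolvers spoofable.

```python
import secrets

# Predictable: an attacker who sees (or guesses) one ID knows the next ones.
class WeakIds:
    def __init__(self) -> None:
        self.counter = 0

    def next_id(self) -> int:
        self.counter = (self.counter + 1) & 0xFFFF
        return self.counter

# Unpredictable: each 16-bit query ID is drawn from a CSPRNG.
def secure_query_id() -> int:
    return secrets.randbelow(1 << 16)
```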
126
Stream Control Transmission Protocol
(SCTP)
127
SCTP (Cont…)
SCTP is a transport protocol designed for PSTN (Public Switched
Telephone Network) signaling. It was originally designed to carry SS7
signals over IP networks. SCTP, however, is a general transport protocol,
and is in no way tied to PSTNs. It is designed to coexist with TCP, and
uses similar mechanisms for flow and congestion control. SCTP has a
number of transport-layer features that are beyond the scope of this
course, but interesting from a protocol design or application design
point of view.
SCTP uses a four-way handshake with cookies to initiate connections. This
ensures that SCTP does not have to reserve resources for connections
until they have been established. The fact that the connection is four-way
instead of three-way has nothing to do with security. Every SCTP packet
contains a verification tag, which is copied from the initialization tag
exchanged at connection setup. This prevents spoofing attacks. Sequence
numbers are also used, but only to provide reliable data transport.
In many ways, the design of SCTP reflects lessons learned with TCP.
128
Higher-level protocols
DNS Security
Securing the presentation layer: TLS/SSL
129
DNS Security
130
DNS Security (cont…)
The domain name system is a critical infrastructure application on
the Internet. Its primary purpose is to map easy-to-remember
names to addresses, and to map addresses to names. Every time
you access a remote system using its name, DNS is involved in
some way. DNS can be characterized as a distributed client-server
database application. The full database is distributed over a huge
number of hosts, organized in a hierarchical way to facilitate
finding information rapidly. Caching is used extensively for
efficiency. Data retrieved from a remote system is stored locally
for a period of time, so future queries for the same information
can be fulfilled locally, thereby not incurring the performance
penalty of a global search (which can be quite severe). Caching is
also the key to many security issues in DNS.
131
DNS Security (cont…)
A typical DNS query works as follows. A client requests the address for a name, say
www.google.com. This query is sent to a resolver or recursive name server. The resolver’s job is
to give a final answer to a query. The resolver checks the cache to see if the query can be
answered from the cache. If not, the query is forwarded to a root name server. The root name
server is not a recursive name server, so it will reply with a referral to a name server closer to
thetarget. In this case, the root name server will respond with a referral to the name server that
handles the “com” domain. The resolver caches the information so that the next time it wants an
address in the com domain it won’t have to query the root name server at all. It then sends the
www.google.com query to the “com” nameserver. The “com” nameserver is not recursive and
doesn’t have the answer, so it responds with a referral to the nameserver for “google.com”. The
resolver caches the reply so that future queries about google.com can be sent directly to the
“google.com” nameserver. It then forwards the www.google.com query to the “google.com”
nameserver. The “google.com” nameserver isn’t recursive, but it is authoritative for the
google.com domain, so it has the answer, which it sends to the resolver. The resolver caches the
answer, and forwards the reply to the client.
The next time a client requests information from google.com, the query will be sent directly to the
“google.com” nameserver. If there is a request for information in some other “com” domain,
such as amazon.com, the resolver will forward it to the "com" nameserver. All other queries will
be forwarded to the root name server.
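As a quick illustration (assuming the third-party dnspython package is installed), a client-side query simply asks its configured recursive resolver, which performs the iterative walk described above on the client's behalf:

```python
# A minimal sketch using dnspython (pip install dnspython); the resolver
# contacted here does the root -> com -> google.com walk and caching for us.
import dns.resolver

answer = dns.resolver.resolve("www.google.com", "A")
for record in answer:
    print(record.address)
```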
132
DNS Security (cont…)
DNS is a critical application from a security point of view since an attacker who is able to
substitute false information in a DNS response could redirect traffic from the querying
system. For example, consider what would happen if an attacker were able to modify
the referral sent from the “com” nameserver to point to his (or her) nameserver. The
final query would then be sent to the attacker, who could easily reply with false
information, thereby causing traffic to be misdirected. This type of attack is called
“cache poisoning”, since it inserts false information into the cache. There have been
many variants on cache poisoning, but only one is still effective; the remaining attacks
are prevented by most implementations, at the cost of efficiency.
133
Countermeasures and consequences
Consequences
Attacker can redirect traffic that uses DNS names
Countermeasures
Keep DNS services updated
Separate resolver from nameserver
No access to resolver for outsiders
134
Securing the presentation layer: TLS
SSL/TLS
Secure Sockets Layer
Transport Layer Security
TLS = SSL v3 + updates
Strong encryption
Integrity protection
Mutual authentication
135
TLS/SSL (cont…)
In the OSI network model, the presentation layer includes features that transform data from
one form to another. As a result, SSL (Secure Sockets Layer), later standardized as TLS (Transport Layer Security), is most properly placed at the presentation layer. It is above
the transport layer, since it is largely transport independent (it does require certain
properties from the transport, but any transport layer that provides them could be used
for SSL), and it is below the application layer since it is completely application
independent (as demonstrated by the large number of applications that use it).
TLS has become extremely popular, and is more or less the standard solution for
applications that want secure communications. Since it deals with process to process
communication, it fills a different niche than IPSec. The reason TLS is so popular is
probably that there are several high quality free implementations of the protocol, and
there have been for some time. This in turn is probably thanks to the fact that Netscape
designed SSL for secure web browsing at a time when good solutions were few and far between and the
need was beginning to become evident.
SSL provides strong encryption, integrity protection and mutual authentication. The SSL
protocol is quite flexible and complex, and we won’t be going over many details here.
There are some additional details in the assigned reading material (e.g. Bishop).
136
TLS/SSL (cont…)
137
TLS/SSL (cont…)
Two key SSL concepts are sessions and connections. An SSL session is a
long-lived association between two communicating hosts, and a session may
span many connections. By re-using an old session, a new connection does not
have to go through the long and expensive SSL handshake process.
The SSL handshake process is used to set up sessions (and in an abbreviated
version, connections). The handshake process can take on a few different forms
depending on the features that are actually used. The most common case is a
client connecting to a server that has a certificate containing enough
information to perform key exchange (i.e. a public encryption key). The client
is not authenticated. If the server doesn’t have a public key, or the client needs
to be authenticated, there are additional messages in the protocol.
The handshake starts with two hello messages, where the client and server
negotiate the SSL version to use, as well as what encryption and integrity
algorithms to use. The hello messages also include nonces that will be used in
key generation.
138
TLS/SSL (Cont…)
The next phase is server authentication. Here the server sends its certificate to the
client. The certificate contains information about the server, such as its name, who
operates it and what services the certificate is valid for. The certificate is signed by a
certificate authority that must be known to the client (this isn’t the whole truth – the
client must be able to build a certificate chain that ends in a known certificate authority;
all other certificates in the chain are trusted by association). Finally the server sends a
message indicating that this phase is complete.
The third phase is client authentication. In most cases, the client does not
authenticate to the server at all, but if the client does, this would be where the client
certificate is presented to the server. The client also sends a key exchange message to the server, followed by a change_cipher_spec message to activate encryption, and concludes with a finished message. The server likewise sends a change_cipher_spec message and concludes with a finished message. At this point the handshake is complete and the
client and server can begin exchanging data.
The protocol has been designed to be resilient to a number of different attack scenarios, and
is considered very secure, provided the encryption methods negotiated are strong
enough. SSL is not vulnerable to encryption method downgrading, or to MITM attacks
in general.
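In practice, applications rarely implement the handshake themselves; they hand a TCP socket to a TLS library. A minimal sketch using Python's standard-library ssl module (the host name is just an example):

```python
import socket
import ssl

# The default context verifies the server certificate chain and host name,
# which is what makes the handshake resistant to MITM attacks.
context = ssl.create_default_context()

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        print(tls.version())                    # e.g. "TLSv1.3"
        print(tls.getpeercert()["subject"])     # who the server claims to be
```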
139
SSL/TLS Certificate Problems
From a technical point of view SSL and TLS seem to be well-designed and secure. There have been some instances
where implementation errors have led to vulnerabilities in applications (serious vulnerabilities), and in the past,
SSL implementations from the USA were limited to 40 bits of secret key material, which was pathetically insecure
(it could be brute forced even in those days, and computers haven’t gotten any slower since then).
The most serious problem with SSL has to do with certificates and how people treat them. For SSL to be resilient to
MITM attacks, mutual authentication must be performed (server authentication is enough for most practical
purposes). In practice server authentication is done through the use of certificates. If a server can’t present a valid
certificate, then it shouldn’t be trusted. The problem is what applications do when they are presented with a
problematic certificate. The certificate is likely to have valid signatures – accepting certificates with invalid
signatures would be monumentally stupid. But the certificate authority might be unknown, or the server name
incorrect, or the usage invalid (e.g. web certificate used for e-mail) or the certificate might be expired, or it might
be something else. Situations like these are common – cheap web hosting companies might use the same
certificate on all virtual web servers, which causes problems, or a certificate might have expired by mistake, or a
web server owner is too cheap to get a certificate signed by a known authority, or a certificate that was supposed
to be usable for anything wasn’t created correctly. In all these instances, most common applications will present
the user with a warning stating that “there was a problem with the certificate” and offering to let the user examine
it more closely.
The question is, how many users are actually qualified to understand the warning, let alone the details of an X.509
certificate? I have a hard time with them, and I’m supposed to know this stuff. So most users will simply ask the
application to accept the certificate anyway, perhaps permanently.
And that opens the path to a MITM attack. There are tools today that will do SSL MITM attacks. All you need is a
computer and a suitable certificate, and you’re done. This problem isn’t unique to SSL. Other certificate-based
protocols have the same problem. For example, if you use SSH, how often have you checked the key fingerprint
prior to connecting to a server for the first time? Have you ever?
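The safer pattern is for the application to fail closed instead of asking the user. With Python's ssl module, a verification failure surfaces as an exception the application can simply refuse to bypass (expired.badssl.com is a public test host commonly used for this purpose; substitute any host with a broken certificate):

```python
import socket
import ssl

context = ssl.create_default_context()
host = "expired.badssl.com"   # test host with a deliberately expired cert

try:
    with socket.create_connection((host, 443), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host):
            print("certificate accepted")
except ssl.SSLCertVerificationError as err:
    # Fail closed: report the problem instead of offering a click-through.
    print("refusing to connect:", err.verify_message)
```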
140
Intrusion detection / Intrusion prevention
141
Intrusion Detection
Anomaly detection
Types
Network (NIDS)
Detect suspicious network activity
Captures what goes on when, for example, a network is scanned
Can detect attacks before they hit a host
Cannot do much about encrypted traffic
Host-based (HIDS)
Detect suspicious activity on a single host
Encryption is no problem
Does not require resources to keep up with the data rate of a whole network
Cannot draw conclusions from probing of several targets
If prevention is used, it may be too late at the host
142
Intrusion Detection (cont…)
Intrusion detection is something of a hot topic where different people take different, sometimes quite
dogmatic points of view. The goal of intrusion detection is simply to detect intrusions that have
occurred or that are in the process of occurring. In other words, intrusion detection will do
nothing to prevent intrusions, but might be helpful in attempting to understand them or mitigate
their effects.
Intrusion detection can be roughly divided into network intrusion detection systems (NIDS) and
host-based intrusion detection systems (HIDS). NIDS will detect suspicious network activity,
while HIDS will detect suspicious activity on a single host. All types of intrusion detection systems
are really special cases of the more general concept of anomaly detection, which strives to detect
anomalous behavior in a system (which could be symptoms of an intrusion, misconfiguration,
imminent equipment failure or any of a number of things). Naturally, the classification into NIDS
and HIDS is a very simplified one, used in practice, but not really in research.
In general, an IDS is a device that collects information or audits from any number of sources, analyzes
the information and determines whether there is a problem or not. The main advantage of an IDS
over manual audits is that an IDS can handle far more information than a human can. The
disadvantage is that an IDS is n’t intelligent and can have a hard time recognizing subtle patterns in
the data.
However, the sheer volume of data available in a large system is such that an automated approach is
necessary. Manual audits would be too expensive.
143
IDS classification
144
IDS classification
This is a more detailed classification of intrusion detection systems. Without looking at all
the details, we’ll look at some key points. The detection method is important. Behavior-
based systems use information about the normal behavior of a system in order to detect
intrusions (which hopefully aren’t normal), while knowledge-based systems use
knowledge about intrusions.
Behavior on detection separates systems that just raise an alert (passive) from systems that
attempt to prevent the intrusion (active) or take other countermeasures. The audit
source location is where the network-host separation comes in. Advanced systems may
be able to synthesize information from many sources. The detection paradigm separates
systems that can detect when a system goes from secure to insecure (transition-based)
from systems that merely evaluate whether a system is secure or not (state-based).
Finally, usage frequency separates systems based on how often they are run.
For example, Snort, an open source NIDS, can be classified as a knowledge-based, passive
alerting, network-packet-using, transition-based, continuous monitoring system. It uses
signatures of known attacks (knowledge-based) matched against network packets
(network packet source) to detect attacks in real-time (continuous, transition-based)
and by default only alerts the operator (passive alerting). Snort does include elements of
a behavior-based IDS (it can learn typical traffic patterns and detect anomalies) and
active-response (it can be configured to take arbitrary actions on detection).
145
NIDS Challenges
146
NIDS (cont…)
NIDSs aren’t foolproof. There are significant problems to overcome in implementation. One of the most
obvious problems is that of false positives. A NIDS facing heavy traffic may raise alarms for legitimate
traffic. For example, a NIDS looking for specific strings on a specific TCP port (typically to detect a
backdoor) may very well detect those same strings in legitimate traffic. Similarly, a NIDS may detect a
SYN flood where the reality is a sudden surge in popularity of a web site. In order to tackle this
problem, a NIDS needs to be carefully tuned, because introducing false negatives when attempting to
reduce false positives is also undesirable.
The next challenge comes from performance. A NIDS that scans high traffic volumes, such as those
entering or leaving a large organization, requires high performance hardware, networking and software
to work. Otherwise it will miss some traffic. And a NIDS that misses traffic regularly may be unable to
detect anything. It is certainly very easy to evade. Two major performance problems come from the need to
defragment traffic and from the need to track TCP connections. These features are necessary but can
consume significant resources on a NIDS. Essentially, a NIDS needs to defragment all datagrams it sees,
and it needs to track all TCP connections to all hosts in the network it monitors, at least as long as the
connections cannot be dismissed as safe.
Another challenge comes from security features. IPSec, VPN tunnels, SSL and application-level encryption
are all designed to protect sensitive information from prying eyes, but they also prevent a NIDS from
doing its job. It is difficult to circumvent this problem; with IPSec the solution is to run IPSec in tunnel
mode, and place the NIDS on the unencrypted side of the gateway, but in general, encryption is a
serious problem.
Finally, network protocols, topology and implementation are also significant challenges. A NIDS needs to
process network traffic exactly as target nodes do, or inconsistencies arise that allow an attacker to
create traffic that is discarded by the NIDS but accepted by the target or vice versa.
147
NIDS Evasion
Evasion
Getting the NIDS to ignore data the target processes
Disable the NIDS entirely
Getting the NIDS to process data the target ignores
Examples
TTL-based attacks
Fragmentation attacks
Resource exhaustion
Implementation ambiguities
Timing-based attacks
148
NIDS Evasion (cont…)
The tricks used to evade network intrusion detection systems can teach us about designing
NIDS and about designing network protocols. Some of the problems are inherent in the
protocols, and could have been avoided. We’ll talk about a few different evasion
methods. Rest assured that there are several more where these came from.
There are three basic forms of attack on a NIDS. One is to get the NIDS to process packets
that the target ignores. This allows an attacker to make the NIDS see things that aren’t
really there. Another is to get the NIDS to ignore packets that the target processes. This
allows an attacker to hide traffic from the NIDS. The third is to disable the NIDS entirely.
Even getting the NIDS to incorrectly ignore or process part of a packet can be enough.
Attacks typically take place at the network (IP) or transport (TCP) layers. Attacks on the
link layer require access to the data link the NIDS is connected to, and with such access,
tricking a NIDS is fairly easy. At the IP layer, attacks can exploit network topology using
the TTL field or implementation anomalies and ambiguities, particularly with regard to
fragmentation and reassembly. At the TCP level, the attacker can attempt to exhaust the
resources of the NIDS, or desynchronize the NIDS from the real data stream (getting the
NIDS to think the correct TCP sequence numbers are different from what they really
are).
Again, these are only a few examples of what can be done.
149
NIDS Evasion (cont…)
150
NIDS Evasion (cont…)
One of the classic NIDS evasion techniques is to send fragmented datagrams. Early NIDS didn’t reassemble datagrams, so
by fragmenting them, they would pass the NIDS undetected. Other implementations didn’t (and still don’t) handle
pathological cases well, such as when faced with a large datagram fragmented into the smallest fragments possible,
transmitted in reverse order.
Today, NIDS manufacturers have mostly learned the lesson, and do perform proper reassembly. Even with proper
reassembly, fragmentation can be a serious problem. Different operating systems deal with overlapping fragments
differently. Some prefer information in the latest fragment; some prefer information in the oldest. Some depend on
whether the overlap is forward (towards the end of the fragment) or backward (towards the beginning of the
fragment).
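A toy reassembly function makes the ambiguity visible: the same set of overlapping fragments yields different data depending on whether the stack prefers older or newer bytes, so a NIDS guessing the wrong policy sees a different request than the target does.

```python
def reassemble(fragments, prefer="new"):
    """fragments: list of (offset, data) pairs; prefer: 'new' keeps bytes
    from later fragments on overlap, 'old' keeps the earlier bytes."""
    buf = {}
    for offset, data in fragments:
        for i, byte in enumerate(data):
            pos = offset + i
            if prefer == "new" or pos not in buf:
                buf[pos] = byte
    return bytes(buf[i] for i in range(len(buf)))

frags = [(0, b"GET /index"), (4, b"/evil.cgi ")]
print(reassemble(frags, prefer="old"))   # b'GET /indexcgi '
print(reassemble(frags, prefer="new"))   # b'GET /evil.cgi '
```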
For a NIDS to operate properly it must process packets the same way as the end system, but with different end systems
behaving differently, how should the NIDS behave? Finally, there is a timing issue. IP fragment reassembly is never
permitted to take an infinite amount of time, but there is a problem if a NIDS discards an unfinished datagram before
or after the end system does. If the NIDS discards datagrams too quickly, there is potential for evasion. If a NIDS
discards an unfinished datagram too slowly, there is potential for insertion (i.e. the attacker can finish the datagram
after it was discarded by the end system, but before the NIDS discards it).
More difficult to deal with is the issue of network topology. How can a NIDS know that a packet it sees will actually reach
the destination? The most obvious attack of this type is to craft packets that have a TTL large enough to reach the
NIDS, but small enough to be discarded before reaching the destination. If a NIDS is deployed on a different network
segment than the target, there is a potential for this type of attack. More subtle, and less likely to succeed, would be
to exploit the network MTU. If the MTU (maximum transmission unit) to the end system is different from the MTU to
the NIDS, there is potential for evasion or insertion. Fortunately, this situation is rare in practice.
One of the most powerful NIDS evasion techniques is TCP desynchronization. A NIDS that examines TCP traffic must
mimic the TCP processing of the end system. If the NIDS and the end system have different views of the state of the
connection, they are desynchronized, and the NIDS will no longer be able to accurately track the connection. Once a
connection is desynchronized, an attacker can transmit anything without much fear of detection.
151
NIDS Lessons
Simple protocols are safer than complex ones
Synchronization issues are hard to solve
Implementation doesn’t always match design
False positives or false negatives
152
NIDS Lessons (Cont…)
So, what lessons can we learn from studying NIDS and how to evade them? From a practical point of view, the
lesson is that a network intrusion detection system will never be 100% reliable, so a NIDS should never be
the only solution for detecting problems. Host-based IDS and other forms of monitoring should also be
deployed. For protocol designers, we see that simple protocols are easier to manage than complex ones. We
haven’t talked about evading NIDS with UDP for the simple reason that UDP is a protocol that doesn’t give an
attacker as many opportunities for mischief as TCP does. When studying NIDS evasion we see that even IP,
which should be a very simple protocol, has plenty of subtleties that can be exploited by an attacker. Any
protocol is likely to be the same, and it is important that the protocol designer makes an effort to really
understand them.
Regarding the typical NIDS design, there are evidently serious problems in keeping the NIDS synchronized to end
systems. The flaws are so serious that they must be considered fundamental design flaws. Interestingly enough,
the synchronization issue seems to crop up in security quite often – when security is dependent on two
systems being synchronized, and there are no reliable synchronization methods, there are probably security
exposures. We also see a typical issue of tradeoffs between accuracy on one side and reliability on the other.
For example, the more diligent a NIDS is about tracking TCP connections, the more likely it is that it can be
attacked using resource exhaustion methods. The same thing goes for fragmentation. Similar tradeoffs often
exist in security – to get more secure in one area, you are forced to sacrifice in some other. We also see
tradeoffs between security and security. If you add encryption, you sacrifice monitoring abilities.
NIDS also teach us the importance of not only understanding how protocols (and other systems) are supposed to work (i.e. the specifications), but also of verifying how they actually work. For example, although the IP
specification does indicate how to handle overlapping fragments, not all implementations work the same way.
In order to achieve synchronization, a NIDS (or other system) needs to take actual implementation, not just
design, into account. Intrusion detection is an active area of research, although it has been somewhat tarnished
by flawed offerings. There are still any number of open research questions in the area.
153
Intrusion Prevention System (IPS)
Just another name for IDS + automatic attempts to counter a suspected attack
Can cooperate with firewalls to block ports etc.
Examples:
intrusion prevention system with global correlation.
IBM Security Network Intrusion Prevention System
Literature:
http://www.cisco.com/en/US/products/sw/secursw/ps211
3/index.html
http://www.youtube.com/watch?v=hKkTBf7pgJc
154
Honey Pots
False “targets” set up to lure attackers away from
real target
Can keep the attacker busy, so that tracing
information can be collected
Can keep the attacker occupied, while stronger
defenses are deployed
Can avoid attacks at real target
155
Am I DoS or not?
156
Am I DoS or not? (Cont…)
Here’s an illustration of the challenge facing a NIDS. These are
traffic graphs for two web servers. The first web server’s link
was limited to 40Mbit/s. The second was limited to
100Mbit/s. Both were hit with lots of traffic over a short
period of time. If the latter web server's link had been limited to 40 Mbit/s, the traffic would have completely saturated the link, as in the first graph. The question is,
which one is a DoS attack, and which is all legitimate traffic?
How can a NIDS determine which is which?
157
Conclusions
Network security based on risk and security awareness
Understand the threats, attackers, consequences, etc
Protect against the right threats to the right degree
Use the appropriate (cost-effective) mechanisms in the right
place
Theory and practice rarely match completely
Need to consider implementation, deployment issues
Technical security is not the end-all
It all ultimately comes down to the people involved
158
Reference Links
https://www.youtube.com/watch?v=mw9fN9mlUS4
https://www.youtube.com/watch?v=0i1CN8b1ZKQ&t=2
48s
http://www.youtube.com/watch?v=hKkTBf7pgJc
https://www.youtube.com/watch?v=XEqnE_sDzSk
https://www.youtube.com/watch?v=AvxTFGYh1BY
https://www.youtube.com/watch?v=AwIoQNw0lXs
https://www.youtube.com/watch?v=AlE5X1NlHgg
https://www.youtube.com/watch?v=fdzuj8FzxLg
https://www.youtube.com/watch?v=VBUxA_95KoY
https://www.youtube.com/watch?v=zopRwR0yhlg
159
Questions?
160
Thank You
161