Module-2 SVT CSD4001
Vulnerabilities
• In the context of software security, vulnerabilities are specific flaws or oversights in a piece of software that allow attackers to do something malicious: expose or alter sensitive information, disrupt or destroy a system, or take control of a computer system or program.
• You are probably familiar with software bugs. Security vulnerabilities are bugs that pack an extra hidden surprise: a malicious user can leverage them to launch attacks against the software and supporting systems.
• A bug must have some security-relevant impact or properties to be considered a security issue; in other words, it has to allow attackers to do something they normally wouldn't be able to do.
• For example, a program that allows you to edit a critical system file you shouldn't have
access to might be operating completely correctly according to its specifications and design.
So it probably wouldn't fall under most people's definition of a software bug, but it's
definitely a security vulnerability.
• When attackers use an external program or script to perform an attack, this attacking
program is often called an exploit or exploit script.
• Security Policies
As mentioned, attackers can exploit a vulnerability to violate the security of a
system. One useful way to conceptualize the "security of a system" is to think of a
system’s security as being defined by a security policy. From this perspective, a
violation of a software system's security occurs when the system's security policy is
violated.
For a system composed of software, users, and resources, you have a security
policy, which is simply a list of what's allowed and what's forbidden. This policy
might state, for example, "Unauthenticated users are forbidden from using the
calendar service on the staging machine." A problem that allows unauthenticated
users to access the staging machine's calendar service would clearly violate the
security policy.
This policy could take a few different forms, as described in the following list:
• For a particularly sensitive and tightly scoped system, a security policy could be a formal specification of constraints that can be verified against the program code by mathematical proof. This approach is often expensive and applicable only to an extremely controlled software environment. You would hope that embedded systems in devices such as traffic lights, elevators, airplanes, and life support equipment go through this kind of verification. Unfortunately, this approach is prohibitively expensive or unwieldy, even for many of those applications.
• A security policy could be a formal, written document with clauses such as "C.2. Credit card information (A.1.13) should never be disclosed to a third party (as defined in A.1.3) or transferred across any transmission media without sufficient encryption, as specified in Addendum Q." This clause could come from a policy written about the software, perhaps one created during the development process. It could also come from policies related to resources the software uses, such as a site security policy, an operating system (OS) policy, or a database security policy.
• The security policy could be composed solely of an informal, slightly ambiguous collection of people's expectations of reasonable program security behavior, such as "Yeah, giving a criminal organization access to our credit card database is probably bad."
Security Expectations
Considering the possible expectations people have about software security helps determine which issues they consider to be security violations. Security is often described as resting on three components: confidentiality, integrity, and availability. The following points consider people's expectations for each of these components.
1. Confidentiality requires that information be kept private. This includes any situation where software is expected to hide information or hide the existence of information. Software systems often deal with data that contains secrets, ranging from nation- or state-level secrets to trade secrets and sensitive personal information.
2. Integrity is the trustworthiness and correctness of data. It refers to expectations that people
have about software's capability to prevent data from being altered. Integrity refers not only to the
contents of a piece of data, but also to the source of that data. Software can maintain integrity by
preventing unauthorized changes to data sources. Other software might detect changes to data
integrity by making note of a change in a piece of data or an alteration of the data's origins.
3. Availability is the capability to use information and resources. It refers to the expectations users have about a system's availability and its resilience to denial-of-service (DoS) attacks.
The Necessity of Auditing
Code Auditing and the Development Life Cycle
When you consider the risks of exposing an application to potentially malicious users, the value of application security
assessment is clear. However, you need to know exactly when to perform an assessment. Generally, you can perform an audit at
any stage of the Systems Development Life Cycle (SDLC). However, the cost of
identifying and fixing vulnerabilities can vary widely based on when and how you choose to audit. So before you get started,
review the following phases of the SDLC:
1. Feasibility study: This phase is concerned with identifying the needs the project should meet and determining whether developing the solution is technologically and financially viable.
2. Requirements definition: In this phase, a more in-depth study of requirements for the project is done, and project goals are established.
3. Design: The solution is designed and decisions are made about how the system will technically achieve the agreed-on requirements.
4. Implementation: The application code is developed according to the design laid out in the previous phase.
5. Integration and testing: The solution is put through some level of quality assurance to ensure that it works as expected and to catch any bugs in the software.
6. Operation and maintenance: The solution is deployed and is now in use, and revisions, updates, and corrections are made as a result of user feedback.
Auditing Versus Black Box Testing
Vulnerability Classes
A vulnerability class is a set of vulnerabilities that share some unifying commonality: a pattern or concept that isolates a specific feature shared by several different software flaws. Granted, this definition might seem a bit confusing, but the bottom line is that vulnerability classes are just mental devices for conceptualizing software flaws. They are useful for understanding issues and communicating that understanding with others, but there isn't a single, clean taxonomy for grouping vulnerabilities into accurate, nonoverlapping classes. It's quite possible for a single vulnerability to fall into multiple classes, depending on the code auditor's terminology, classification system, and perspective.
Types of Vulnerabilities
Input and Data Flow
• The majority of software vulnerabilities result from unexpected behaviors triggered by a program's response to malicious data.
• So the first question to address is how exactly malicious data gets accepted by the system and causes such a serious impact. The
best way to explain it is by starting with a simple example of a buffer overflow vulnerability.
• Consider a UNIX program that contains a buffer overflow triggered by an overly long command-line argument. In this case, the malicious data is user input that comes directly from an attacker via the command-line interface. This data travels through the program until some function uses it in an unsafe way, leading to an exploitable situation (a minimal sketch of such a flaw appears after this list).
• For most vulnerabilities, you'll find some piece of malicious data that an attacker injects into the system to trigger the exploit.
However, this malicious data might come into play through a far more circuitous route than direct user input. This data can come
from several different sources and through several different interfaces.
• It might also pass through multiple components of a system and be modified a great deal before it reaches the location where it
ultimately triggers an exploitable condition. Consequently, when reviewing a software system, one of the most useful attributes
to consider is the flow of data throughout the system's various components.
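To make the command-line example above concrete, here is a minimal C sketch (hypothetical code, not taken from the course text) of the flaw it describes: an argument copied into a fixed-size stack buffer with no length check. The function name and the 64-byte buffer are illustrative choices.

/* Hypothetical sketch of a command-line buffer overflow. */
#include <stdio.h>
#include <string.h>

static void process_argument(const char *arg)
{
    char buf[64];

    /* BUG: strcpy() copies until the terminating NUL, so any argument longer
     * than 63 bytes overflows buf and corrupts adjacent stack memory. */
    strcpy(buf, arg);

    printf("processing: %s\n", buf);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        process_argument(argv[1]);
    return 0;
}

A bounded copy such as snprintf(buf, sizeof(buf), "%s", arg) would remove the overflow; the point of the sketch is how directly attacker-controlled input (argv[1]) reaches the unsafe operation.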
Contd..
• For example, you have an application that handles scheduling meetings for a large organization. At the end of every
month, the application generates a report of all meetings coordinated in this cycle, including a brief summary of
each meeting. Close inspection of the code reveals that when the application creates this summary, a meeting
description larger than 1,000 characters results in an exploitable buffer overflow condition.
• A developer might introduce some code for a reason completely unrelated to security, but it can have the side effect of protecting a vulnerable component farther down the data flow (the sketch following this list illustrates this). Also, tracing data flow in a real-world application can be exceedingly difficult. Complex systems often develop organically, resulting in highly fragmented data flows. The actual data might traverse dozens of components and weave in and out of third-party framework code during the process of handling a single user request.
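The following hypothetical C sketch illustrates that last point about incidental protection: a truncation added purely for report formatting happens to keep descriptions well under the 1,000-character threshold, so the unsafe copy downstream is never triggered along this particular data flow. All names and sizes are assumptions made for illustration.

#include <stdio.h>
#include <string.h>

#define SUMMARY_MAX 1000

/* Display layer: truncates to 80 characters purely for report formatting. */
static void format_for_display(char *dst, size_t dstlen, const char *description)
{
    snprintf(dst, dstlen, "%.80s", description);
}

/* Report layer: contains the latent flaw -- an unbounded copy that is only
 * exploitable if a description of 1,000 characters or more ever reaches it. */
static void append_to_report(const char *description)
{
    char summary[SUMMARY_MAX];
    strcpy(summary, description);
    printf("summary: %s\n", summary);
}

int main(void)
{
    const char *description = "Quarterly planning meeting for the platform team";
    char display[128];

    /* Because the formatting truncation runs first in this data flow, the
     * unsafe copy below never sees an overlong description. */
    format_for_display(display, sizeof(display), description);
    append_to_report(display);
    return 0;
}

If a later change passes the raw description to append_to_report(), or reorders these calls, the latent overflow becomes reachable, which is why tracing the complete data flow matters when auditing.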
Trust Relationships
Trust relationships are integral to the flow of data, as the level of trust between components often
determines the amount of validation that happens to the data exchanged between them.
• Designers and developers often consider an interface between two components to be trusted or
designate a peer or supporting software component as trusted. This means they generally believe
that the trusted component is impervious to malicious interference, and they feel safe in making
assumptions about that component's data and behavior.
• When evaluating trust relationships in a system, it's important to appreciate the transitive nature of
trust. For example, if your software system trusts a particular external component, and that
component in turn trusts a certain network, your system has indirectly placed trust in that network.
If the component’s trust in the network is poorly placed, it might fall victim to an attack that ends up
putting your software at risk.
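A small hypothetical C sketch of this misplaced, transitive trust: the handler below skips validation whenever a request arrives from an "internal" peer, on the assumption that the peer already validated it. If that peer in turn accepts data from an untrusted network without its own checks, the skipped validation becomes exploitable. The structure and names are assumptions for illustration only.

#include <stdbool.h>
#include <string.h>

#define MAX_NAME 64

struct request {
    bool from_internal_peer;   /* set by the transport layer */
    const char *username;      /* ultimately attacker-influenced data */
};

/* Returns 0 on success, -1 if the request is rejected. */
int handle_request(const struct request *req, char out_name[MAX_NAME])
{
    /* Misplaced trust: length validation is skipped for "trusted" peers. */
    if (!req->from_internal_peer && strlen(req->username) >= MAX_NAME)
        return -1;

    /* If the internal peer forwarded an unvalidated, overlong name,
     * this copy overflows out_name. */
    strcpy(out_name, req->username);
    return 0;
}

Validating at every trust boundary, rather than relying on what a peer is believed to have done, removes the transitive dependency.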
Assumptions and Misplaced Trust
I. Another useful way of looking at software flaws is to think of them in terms of programmers and designers
making unfounded assumptions when they create software. Developers can make incorrect assumptions about
many aspects of a piece of software, including the validity and format of incoming data, the security of
supporting programs, the potential hostility of its environment, the capabilities of its attackers and users, and
even the behaviors and nuances of particular application programming interface (API) calls or language
features.
II. The concept of inappropriate assumptions is closely related to the concept of misplaced trust because you can
say that placing undue trust in a component is much the same as making an unfounded assumption about that
component. The following sections discuss several ways in which developers can make security-relevant
mistakes by making unfounded assumptions and extending undeserved trust.
Interfaces
Interfaces are the mechanisms by which software components communicate with each other and the
outside world. Many vulnerabilities are caused by developers not fully appreciating the security properties
of these interfaces and consequently assuming that only trusted peers can use them.
For example, developers might expect a high degree of safety because they used a proprietary and complex
network protocol with custom encryption. They might incorrectly assume that attackers won't be likely to
construct their own clients and encryption layers and then manipulate the protocol in unexpected ways.
Unfortunately, this assumption is particularly unsound, as many attackers find a singular joy in reverse
engineering a proprietary protocol.
Contd..
To summarize, developers might misplace trust in an interface for the following reasons:
• They choose a method of exposing the interface that doesn't provide enough protection from external attackers.
• They choose a reliable method of exposing the interface, typically a service of the OS, but they use or configure it incorrectly. The attacker might also exploit a vulnerability in the base platform to gain unexpected control over that interface.
• They assume that an interface is too difficult for an attacker to access, which is usually a dangerous bet.
Exceptional Conditions
• Vulnerabilities related to handling exceptional conditions are intertwined with data and environmental vulnerabilities. Basically, an
exceptional condition occurs when an attacker can cause an unexpected change in a program's normal control flow via external
measures. This behavior can entail an asynchronous interruption of the program, such as the delivery of a signal. It might also
involve consuming global system resources to deliberately induce a failure condition at a particular location in the program.
• For example, a UNIX system sends a SIGPIPE signal if a process attempts to write to a closed network connection or pipe; the
default behavior on receipt of this signal is to terminate the process. An attacker might cause a vulnerable program to write to a pipe
at an opportune moment, and then close the pipe before the application can perform the write operation successfully.
• This would result in a SIGPIPE signal that could cause the application to abort and perhaps leave the overall system in an unstable
state. For a more concrete example, the Network File System (NFS) status daemon of some Linux distributions was vulnerable to
crashing caused by closing a connection at the correct time. Exploiting this vulnerability created a disruption in NFS functionality.
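The following hypothetical C sketch reproduces the SIGPIPE scenario in miniature: it closes the read end of a pipe and then writes to the write end. With the default signal disposition, the write would raise SIGPIPE and terminate the process; ignoring the signal turns the failure into a recoverable EPIPE error.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    /* Simulate the peer disappearing: close the read end before writing. */
    close(fds[0]);

    /* Without this line, the write below delivers SIGPIPE and the process
     * is killed before it can clean up -- the failure mode described above. */
    signal(SIGPIPE, SIG_IGN);

    if (write(fds[1], "data", 4) == -1)
        fprintf(stderr, "write failed: %s\n", strerror(errno));

    close(fds[1]);
    return 0;
}

A robust daemon typically ignores or handles SIGPIPE and checks write() return values, so that a peer closing a connection at an inopportune moment cannot terminate the whole process or leave shared state inconsistent.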