
CSD-4001

Software Vulnerability Testing

School of Computing Science and Engineering


VIT Bhopal University
Software Vulnerability Fundamentals

Vulnerabilities, Security Policies, Security Expectations, The Necessity of Auditing, Auditing vs Black-Box Testing, Classifying Vulnerabilities, Types of Vulnerabilities (Design, Implementation, and Operational Vulnerabilities), Gray Areas, Input and Data Flow, Trust Relationships, Assumptions and Misplaced Trust, Interfaces, Exceptional Conditions
Vulnerabilities
• Vulnerabilities are specific flaws or oversights in a piece of software that allow attackers to do something malicious: expose or alter sensitive information, disrupt or destroy a system, or take control of a computer system or program.

• In general, software vulnerabilities can be thought of as a subset of the larger phenomenon of software bugs. Security vulnerabilities are bugs that pack an extra hidden surprise: a malicious user can leverage them to launch attacks against the software and supporting systems.

• A bug must have some security-relevant impact or properties to be considered a security issue; in other words, it has to allow attackers to do something they normally wouldn't be able to do.
• For example, a program that allows you to edit a critical system file you shouldn't have access to might be operating completely correctly according to its specifications and design. So it probably wouldn't fall under most people's definition of a software bug, but it's definitely a security vulnerability (a sketch follows these bullets).

• The process of attacking a vulnerability in a program is called exploiting. Attackers might exploit a vulnerability by running the program in a clever way, altering or monitoring the program's environment while it runs, or, if the program is inherently insecure, simply using the program for its intended purpose.

• When attackers use an external program or script to perform an attack, this attacking
program is often called an exploit or exploit script.
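
To ground the "works as specified, yet still a vulnerability" point above, the following is a deliberately simplistic, hypothetical C sketch (not taken from the original material): a helper installed setuid root that overwrites whatever file the user names. It behaves exactly as specified, and that is precisely the problem.

    /* Hypothetical setuid-root "quick edit" helper: replaces the target file's
     * contents with the contents of a replacement file, exactly as specified. */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <target-file> <replacement-file>\n", argv[0]);
            return 1;
        }

        FILE *src = fopen(argv[2], "r");
        FILE *dst = fopen(argv[1], "w");   /* runs as root, so any file is writable */
        if (!src || !dst) { perror("fopen"); return 1; }

        int c;
        while ((c = fgetc(src)) != EOF)    /* copy replacement over target */
            fputc(c, dst);

        fclose(src);
        fclose(dst);
        return 0;
    }

There is no bug here in the conventional sense, but because the binary runs with root privileges, an ordinary user can point it at a file such as /etc/passwd; the flaw lies in what the specification allows, not in the code.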
Security Policies

As mentioned, attackers can exploit a vulnerability to violate the security of a system. One useful way to conceptualize the "security of a system" is to think of a system's security as being defined by a security policy. From this perspective, a violation of a software system's security occurs when the system's security policy is violated.

For a system composed of software, users, and resources, you have a security policy, which is simply a list of what's allowed and what's forbidden. This policy might state, for example, "Unauthenticated users are forbidden from using the calendar service on the staging machine." A problem that allows unauthenticated users to access the staging machine's calendar service would clearly violate the security policy.
This policy could take a few different forms, as described in the following list:

 For a particularly sensitive and tightly scoped system, a security policy could be a formal specification of constraints that can be
verified against the program code by mathematical proof. This approach is often expensive and applicable only to an extremely
controlled software environment. You would hope that embedded systems in devices such as traffic lights, elevators, airplanes, and life
support equipment go through this kind of verification. Unfortunately, this approach is prohibitively expensive or unwieldy, even for
many of those applications.

 A security policy could be a formal, written document with clauses such as "C.2. Credit card information (A.1.13) should never be
disclosed to a third party (as defined in A.1.3) or transferred across any transmission media without sufficient encryption, as specified in
Addendum Q." This clause could come from a policy written about the software, perhaps one created during the development process. It
could also come from policies related to resources the software uses, such as a site security policy, an operating system (OS) policy, or a
database security policy.

 The security policy could be composed solely of an informal, slightly ambiguous collection of people's expectations of reasonable program security behavior, such as "Yeah, giving a criminal organization access to our credit card database is probably bad."

Security Expectations

Considering the possible expectations people have about software security helps determine which issues they consider to be security violations. Security is often described as resting on three components: confidentiality, integrity, and availability. The following sections consider possible expectations for software security from the perspective of each of these cornerstones.

1. Confidentiality requires that information be kept private. This includes any situation where software is expected to hide information or hide the existence of information. Software systems often deal with data that contains secrets, ranging from nation- or state-level intelligence secrets to company trade secrets or even sensitive personal information.
2. Integrity is the trustworthiness and correctness of data. It refers to expectations that people have about software's capability to prevent data from being altered. Integrity refers not only to the contents of a piece of data, but also to the source of that data. Software can maintain integrity by preventing unauthorized changes to data sources. Other software might detect changes to data integrity by making note of a change in a piece of data or an alteration of the data's origins.

3. Availability is the capability to use information and resources. Generally, it refers to expectations users have about a system's availability and its resilience to denial-of-service (DoS) attacks.
The Necessity of Auditing

Auditing an application is the process of analyzing application code (in source or binary form) to uncover vulnerabilities that attackers might exploit. By going through this process, you can identify and close security holes that would otherwise put sensitive data and business resources at unnecessary risk.
Code Auditing and the Development Life Cycle

When you consider the risks of exposing an application to potentially malicious users, the value of application security assessment is clear. However, you need to know exactly when to perform an assessment. Generally, you can perform an audit at any stage of the Systems Development Life Cycle (SDLC). However, the cost of identifying and fixing vulnerabilities can vary widely based on when and how you choose to audit. So before you get started, review the following phases of the SDLC:

1. Feasibility study: This phase is concerned with identifying the needs the project should meet and determining whether developing the solution is technologically and financially viable.
2. Requirements definition: In this phase, a more in-depth study of requirements for the project is done, and project goals are established.
3. Design: The solution is designed, and decisions are made about how the system will technically achieve the agreed-on requirements.
4. Implementation: The application code is developed according to the design laid out in the previous phase.
5. Integration and testing: The solution is put through some level of quality assurance to ensure that it works as expected and to catch any bugs in the software.
6. Operation and maintenance: The solution is deployed and is now in use, and revisions, updates, and corrections are made as a result of user feedback.
Auditing Versus Black Box Testing

Black box testing is a method of evaluating a software system by manipulating only its exposed interfaces. Typically, this process involves generating specially crafted inputs that are likely to cause the application to perform some unexpected behavior, such as crashing or exposing sensitive data. For example, black box testing an HTTP server might involve sending requests with abnormally large field sizes, which could trigger a memory corruption bug (see "Memory Corruption").
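
As a concrete illustration of this style of testing, here is a minimal C sketch; the target host 127.0.0.1 and port 8080 are assumptions for a local test server, not part of the original example. It sends a request whose User-Agent field is far larger than any normal client would produce and simply observes how the server reacts.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(8080);                        /* assumed test port */
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);    /* assumed test host */
        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect"); return 1;
        }

        /* Abnormally large header field: 64 KB of 'A' characters. */
        static char huge[64 * 1024];
        memset(huge, 'A', sizeof(huge) - 1);

        static char req[70 * 1024];
        snprintf(req, sizeof(req), "GET / HTTP/1.0\r\nUser-Agent: %s\r\n\r\n", huge);
        write(fd, req, strlen(req));

        /* A crash, hang, or garbled reply here is a lead worth auditing in source. */
        char reply[256];
        ssize_t n = read(fd, reply, sizeof(reply) - 1);
        if (n > 0) { reply[n] = '\0'; printf("server replied: %.80s\n", reply); }
        else        printf("no reply (connection closed or server crashed?)\n");

        close(fd);
        return 0;
    }

Black box testing only shows that something went wrong from the outside; code auditing is what explains why, which is the point of the comparison in this section.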
Classifying Vulnerabilities

A vulnerability class is a set of vulnerabilities that share some unifying commonality: a pattern or concept that isolates a specific feature shared by several different software flaws.

Granted, this definition might seem a bit confusing, but the bottom line is that vulnerability classes are just mental devices for conceptualizing software flaws.

They are useful for understanding issues and communicating that understanding with others, but there isn't a single, clean taxonomy for grouping vulnerabilities into accurate, nonoverlapping classes. It's quite possible for a single vulnerability to fall into multiple classes, depending on the classification system and perspective you use.

Types of Vulnerabilities

1. The security community generally accepts design vulnerabilities as flaws in a software system's architecture and specifications.

2. Implementation vulnerabilities are low-level technical flaws in the actual construction of a software system.

3. Operational vulnerabilities are flaws that arise in deploying and configuring software in a particular environment.
Gray Areas
• The distinction between design and implementation vulnerabilities is deceptively simple in terms of the SDLC, but it's not
always easy to make. Many implementation vulnerabilities could also be interpreted as situations in which the design
didn’t anticipate or address the problem adequately. On the flip side, you could argue that lower-level pieces of a software
system are also designed, in a fashion. A programmer can design plenty of software components when implementing a
specification, depending on the level of detail the specification goes into.
• These components might include a class, a function, a network protocol, a virtual machine, or perhaps a clever series of loops and branches. Lacking a strict distinction, the following definition of a design vulnerability is used here:
• In general, when people refer to design vulnerabilities, they mean high-level issues with program architecture, requirements, base interfaces, and key algorithms. Expanding on that, the following definition of an implementation vulnerability is used:
• Security issues in the design of low-level program pieces, such as parts of individual functions and classes, are generally
considered to be implementation vulnerabilities. Implementation vulnerabilities also include more complex logical
elements that are not normally addressed in the design specification. (These issues are often called logic vulnerabilities.)

Input and Data Flow
• The majority of software vulnerabilities result from unexpected behaviors triggered by a program's response to malicious data.
• So the first question to address is how exactly malicious data gets accepted by the system and causes such a serious impact. The
best way to explain it is by starting with a simple example of a buffer overflow vulnerability.
• Consider a UNIX program that contains a buffer overflow triggered by an overly long command-line argument (a sketch follows this list). In this case, the malicious data is user input that comes directly from an attacker via the command-line interface. This data travels through the program until some function uses it in an unsafe way, leading to an exploitable situation.
• For most vulnerabilities, you'll find some piece of malicious data that an attacker injects into the system to trigger the exploit.
However, this malicious data might come into play through a far more circuitous route than direct user input. This data can come
from several different sources and through several different interfaces.
• It might also pass through multiple components of a system and be modified a great deal before it reaches the location where it
ultimately triggers an exploitable condition. Consequently, when reviewing a software system, one of the most useful attributes
to consider is the flow of data throughout the system's various components.
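
To make the command-line case above concrete, here is a minimal, intentionally vulnerable C sketch; the program name and the 64-byte buffer are hypothetical choices for illustration. An argument longer than the buffer overruns it because strcpy() performs no length check.

    /* vuln.c: intentionally vulnerable sketch; never use this pattern in real code. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        char name[64];                  /* fixed-size buffer on the stack */

        if (argc < 2) {
            fprintf(stderr, "usage: %s <name>\n", argv[0]);
            return 1;
        }

        /* No length check: an argument longer than 63 bytes overflows 'name' and
         * corrupts adjacent stack memory, the classic exploitable condition. */
        strcpy(name, argv[1]);

        printf("hello, %s\n", name);
        return 0;
    }

A bounded copy such as snprintf(name, sizeof(name), "%s", argv[1]) removes the overflow; the auditing work is recognizing every place where unchecked input reaches a copy like this.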

Contd..
• For example, suppose you have an application that handles scheduling meetings for a large organization. At the end of every month, the application generates a report of all meetings coordinated in this cycle, including a brief summary of each meeting. Close inspection of the code reveals that when the application creates this summary, a meeting description larger than 1,000 characters results in an exploitable buffer overflow condition (sketched after these bullets).

• A developer might introduce some code for a reason completely unrelated to security, but it can have the side effect of protecting a vulnerable component later down the data flow. Also, tracing data flow in a real-world application can be exceedingly difficult. Complex systems often develop organically, resulting in highly fragmented data flows. The actual data might traverse dozens of components and weave in and out of third-party framework code during the process of handling a single user request.
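
As a rough sketch of where that report flaw might live (the helper function and buffer below are hypothetical, not taken from a real scheduling product), the report generator copies the meeting description into a fixed 1,000-byte buffer without checking its length:

    #include <stdio.h>

    /* Hypothetical report helper: writes one meeting's summary line. */
    void append_meeting_summary(FILE *report, const char *description)
    {
        char summary[1000];

        /* Vulnerable: a description of 1,000 characters or more overflows 'summary'. */
        sprintf(summary, "%s", description);

        fprintf(report, "%s\n", summary);
    }

The interesting auditing question is not the sprintf() itself (snprintf with sizeof(summary) fixes it) but how attacker-controlled text reaches description months after it was entered, which is exactly the data-flow tracing this section describes.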

Trust Relationships
Trust relationships are integral to the flow of data, as the level of trust between components often
determines the amount of validation that happens to the data exchanged between them.
• Designers and developers often consider an interface between two components to be trusted or
designate a peer or supporting software component as trusted. This means they generally believe
that the trusted component is impervious to malicious interference, and they feel safe in making
assumptions about that component's data and behavior.
• When evaluating trust relationships in a system, it's important to appreciate the transitive nature of
trust. For example, if your software system trusts a particular external component, and that
component in turn trusts a certain network, your system has indirectly placed trust in that network.
If the component’s trust in the network is poorly placed, it might fall victim to an attack that ends up
putting your software at risk.
Assumptions and Misplaced Trust

I. Another useful way of looking at software flaws is to think of them in terms of programmers and designers
making unfounded assumptions when they create software. Developers can make incorrect assumptions about
many aspects of a piece of software, including the validity and format of incoming data, the security of
supporting programs, the potential hostility of its environment, the capabilities of its attackers and users, and
even the behaviors and nuances of particular application programming interface (API) calls or language
features.
II. The concept of inappropriate assumptions is closely related to the concept of misplaced trust because you can
say that placing undue trust in a component is much the same as making an unfounded assumption about that
component. The following sections discuss several ways in which developers can make security-relevant
mistakes by making unfounded assumptions and extending undeserved trust.

Interfaces
Interfaces are the mechanisms by which software components communicate with each other and the
outside world. Many vulnerabilities are caused by developers not fully appreciating the security properties
of these interfaces and consequently assuming that only trusted peers can use them.

For example, developers might expect a high degree of safety because they used a proprietary and complex
network protocol with custom encryption. They might incorrectly assume that attackers won't be likely to
construct their own clients and encryption layers and then manipulate the protocol in unexpected ways.
Unfortunately, this assumption is particularly unsound, as many attackers find a singular joy in reverse
engineering a proprietary protocol.

Contd..

To summarize, developers might misplace trust in an interface for the following reasons:
 They choose a method of exposing the interface that doesn't provide enough protection from
external attackers.
 They choose a reliable method of exposing the interface, typically a service of the OS, but they use
or configure it incorrectly. The attacker might also exploit a vulnerability in the base platform to gain
unexpected control over that interface.
 They assume that an interface is too difficult for an attacker to access, which is usually a dangerous
bet.

Exceptional Conditions
• Vulnerabilities related to handling exceptional conditions are intertwined with data and environmental vulnerabilities. Basically, an exceptional condition occurs when an attacker can cause an unexpected change in a program's normal control flow via external measures. This behavior can entail an asynchronous interruption of the program, such as the delivery of a signal. It might also involve consuming global system resources to deliberately induce a failure condition at a particular location in the program.

• For example, a UNIX system sends a SIGPIPE signal if a process attempts to write to a closed network connection or pipe; the default behavior on receipt of this signal is to terminate the process. An attacker might cause a vulnerable program to write to a pipe at an opportune moment, and then close the pipe before the application can perform the write operation successfully.

• This would result in a SIGPIPE signal that could cause the application to abort and perhaps leave the overall system in an unstable state. For a more concrete example, the Network File System (NFS) status daemon of some Linux distributions was vulnerable to crashing caused by closing a connection at the correct time. Exploiting this vulnerability created a disruption in NFS functionality.
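
The SIGPIPE behavior described above can be seen in a few lines of C; the pipe setup here is contrived purely to demonstrate the signal and is not the NFS case itself. Writing to a pipe whose read end has been closed delivers SIGPIPE, and a process that has not arranged to handle it is terminated.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        close(fds[0]);                  /* the read end goes away before we write */

        /* Uncomment the next line to change the outcome: write() then fails with
         * EPIPE instead of the process being killed by SIGPIPE. */
        /* signal(SIGPIPE, SIG_IGN); */

        if (write(fds[1], "data", 4) < 0)
            perror("write");            /* reached only if SIGPIPE is ignored */

        puts("still running");          /* never printed under the default disposition */
        return 0;
    }

Robust services therefore ignore or handle SIGPIPE and check write() return values, so that a peer closing a connection at an opportune moment cannot abort the whole process.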