Fuzzing For Software Security Testing and Quality Assurance

The document discusses fuzz testing, which involves sending anomalous or unexpected input to a system in order to uncover bugs and vulnerabilities. Some key points: - Fuzz testing focuses on input validation errors and tests the application interface without considering responses. It aims to crash systems through faults like buffer overflows. - Fuzzers can be categorized by their injection method or test complexity, ranging from static and random to dynamic and model-based. - Vulnerabilities tested include buffer overflows, integer overflows, and format string vulnerabilities. Strategies like input sanitization and defense in depth are recommended to improve security.


BOOK - Fuzzing for Software Security Testing and Quality Assurance

Chapter 1 - Introduction :
--------------------------------------------------------------
>> the purpose of fuzzing is to send anomalous data to a system to
crash it, therefore revealing reliability problems.

>> typical uses of fuzzing -


1. Quality Assurance (QA): Testing and securing your internally developed
software.
2. System Administration (SA): Testing and securing software on which you
depend in your own usage environment.
3. Vulnerability Assessment (VA): Testing and trying to break into someone
else’s software or system

>> Fuzzing in future technologies -


Next Generation Networks (Triple-Play) such as VoIP and IPTV;
Data/video streaming protocols such as MPEG2-TS (DVB-C/S/T);
IPv6 and related protocols;
Wireless technologies such as WiFi, 6LoWPAN, Zigbee, Bluetooth, NFC, and RFID;
Industrial networks (SCADA);
Vehicle area networks such as CAN and MOST

>> Focus of Fuzzing :


>> it focuses on input validation errors
>> it focuses on the actual application and dynamic testing of a finished product
>> it ignores the responses or valid behaviour
>> it concentrates on testing interfaces that have security implications

>> Target Tests :


i) SUT (System Under Test) -
>> a SUT consists of several subsystems, or it can represent an entire
network with various systems running on it
>> eg: banking systems
ii) DUT (Device Under Test) -
>> a DUT is a service or piece of equipment connected to a larger system
>> eg: routers, WLAN access points
iii) IUT (Implementation Under Test) -
>> an IUT is a binary representation of software
>> eg: a process running on an OS, a web application, etc.
iv) UUT (Unit Under Test) -
>> a subcomponent, i.e. a module, library, or function

>> security-related programming mistakes


i) Inability to handle invalid lengths and indices
ii) Inability to handle out-of-sequence or out-of-state messages
iii) Inability to tolerate overflows (large packets or elements)
iv) Inability to tolerate missing elements or underflows
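The first mistake can be illustrated with a short Python sketch, assuming a hypothetical record format with a 2-byte big-endian length prefix: one parser trusts the attacker-supplied length, the other validates it first.

```python
import struct

def parse_record_unsafe(packet: bytes) -> bytes:
    # Mistake (i): the claimed length is trusted blindly. Python slicing
    # just silently truncates, masking the mismatch; in C the same pattern
    # becomes a buffer over-read or overflow.
    (length,) = struct.unpack_from(">H", packet, 0)
    return packet[2:2 + length]

def parse_record_safe(packet: bytes) -> bytes:
    # Defensive version: validate the claimed length against the data
    # actually received before using it.
    if len(packet) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">H", packet, 0)
    if len(packet) - 2 < length:
        raise ValueError("claimed length exceeds payload")
    return packet[2:2 + length]
```

A fuzzer exercising this format would mutate the length prefix independently of the payload, which is exactly what the unsafe parser fails to anticipate.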

>> Fault injection - faults are injected into an actual product

>> tests similar to fuzzing,


Negative testing
Protocol Mutation
Robustness Testing
Syntax Testing
Fault Injection
Rainy-Day Testing
Dirty Testing

>> Fuzzers can be categorized in two ways :


i) Injection Vector or Attack Vector
ii) Test Case Complexity

>> Different types of anomalies and the resulting failures


i) Field Level
>> overflows, integer anomalies
ii) Structural Level
>> underflows, repetition of elements, unexpected elements
iii) Sequence Level
>> out-of-sequence or omitted messages, unexpected repetition, spamming
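As a rough sketch (Python; the field values and messages are made up for illustration), anomalies at the three levels might be generated like this:

```python
def field_anomalies():
    # Field level: overflows and integer anomalies for a single field.
    return [b"A" * 5000, b"0", b"-1", b"4294967296", b"%n" * 10]

def structural_anomalies(fields):
    # Structural level: underflow (missing element), repetition of
    # elements, and an unexpected extra element.
    yield fields[:-1]                    # missing element (underflow)
    yield fields + fields                # repeated elements
    yield fields + [b"\x00UNEXPECTED"]   # unexpected element

def sequence_anomalies(messages):
    # Sequence level: out-of-order, omitted, and repeated messages.
    yield list(reversed(messages))       # out of sequence
    yield messages[1:]                   # omitted message
    yield messages + messages * 50       # spamming / unexpected repetition
```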

What fuzzing finds (resulting failures)


>> crashes, DoS, security exposures, performance degradation, slow responses,
thrashing, anomalous responses

>> How the SUT processes a test case :


i) decode input
ii) check semantics
iii) update state, generate output

>> Fuzzers based on Test Case Complexity :


i) Static and random template-based fuzzer - only tests request-response
protocols, no dynamic functionality, protocol awareness is zero
ii) Block-based fuzzer - implements the basic structure of a request-response
protocol and can contain dynamic functionality such as calculation of checksums
and length values
iii) Dynamic generation or evolution-based fuzzer - learns from a
feedback loop with the target system; it might or might not break the message
sequence
iv) Model-based or simulation-based fuzzer - implements the tested interface
either through a model or a simulation; not only message structures are fuzzed but
also unexpected messages in sequence can be generated
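A minimal sketch of the block-based idea, assuming an imaginary protocol whose messages carry a 4-byte length prefix and a trailing CRC32: the payload block is mutated freely, while the dependent fields are recomputed so the message still passes the parser's header checks.

```python
import zlib

def build_block_message(payload: bytes) -> bytes:
    # Block-based fuzzers keep the message structurally valid by
    # recomputing dependent fields (here, a length and a CRC32 checksum
    # for a hypothetical protocol) after the payload block is mutated.
    length = len(payload).to_bytes(4, "big")
    checksum = zlib.crc32(payload).to_bytes(4, "big")
    return length + payload + checksum

# Mutate the payload block, then let the framework fix up the dependent
# fields so the anomaly reaches the code behind the header checks.
fuzzed = build_block_message(b"A" * 1024)
```

Without this fix-up step, a purely random fuzzer's inputs would usually be rejected at the length or checksum check and never reach the deeper parsing code.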

>> typical structure of a fuzzer


i) Protocol modeller - models the data formats and message sequences
ii) Anomaly library - collection of inputs to trigger vulnerabilities in a
s/w
iii) Attack Simulation Engine - uses library of attacks
iv) Runtime Analysis Engine - Monitors SUT
v) Reporting - test results
vi) Documentation - Test case docs

>> Fuzzing Process - a fuzz test consists of a sequence of messages (requests,
responses, or files) sent to the SUT

>> Results of a fuzz test


i) Valid Response
ii) Error Response
iii) Anomalous Response ( unexpected reaction such as slowdown or responding
with corrupted message)

eg - request format - GET \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\&


response format - 200 OK, Internal Server Error, <========= NO RESPONSE =========>
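The three result classes can be sketched as a response classifier (Python; the status strings and the 5-second slowdown threshold are assumptions for this sketch, not from the book):

```python
def classify_response(response, elapsed_seconds, timeout=5.0):
    # Map a SUT reaction to the result classes above. A missing response
    # suggests a crash or hang; a long delay counts as an anomalous
    # (slowdown) response even if the content looks valid.
    if response is None:
        return "no response"          # possible crash or hang
    if elapsed_seconds > timeout:
        return "anomalous response"   # slowdown
    status_line = response.split(b"\r\n", 1)[0]
    if b"200" in status_line:
        return "valid response"
    if b"500" in status_line:
        return "error response"
    return "anomalous response"       # e.g. corrupted message
```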

Chapter 2 - Software Vulnerability Analysis :
--------------------------------------------------------------
>> A heap overflow is when data is written beyond the boundary of an allocated
chunk of memory on the heap.

>> A stack overflow involves memory on the stack getting corrupted due to improper
bounds checking when a memory write operation takes place.

>> variable overflows, eg: integer overflow, i.e. numerical wrapping, field
truncation, or signedness problems
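These three integer problems are easy to demonstrate with Python's ctypes fixed-width types, which mimic C integer behavior:

```python
import ctypes

# Numerical wrapping: an unsigned 8-bit counter silently wraps to 0.
counter = ctypes.c_uint8(255)
counter.value += 1
assert counter.value == 0

# Signedness problem: the same bit pattern read as signed flips negative.
assert ctypes.c_int8(0xFF).value == -1

# Field truncation: a 32-bit value stored into a 16-bit field loses its
# high bits, so 0x10000 becomes 0.
assert ctypes.c_uint16(0x10000).value == 0
```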

>> RATS is a static analysis tool that scans program source files and lists
potential issues; it also highlights buffer overflows present in the code

>> Input Source and Input Space are related terms that refer to how data is
fed to the application being fuzzed; the Input Space is the entire set of all
possible permutations that can be sent to the target

>> Fuzzing examples -


1. Buffer overflows are tested with long strings. For example:
[Client]-> “user jared\r\n”
“user Ok. Provide pass.\r\n” <-[Server]
[Client]-> “pass <5000 ‘A’s>\r\n”
2. Integer overflows are tested with unexpected numerical values such as: zero,
small, large, negative; wrapping at numerical boundaries (2^4, 2^8, 2^16,
2^24); wrong number system (floats vs. integers). For example:
[Client]-> “user jared\r\n”
“user Ok. Provide pass.\r\n” <-[Server]
[Client]-> “pass jared\r\n”
“pass Ok. Logged in. Proceed with next command.\r\n” <-[Server]
[Client]-> “get [-100000.98] files: *\r\n”
Format string vulnerabilities are tested with strings such as:
[Client]-> “user <10 ‘%n’s>”
‘%n’s are useful because of the way the printf family of functions was
designed. A percent sign followed by a letter is referred to as a format
string. The ‘n’ is the only switch that triggers a write and is therefore useful
for triggering a crash while fuzzing. ‘x’ or ‘s’ may actually be a better
choice in some cases, as the ‘n’ usage may be disabled.
3. Parse error: NULL after string instead of \r\n. Bad string parsing code might
be expecting a carriage return (\r or 0x0d) or linefeed (\n or 0x0a) in a given
packet and may incorrectly parse data if nothing or a NULL exists in its place. The
NULL (0x00) is special because string functions will terminate on it, when
perhaps the parsing code wouldn’t expect it to since no newline is present.
[Client]-> “user jared0x00”
4. Parse error: Incorrect order and combined commands in one packet. Often,
network daemons expect each command to arrive in a separate packet. But
what if they don’t? And what if they’re out of order, and all strung together
with linefeeds in one packet? Bad things could happen to the parser.
[Client]-> “pass jared\r\nuser jared\r\n”
5. Parse error: Totally random binary data. If there is a particular character(s)
that the parser is looking for but might not handle well in an unexpected
scenario, this might uncover such an issue.
[Client]-> “\xff\xfe\x00\x01\x42\xb5...”
6. Parse error: Sending commands that don’t make sense—multiple login.
Design or logic flaws can also sometimes be uncovered via fuzzing.
[Client]-> “user jared\r\n”
“user Ok. Provide pass.\r\n” <-[Server]
[Client]-> “pass jared\r\n”
“pass Ok. Logged in. Proceed with next command.\r\n” <-[Server]
[Client]-> “user jared\r\n”
7. Parse error: Wrong number of statement helpers such as ‘../’, ‘{’, ‘(’, ‘[’,
etc.
Many network protocols such as HTTP have multiple special characters such
as ‘:’, “\\”, etc. Unexpected behavior or memory corruption issues can creep
in if parsers are not written very carefully.
[Client]-> “user jared\r\n”
“user Ok. Provide pass.\r\n” <-[Server]
[Client]-> “pass jared\r\n”
“pass Ok. Logged in. Proceed with next command.\r\n” <-[Server]
[Client]-> “get [1] files: {{../../../../etc/password\r\n”
8. Parse error: Timing issue with incomplete command termination. Suppose
we want to DoS the server. Clients can often overwhelm servers with a command
that is known to cause processing to wait on the server end. Or
perhaps this uses up all the validly allowed connections (like a SYN flood)
in a given window of time.
[Client]-> “user jared\r” (@ 10,000 pkts/second with no read for
server response)
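Several of the payloads above can be produced by a small generator (Python; the commands and the user name “jared” are the hypothetical ones from the examples; examples 6 and 8 are omitted since they concern message sequence and send rate rather than a single payload):

```python
def login_fuzz_cases(user="jared"):
    # One payload per technique above, for the hypothetical login protocol.
    crlf = "\r\n"
    yield f"pass {'A' * 5000}{crlf}".encode()             # 1. long string
    yield f"get [-100000.98] files: *{crlf}".encode()     # 2. bad number
    yield ("user " + "%n" * 10).encode()                  # format string
    yield f"user {user}".encode() + b"\x00"               # 3. NUL, no CRLF
    yield f"pass {user}{crlf}user {user}{crlf}".encode()  # 4. out of order,
                                                          #    combined commands
    yield b"\xff\xfe\x00\x01\x42\xb5"                     # 5. random binary
    yield f"get [1] files: {{{{../../../../etc/password{crlf}".encode()  # 7.
```

A fuzzer would typically interleave these anomalous cases with valid login exchanges so each payload reaches the intended protocol state.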

>> a network server that needs to be able to run at very high speeds would not be
written in Python
or Ruby, because it would be too slow. C would be the best choice for speed. This
is because C provides the programmer the ability to manage low-level operations,
such as memory management (malloc(), free(), etc.).

>> Strategies for Defensive coding :


i) Reduce code complexity
ii) Source code reviews
iii) Quality Control
iv) Code Reuse
v) Secure input/output handling
vi) Canonicalisation
vii) Principle of least privilege
viii) Assume the worst
ix) Encrypt
x) Stay up to date

Chapter 3 - Quality Assurance and Testing :
--------------------------------------------------------------
>> Validation testing, to show that the software functions according to user
requirements
>> Defect testing, to uncover flaws in s/w rather than simulate its operational use
>> Structural Testing or white-box testing, uses access to source code to reveal
flaws in software
>> Functional Testing or black-box testing, tests s/w through external interface
>> Gray-box testing, combination of white box and black box
>> Black Box testing Techniques
1. Load testing - eg: DoS testing
2. Stress testing(operational environment) - size and speed of available
memory,size and speed of disk
3. Security scanner - scanning for vulnerabilities
4. Unit testing - testing application logic
5. Fault injection - to forecast the behavior of h/w during operations
6. Syntax testing - to verify that the system does some form of input validation;
covers syntax errors, delimiter errors, field value errors, context-dependent
errors, and state-dependent errors
7. Negative testing
8. Regression testing - post-release testing is also known as regression
testing
Chapter 4 - Fuzzing Metrics :
--------------------------------------------------------------
>> A simple product life cycle consist of following phases :
Predeployment (development):
• Requirements and design
• Implementation
• Development testing
• Acceptance testing
Postdeployment (maintenance):
• Integration testing
• System maintenance
• Update process and regression testing
• Retirement

>> Fuzzing methodology


1. Take any communication interface and find all known vulnerabilities that
have been reported in any implementations of it.
2. For each found known issue, find out the exact input (packet, message
sequence, or field inside a file format) that triggers it.
3. For each known issue, identify how the problem could be detected by monitoring
the target system if the trigger was sent to it.
4. Map these triggers to the test coverage of black-box testing tools such
as fuzzers.

>> Expected number of bugs = Number of tests * Probability of finding a defect
per test
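A worked instance of the formula above (the per-test probability is illustrative, not a figure from the book):

```python
# If each generated test case has a 0.02% chance of exposing a defect,
# then 50,000 tests are expected to reveal about 10 bugs.
number_of_tests = 50_000
p_defect_per_test = 0.0002
expected_bugs = number_of_tests * p_defect_per_test
print(round(expected_bugs))  # 10
```

In practice the per-test probability is not constant: it depends on test-case quality and drops as the easy-to-find defects are fixed, so the formula is a planning estimate rather than a prediction.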

>> Vulnerability risk metrics


• Criticality rating (numeric value): Metric for prioritizing vulnerabilities.
• Business impact (high/medium/low, mapped to a numeric value): Estimates the
risk or damage to assets.
• Exploitability: Type of, ease of, or possibility for exploiting the
vulnerability.
• Category of the compromise: Various vulnerability categories and the related
exposure through successful exploitation: confidentiality, integrity,
availability, total compromise.
• Interface: Remote or local, or further details such as identifying the used
attack vector or communication protocol.
• Prerequisites: What is needed for exploitation to succeed; for example, is
there a requirement for successful authentication?

Chapter 5 - Building and Classifying Fuzzers :
--------------------------------------------------------------
>> There are certain things every sufficiently complex fuzzer should be capable of
doing:
• Model protocol data in a variety of complex scenarios: blocks, loops,
choices, and so forth;
• Use a library of mutations and anomalies;
• Compute relationships like checksums, hashes, and so forth;
• Associate data as lengths of other data;
• Send test cases over a variety of interfaces;
• Monitor the process in a variety of ways;
• Log results with test cases
