
Software security, secure programming

Lecture 1: introduction

Master M2 Cybersecurity & MoSiG

Academic Year 2020 - 2021


Who are we ?

Teaching staff
I Laurent Mounier (UGA), Marie-Laure Potet (G-INP)
Mathias Ramparon (G-INP)
I research within Verimag Lab
I research focus: formal verification, code analysis, compilation
techniques, language semantics ... and (software) security !

Attendees
I Master M2 on Cybersecurity [mandatory course]
I Master M2 MoSiG [optional course]

→ various skills, backgrounds and interests . . .

2 / 21
Agenda

Part 1: an overview of software security and secure programming


I 6 weeks (18 hours)
I for all of you . . .
I classes on Tuesday (11.15am-12.45pm) and Wednesday (2pm-3.30pm)
→ includes lectures, exercises and lab sessions . . .

Part 2: some tools and techniques for software security


I 7 weeks (21 hours)
I for M2 Cybersecurity only
I class on Tuesday (8.15am - 11.15am)

3 / 21
Examination rules
The rules of the game . . .

Assignments
I M1 : a written exam (duration ∼ 1.5h, mid-November)
I M2 : a (short) report on lab sessions
I M3 : an oral presentation (in January)
I M4 : final written exam (duration=2h, end of January)

Mark computation (3 ECTS)


I for MoSiG students:

M = (0.3 × M2 ) + (0.7 × M1 )
I for Cybersecurity students:

M = (0.5 × M1 + 0.25 × M2 + 0.25 × M3 )/2 + (0.5 × M4 )

4 / 21
Course user manual

An (on-going) course web page . . .

http://www-verimag.imag.fr/~mounier/Enseignement/Software_Security

I course schedule and materials (slides, etc.)


I weekly reading suggestions to complement the lecture
I other background reading/browsing advice . . .

During the classes . . .


Alternation between lectures, written exercises, lab exercises . . .
. . . but no “formal” lectures → questions & discussions always welcome !

heterogeneous audience + open topics ⇒ high interactivity level !

5 / 21
Prerequisites
Ideally . . .

This course is concerned with:

Programming languages
I at least one (classical) imperative language:
C or C++ ? Java ?? Python ??? . . .
I some notions on compilation & language semantics

What happens behind the curtain


Some notions about:
I assembly code (ARM, x86, others . . . )
I memory organization (stack, heap)

6 / 21
Outline

Some practical information

What is software security (not) about ?

About software security


The context: computer system security . . .

Question 1: what is a “computer system”, or an execution platform ?

Many possible incarnations, e.g.:


I (classical) computer: mainframe, server, desktop
I mobile device: phone, tablets, audio/video player, etc.
. . . up to IoT, smart cards, . . .
I embedded (networked) systems: inside a car, a plane, a
washing-machine, etc.
I cloud computing, virtual execution environment
I but also industrial networks (Scada), . . . etc.
I and certainly many more !

→ 2 main characteristics:
I include hardware + software
I open/connected to the outside world . . .

7 / 21
The context: computer system security . . . (cont’d)

Question 2: what does security mean ?

I a set of general security properties: CIA


Confidentiality, Integrity, Availability (+ Non Repudiation + Anonymity + . . . )

I concerns the running software + the whole execution platform


(other users, shared resources and data, peripherals, network, etc.)

I depends on an intruder model


→ there is an “external actor”1 with an attack objective in mind, and
able to elaborate a dedicated strategy to achieve it (≠ hazards)
,→ something beyond safety and fault-tolerance

→ A possible definition:
I functional properties = what the system should do
I security properties = what it should not allow w.r.t. the intruder model . . .

Rk: functional properties do matter for “security-oriented” software (firewalls, etc.)!

1: could be the user, or the execution platform itself!
8 / 21
Example 1: password authentication
Is this code “secure” ?
boolean verify (char[] input, char[] passwd , byte len) {
// No more than triesLeft attempts
if (triesLeft < 0) return false ; // no authentication
// Main comparison
for (short i=0; i <= len; i++)
if (input[i] != passwd[i]) {
triesLeft-- ;
return false ; // no authentication
}
// Comparison is successful
triesLeft = maxTries ;
return true ; // authentication is successful
}

functional property:

verify(input, passwd, len) ⇔ input[0..len] = passwd[0..len]

What do we want to protect ? Against what ?


I confidentiality of passwd, information leakage ?
I no unexpected runtime behaviour
I code integrity, etc.
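One concrete instance of the confidentiality question: the early `return false` in the comparison loop makes `verify`'s running time depend on the position of the first mismatching byte, which an attacker can measure to recover `passwd` byte by byte. A minimal C sketch (hypothetical, not the course's reference code) of a comparison without that timing leak:

```c
#include <stddef.h>

/* Constant-time comparison: always scans all `len` bytes and never
   exits early, so the running time does not reveal where the first
   mismatch occurs. The loop index also stays strictly below len,
   avoiding any read past the buffers. */
static int verify_ct(const unsigned char *input,
                     const unsigned char *passwd, size_t len) {
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (unsigned char)(input[i] ^ passwd[i]);
    return diff == 0;   /* 1 iff every byte matched */
}
```

The accumulator trick (OR-ing XOR differences) is a standard pattern in cryptographic libraries; whether it stays constant-time after compiler optimization is itself a subtle question.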
9 / 21
Example 2: file compression

Let us consider 2 programs:


I Compress, to compress a file f
I Uncompress, to uncompress a (compressed) file c

A functional property: the one we will try to validate . . .

∀f .Uncompress(Compress(f )) = f (1)

But, what about uncompressing an arbitrary (i.e., maliciously crafted) file ?


(e.g., CVE-2010-0001 for gzip)
A security property:

(∄f . Compress(f ) = c) ⇒ (Uncompress(c) does not crash) (2)

(uncompressing an arbitrary file should not produce unexpected crashes)


Actually (2) is much more difficult to validate than (1) . . .
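To make the gap between (1) and (2) concrete, here is a hypothetical toy run-length codec in C (nothing to do with gzip). The round trip of (1) holds by construction; but `rle_uncompress` blindly trusts the run counts stored in its input, so a maliciously crafted compressed file with inflated counts can drive it far past the output buffer — exactly the kind of property-(2) violation that (1) says nothing about.

```c
#include <string.h>
#include <stddef.h>

/* Toy run-length compressor: emits (count, byte) pairs, runs capped
   at 255. Output buffer must hold up to 2*n bytes. */
size_t rle_compress(const unsigned char *in, size_t n, unsigned char *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255) run++;
        out[o++] = (unsigned char)run;
        out[o++] = in[i];
        i += run;
    }
    return o;
}

/* Toy uncompressor: expands each (count, byte) pair. Note that it
   trusts in[i] completely -- the caller has no way to bound the
   output size against a crafted input. */
size_t rle_uncompress(const unsigned char *in, size_t n, unsigned char *out) {
    size_t o = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        for (unsigned char k = 0; k < in[i]; k++)
            out[o++] = in[i + 1];
    return o;
}
```

Testing property (1) on sampled inputs is easy; establishing (2) means reasoning about *all* possible compressed inputs, including ones no compressor ever produced.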

Demo: make `python -c 'print "A"*5000'`

10 / 21
Why do we need to bother about crashes ?

crash = consequence of an unexpected run-time error


I not trapped/foreseen by the programmer
I nor at the “programming language level”

⇒ part of the execution:


I may take place outside the program scope
(beyond the program semantic)
I but can be controlled/exploited by an attacker (∼ “weird machine”)

(diagram) normal execution → runtime error → out-of-scope execution → crash . . . possibly exploitable

,→ may break all security properties ...


from simple denial-of-service to arbitrary code execution
Rk: may also happen silently (without any crash !)
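One way such a silent error can look in C (a hypothetical sketch, not taken from the slides): unsigned arithmetic wraps around without any trap, so a naive allocation-size check is defeated while every instruction executes "normally" and nothing ever crashes at the point of the error.

```c
#include <stdint.h>
#include <stddef.h>

/* Naive check: count * size can wrap around (wrap-around is defined
   behavior for unsigned types), the product then looks tiny, and the
   check silently passes for absurd values of count. */
int fits_naive(size_t count, size_t size, size_t limit) {
    return count * size <= limit;
}

/* Safe check: divide instead of multiply, so no intermediate result
   can overflow. */
int fits_safe(size_t count, size_t size, size_t limit) {
    return size == 0 || count <= limit / size;
}
```

A later copy of `count` elements into the under-sized buffer is where the corruption (or the crash) finally surfaces, possibly far from the faulty check.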

11 / 21
Some (not standardized) definitions . . .

Bug: an error (or defect/flaw/failure) introduced in a SW, either


I at the specification / design / algorithmic level
I at the programming / coding level
I or even by the compiler (or any other pgm transformation tools) . . .

Vulnerability: a weakness (for instance a bug !) that opens a security breach


I non-exploitable vulnerabilities: there is no (known !) way for an attacker
to use this bug to corrupt the system
I exploitable vulnerabilities: this bug can be used to elaborate an attack
(i.e., write an exploit)

Exploit: a concrete program input allowing an attacker to exploit a vulnerability


(from an attacker point of view !)
PoC exploit: assumes that existing protections are disabled
(i.e., they can be bypassed with other existing exploits)

Malware: a piece of code “injected” inside a computer to corrupt it


→ they usually exploit existing vulnerabilities . . .

12 / 21
Software vulnerability examples

Case 1 (not so common . . . )


Functional property not provided by a security-oriented component
I too weak crypto-system,
I no (strong enough) authentication mechanism,
I etc.

Case 2 (the vast majority !)


Insecure coding practice in (any !) software component/application
I improper input validation → SQL injection, XSS, etc.
I insecure shared resource management (file system, network)
I information leakage (lack of data encapsulation, side channels)
I exploitable run-time error
I etc.

13 / 21
The intruder model

Who is the attacker ?


I a malicious external user, interacting via regular input sources
e.g., keyboard, network (man-in-the-middle), etc.
I a malicious external “observer”, interacting via side channels
(execution time, power consumption)
I another application running on the same platform
interacting through shared resources like caches, processor elements, etc.
I the execution platform itself (e.g., when compromised !)

What is he/she able to do ?


At low level:
I unexpected memory read (data or code)
I unexpected memory write (data or code)
⇒ powerful enough for
I information disclosure
I unexpected/arbitrary code execution
I privilege elevation, etc.
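A hypothetical C model of why these two write primitives suffice (real attacks target process memory; here the "memory" is an explicit array so the behavior stays defined): an `is_admin` flag stored right after a name field gets flipped by one unchecked copy.

```c
#include <stddef.h>

#define NAME_LEN 8

/* Layout assumption of this toy model: mem[0..7] holds the user
   name, mem[8] is an is_admin flag. copy_name never checks n
   against NAME_LEN, so an over-long name spills into the flag --
   an unexpected write turned into privilege elevation. */
void copy_name(unsigned char *mem, const unsigned char *name, size_t n) {
    for (size_t i = 0; i < n; i++)
        mem[i] = name[i];          /* missing: n <= NAME_LEN check */
}
```

The same logic, applied to return addresses instead of flags, is what turns an unexpected write into arbitrary code execution.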

14 / 21
Outline

Some practical information

What is software security (not) about ?

About software security


Some evidence regarding cyber (un)-security
So many examples of successful computer system attacks:
I the “famous ones”: (at least one per year !)
Morris worm, Stuxnet, Heartbleed, WannaCry, Spectre, etc.
I the “cyber-attacks” against large organizations: (+ 400% in 10 years)
Sony, Yahoo, Paypal, e-Bay, etc.
I all the daily vulnerability alerts: [have a look at these sites !]
http://www.us-cert.gov/ncas
http://www.securityfocus.com
http://www.securitytracker.com
I etc.

Why ? Who can we blame for that ??

I ∄ well-defined recipe to build secure cyber systems in the large


I permanent trade-off between efficiency and safety/security:
I HW and micro-architectures (sharing is everywhere !)
I operating systems
I programming languages and applications
I coding and software engineering techniques
15 / 21
But, what about software security ?
Software is greatly involved in “computer system security”:
I it plays a major role in enforcing security properties:
crypto, authentication protocols, intrusion detection, firewall, etc.
I but it is also a major source of security problems2 . . .
“90 percent of security incidents result from exploits against defects in software” ( U.S. DHS)

→ SW is clearly one of the weakest links in the security chain!


Why ???
I we do not know very well how to write secure SW
we do not even know how to write correct SW!
I behavioral properties can’t be validated on (large) SW
impossible by hand, intractable with a machine
I programming languages are not designed for security enforcement
most of them contain numerous traps and pitfalls
I programmers do not feel (so much) concerned with security
security does not get enough attention in programming/SE courses
I heterogeneous and nomadic applications favor insecure SW
remote execution, mobile code, plugins, reflection, etc.
2: outside security-related code!
16 / 21
Some evidence regarding software (un)-security (cont’d)

An increasing activity on the “defender side” as well ...

I all the daily security patches (for OS, basic applications, etc.)

I companies and experts specialized in software security


code audit, search for 0days, malware detection & analysis, etc.
“bug bounties” [https://zerodium.com/program.html]

I some important research efforts


from the main software vendors (e.g., Microsoft, Google, etc.)
from the academic community (numerous dedicated conferences)
from independent “ethical hackers” (blogs, etc.)

I software verification tool vendors start addressing security issues


e.g.: dedicated static analyser features

I international cooperation for vulnerability disclosure and classification


e.g.: CERT, CVE/CWE catalogue, vulnerability databases

I government agencies to promote & control SW security


e.g.: ANSSI, Darpa “Grand Challenge”, etc.

17 / 21
Counter-measures and protections (examples)
Several existing mechanisms to enforce SW security

I at the programming level:


I disclosed vulnerabilities → language weakness databases
,→ secure coding patterns and libraries
I aggressive compiler options + code instrumentation
,→ early detection of insecure code

I at the OS level:
I sandboxing
I address space randomization
I non executable memory zones
I etc.

I at the hardware level:


I Trusted Platform Modules (TPM)
I secure crypto-processor
I CPU tracing mechanisms (e.g., Intel Processor Trace)
I etc.

18 / 21
Techniques and tools for assessing SW security
Several existing mechanisms to evaluate SW security

I code review . . .

I fuzzing:
I run the code with “unexpected” inputs → pgm crashes
I (tedious) manual check to find exploitable vulns . . .

I (smart) testing:
coverage-oriented pgm exploration techniques
(genetic algorithms, dynamic symbolic execution, etc.)
+ code instrumentation to detect (low-level) vulnerabilities

I static analysis: approximate the code behavior to detect potential vulns


(∼ code optimization techniques)

In practice:
I only the binary code is always available and useful . . .
I combinations of all these techniques . . .
I exploitability analysis still challenging . . .

19 / 21
Course objectives (for the part 1)

Understand the root causes of common weaknesses in SW security


I at the programming language level
I at the execution platform level
→ helps to better choose (or deal with) a programming language

Learn some methods and techniques to build more secure SW:


I programming techniques:
languages, coding patterns, etc.
I validation techniques:
what can (and cannot) existing tools bring ?
I counter-measures and protection mechanisms

20 / 21
Course agenda (part 1)
See
http://www-verimag.imag.fr/~mounier/Enseignement/Software_Security

Credits:
I E. Poll (Radboud University)
I M. Payer (Purdue University)
I E. Jaeger, O. Levillain and P. Chifflier (ANSSI)

21 / 21
