CH 4: Security Policies: Week 2

This document discusses security policies, including what they cover, examples of different types of policies, and key concepts like mechanisms and trust. It defines security policies as partitioning a system's possible states into authorized and unauthorized. Mechanisms enforce parts of the security policy. The document provides examples of confidentiality, integrity, and availability policies and discusses models for representing policies.


Ch 4: Security Policies

Week 2

Chapter 4: Security Policies


Overview
The nature of policies
What they cover
Policy languages

The nature of mechanisms


Types

Underlying both
Trust

Security Policy
Policy partitions system states into:
Authorized (secure)
These are states the system can enter

Unauthorized (nonsecure)
If the system enters any of these states, it's a security
violation

Secure system
Starts in authorized state
Never enters unauthorized state

Security Policy - Example


The security policy partitions states into authorized
states A={s1, s2} and unauthorized states UA={s3, s4}
This system is NOT secure, because regardless of which
authorized state it starts in, it can enter an
unauthorized state.
If we remove the edge s1 → s3, the system becomes
secure.
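The example above can be sketched in code: model the transitions as a graph and check whether any unauthorized state is reachable from the start state. This is a minimal illustration, not from the text; the state names and edges follow the example.

```python
# Minimal sketch of the example: states s1..s4, authorized A = {s1, s2},
# and a transition relation that includes the insecure edge s1 -> s3.

def is_secure(transitions, authorized, start):
    """Return True if no unauthorized state is reachable from start."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if s not in authorized:
            return False  # reached an unauthorized state
        stack.extend(transitions.get(s, []))
    return True

A = {"s1", "s2"}
T = {"s1": ["s2", "s3"], "s2": ["s1"], "s3": ["s4"], "s4": []}
print(is_secure(T, A, "s1"))  # False: s1 -> s3 leads outside A

# Removing the edge s1 -> s3 makes the system secure:
T_fixed = {"s1": ["s2"], "s2": ["s1"], "s3": ["s4"], "s4": []}
print(is_secure(T_fixed, A, "s1"))  # True
```

Note that "secure" here is a property of the whole reachable state set, matching the definition above: a single reachable unauthorized state makes the system insecure.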

Security Policy
Recall the three basic properties relevant to security:
Confidentiality
Integrity
Availability
CIA properties

Confidentiality
X set of entities, I information
I has the confidentiality property w.r.t. X if no x ∈
X can obtain information from I
I can be disclosed to others (entities not in X)
Example:
X : set of students
I : final exam answer key
I is confidential w.r.t. X if students cannot obtain
final exam answer key

Integrity
X set of entities, I information
I has the integrity property w.r.t. X if all x ∈ X
trust the information in I
Types of integrity:
trust I, its conveyance and storage (data integrity)
I is the information about origin of something or
an identity (origin integrity, authentication)
I is a resource: The resource functions as it should
(assurance)

Availability
X set of entities, I resource
I has the availability property w.r.t. X if all x ∈ X can
access I.
The access depends on the needs of the members of
X, the nature of the resource, and the use to which
the resource is put.

Types of availability:
traditional: x gets access or not
quality of service: e.g. a channel promised a level of
access (for example, a specific level of bandwidth) but
does not meet it, even though some access is achieved

Putting it all together ..


A security policy considers all relevant aspects
of CIA:
Confidentiality policy:
The security policy identifies those states in which
information leaks to those not authorized to receive
it, and, beyond outright leakage, illicit transmission
of information (called information flow).
The policy must also have a temporal element, e.g.
a contractor should only have access to sensitive
data during the lifetime of a nondisclosure
agreement (NDA).

Putting it all together ..


Integrity policy:
A security policy identifies authorized ways in
which information may be altered by authorized
entities.
Authorization may derive from a variety of
relationships, including external influences, e.g.
separation of duties principle forbids an entity
from completing a transaction on its own.

Putting it all together ..


W.r.t. availability:
The security policy describes what services must
be provided
It may present the parameters within which the
services will be accessible, e.g. a browser may
download web pages, but not Java applets.
It may require a level of service, e.g. a server will
provide authentication data within 1 min. of the
request.

Policy Models
Abstract description of a policy or class of
policies
Focus on points of interest in policies
Security levels in multilevel security models
Separation of duty in Clark-Wilson model
Conflict of interest in Chinese Wall model

Types of Security Policies


Military (governmental) security policy
Policy primarily protecting confidentiality
Ex: Revealing the date a ship will sail could be catastrophic

Commercial security policy


Policy primarily protecting integrity
Ex: If the integrity of customer accounts for a bank is
compromised, the bank may lose its business.

The role of trust highlights the difference:
Confidentiality policy
Policy protecting only confidentiality
Places no trust in objects

Integrity policy
Policy protecting only integrity
Indicates how much the object can be trusted
Given that the level of trust is correct, the integrity
policy dictates what a subject can do with that object
Usually assigns a certain level of integrity (low, high, or
in between)

More on Security Policies


Confidentiality policies and military policies deal with
confidentiality
A confidentiality policy DOES NOT deal with integrity at
all
A military policy may

A similar distinction holds for integrity policies and
commercial policies

Integrity and Transactions


Similar to processing database transactions:
Begin in consistent state
Consistent defined by specification

Perform series of actions (transaction)


Actions cannot be interrupted
If actions complete, system in consistent state
If actions do not complete, system reverts to beginning
(consistent) state
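The all-or-nothing semantics above can be sketched as code. This is an illustrative sketch (the account names and helper functions are made up, not from the text): actions are applied to a copy of the state, and if any action fails, the original consistent state is returned unchanged.

```python
# Sketch of transaction semantics: begin in a consistent state,
# apply all actions or none.

def run_transaction(state, actions):
    """Apply every action to a copy of state; commit the copy only
    if all actions complete, otherwise revert to the original."""
    working = dict(state)          # work on a copy
    try:
        for action in actions:
            action(working)        # each action mutates the copy
    except Exception:
        return state               # revert: transaction did not complete
    return working                 # commit: system in new consistent state

def debit(acct, amount):
    def action(s):
        if s[acct] < amount:
            raise ValueError("insufficient funds")
        s[acct] -= amount
    return action

def credit(acct, amount):
    def action(s):
        s[acct] += amount
    return action

bank = {"alice": 100, "bob": 0}
bank = run_transaction(bank, [debit("alice", 30), credit("bob", 30)])
print(bank)  # {'alice': 70, 'bob': 30}

# A failing transfer leaves the state exactly as it began:
bank = run_transaction(bank, [debit("alice", 1000), credit("bob", 1000)])
print(bank)  # {'alice': 70, 'bob': 30}
```

The key point mirrors the slide: consistency is defined by the specification (here, money is neither created nor destroyed), and an interrupted transaction must not leave the system between states.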

Trust - Example
Administrator installs patch
1. Trusts patch came from vendor, not tampered with
in transit
2. Trusts vendor tested patch thoroughly
3. Trusts vendor's test environment corresponds to
local environment
4. Trusts patch is installed correctly
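Assumption 1 above (the patch came from the vendor and was not tampered with in transit) is commonly backed by comparing the download against a vendor-published digest. A minimal sketch, with made-up patch contents for illustration:

```python
# Sketch of checking trust assumption 1: compare the downloaded patch
# against the SHA-256 digest the vendor publishes. The bytes here are
# illustrative stand-ins for a real patch file.

import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

patch = b"patch contents from the vendor"
published_digest = sha256_of(patch)   # what the vendor would publish

# Untampered patch: digests match.
print(sha256_of(patch) == published_digest)        # True

# Tampered in transit: digests differ, so the assumption fails.
tampered = patch + b" plus malicious bytes"
print(sha256_of(tampered) == published_digest)     # False
```

Note the regress the slide is pointing at: a digest only shifts the trust, since one must still trust that the digest itself (or a signature over it) really came from the vendor.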

Trust in Formal Verification


Gives formal mathematical proof that given input i,
program P produces output o as specified
Suppose a security-related program S is formally
verified to work with operating system O

Trust in Formal Methods


1. Proof has no errors
Bugs in automated theorem provers
2. Preconditions hold in environment in which S is to
be used
3. S transformed into executable S' whose actions
follow source code
Compiler bugs, linker/loader/library problems
4. Hardware executes S' as intended
Hardware bugs (Pentium f00f bug, for example)

Types of Access Control


Discretionary Access Control - Identity-Based Access
Control (DAC, IBAC)
individual user sets access control mechanism to allow or
deny access to an object

Mandatory Access Control (MAC)


system mechanism controls access to object, and individual
cannot alter that access

Originator Controlled Access Control (ORCON)


originator (creator) of information controls who can access
information

Discretionary Access Control (DAC) Example


In UNIX, users may change the permissions on
directories and/or files they own, making this a
discretionary policy.
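On a Unix-like system, this discretionary choice is just the owner flipping permission bits. A self-contained sketch (using a temporary file so it touches nothing real):

```python
# Sketch of UNIX DAC: the file's owner changes its permission bits
# at their own discretion.

import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o600)                       # owner read/write only
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                            # 0o600

os.chmod(path, 0o644)                       # owner grants world read
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                            # 0o644

os.remove(path)
```

The contrast with MAC is that here nothing stops the owner from granting (or revoking) access; under MAC, the system's rules would override the owner's wishes.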

Mandatory Access Control (MAC) Example


The law allows a court to access a person's social
security number. This is a mandatory control as the
owner of the SSN has no control over the court's
access to this information.

Originator Controlled Access Control


(ORCON) - Example
A company A contracts with another company B. The
company A gives a copy of the specifications of a
new product that they are working on to company B.
Company B needs to obtain permission from
company A before it gives any information about the
new product to its subcontractors.
This is an originator controlled access mechanism
because even though company B possesses the
specifications, it may not allow anyone to access that
information unless the creator (company A) gives
permission.

Example (1/3)
Policy disallows cheating
Includes copying homework, with or without permission

CS class has students do homework on computer


Anne forgets to read-protect her homework file
Bill copies it
Who cheated?
Anne, Bill, or both?

Example (2/3)
Bill cheated
Policy forbids copying homework assignment
Bill did it
System entered unauthorized state (Bill having a copy of
Anne's assignment)

If not explicit in computer security policy, certainly


implicit
Not credible that a unit of the university allows something
that the university as a whole forbids, unless the unit
explicitly says so

Example (3/3)
Anne didn't protect her homework
Not required by security policy

She didn't breach security


If the policy had said students must read-protect
homework files, then Anne would have breached
security, since she didn't do this

Mechanisms
Entity or procedure that enforces some part of the
security policy
Access controls (like bits to prevent someone from
reading a homework file)
Disallowing people from bringing CDs and floppy disks
into a computer facility to control what is placed on
systems

Example English Policy


Computer security policy for academic institution
Institution has multiple campuses, administered from
central office
Each campus has its own administration, and unique
aspects and needs

Authorized Use Policy


Electronic Mail Policy

Authorized Use Policy


Intended for one campus (e.g. Dallas) only
Goals of campus computing
Underlying intent

Procedural enforcement mechanisms


Warnings
Denial of computer access
Disciplinary action up to and including expulsion

Written informally, aimed at user community

Electronic Mail Policy


Systemwide, not just one campus
Three parts
Summary
Full policy
Interpretation at the campus

Summary
Warns that electronic mail not private
Can be read during normal system administration
Can be forged, altered, and forwarded

Unusual because the policy alerts users to the threats


Usually, policies say how to prevent problems, but do
not define the threats

Summary
What users should and should not do
Think before you send
Be courteous, respectful of others
Dont interfere with others use of email

Personal use okay, provided overhead minimal


Who it applies to
Problem is UT is quasi-governmental, so is bound by rules
that private companies may not be
Educational mission also affects application

Full Policy
Context
Does not apply to Dept. of Psychology labs run by the
university
Does not apply to printed copies of email
Other policies apply here

E-mail, infrastructure are university property


Principles of academic freedom, freedom of speech apply
Access without a user's permission requires approval of vice
chancellor of campus or vice president of UT
If infeasible, must get permission retroactively

Uses of E-mail
Anonymity allowed
Exception: if it violates laws or other policies

Cant interfere with others use of e-mail


No spam, letter bombs, e-mailed worms, etc.

Personal e-mail allowed within limits


Cannot interfere with university business
Such e-mail may be a university record subject to
disclosure

Security of E-mail
University can read e-mail
Wont go out of its way to do so
Allowed for legitimate business purposes
Allowed to keep e-mail robust, reliable

Archiving and retention allowed


May be able to recover e-mail from end system
(backed up, for example)

Implementation
Adds campus-specific requirements and procedures
Example: incidental personal use not allowed if it benefits
a non-university organization
Allows implementation to take into account differences
between campuses, such as self-governance by Academic
Senate

Procedures for inspecting, monitoring, disclosing email contents


Backups

Key Points
Policies describe what is allowed
Mechanisms control how policies are enforced
Trust underlies everything

Security and Precision


Definition 4-16. Let p be a function p: I1 × ... ×
In → R. Then P is a program with n inputs ik ∈ Ik, 1
≤ k ≤ n, and one output r ∈ R.
Axiom 4-1. (The observability postulate.) The
output of a function p(i1, ..., in) encodes all available
information about i1, ..., in.

Security and Precision


Example: Consider a program that asks for a username and
password. If the username is illegal, or is not associated
with the password, the program prints "Bad". If the
username has the given password, it prints "Good". The
inputs are username, password, and the database of
associations. The output is in the set {"Bad", "Good"}.
If the username is illegal, the program immediately prints
"Bad" without accessing the password database. But if the
username is valid, the program must access the database,
which takes some time. If the program immediately
prints "Bad", the observer concludes that the user is
unknown. If a delay occurs before the program prints
"Bad", the observer concludes that the user is known but
the password is incorrect.
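The leak in this example, and one common repair, can be sketched in code. This is an illustrative sketch (the usernames, the stand-in delay, and the fix are not from the text): the leaky version returns early for unknown users, while the uniform version does the same work on every path, so the observable behavior no longer distinguishes "unknown user" from "wrong password".

```python
# Sketch of the timing leak described above and a uniform-work fix.

import hashlib
import hmac
import time

DB = {"alice": hashlib.sha256(b"secret").digest()}

def check_leaky(user, password):
    if user not in DB:
        return "Bad"                    # fast path: skips the lookup
    time.sleep(0.01)                    # stand-in for slow database access
    h = hashlib.sha256(password.encode()).digest()
    return "Good" if hmac.compare_digest(h, DB[user]) else "Bad"

def check_uniform(user, password):
    # Do the same work whether or not the user exists.
    stored = DB.get(user, hashlib.sha256(b"").digest())
    time.sleep(0.01)                    # same delay on every path
    h = hashlib.sha256(password.encode()).digest()
    ok = hmac.compare_digest(h, stored) and user in DB
    return "Good" if ok else "Bad"

print(check_leaky("mallory", "x"))       # "Bad", returned quickly
print(check_uniform("mallory", "x"))     # "Bad", after the same delay
print(check_uniform("alice", "secret"))  # "Good"
```

This illustrates the precision/security tension: the uniform version gives up a little efficiency (it always pays the delay) to avoid encoding extra information about the inputs in its observable behavior.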
