Secure Software Design Principles

This document discusses principles and objectives for designing secure software systems. It outlines goals for decomposing a system into components, determining relationships and communication between components, and specifying component interfaces. The document emphasizes that integrating security early in the design process is important to avoid insecure designs. It presents principles for secure software design including isolation, least privilege, compartmentalization, separation of duties, and simplicity. Isolation principles like virtual machines and sandboxes are explained in detail. The important principle of least privilege is also described through examples.


Design Principles and Objectives for Secure Software
Software Security
CCCY 322
Goals of Software Design
• Decompose a software system into components
• Identify the software architecture and its components
• Determine relationships among components
  • Identify component dependencies
• Determine internal communication mechanisms among components
  • Global variables, function calls, shared memory, IPC/RPC
• Specify component interfaces
  • Well-defined interfaces facilitate component testing and communication among developers
• Describe component functionality
Secure Software Design
• Integrating security into software development after the design phase often requires architectural changes, not just code changes or small design changes.
• Such changes often result in insecure software.
• Integrating security into software design requires considering the following:
  • What software components need to be created to satisfy the security requirements?
  • How are components integrated securely?
  • How is sensitive data stored and retrieved in the system?
  • How are information flows between components controlled?
Secure Software Design (cont.)

• Secure software design is difficult because:
  • There is no general methodology suitable for designing all kinds of software.
  • A secure design for one kind of software may not be secure for another.
  • Software design relies heavily on developers' knowledge, experience, and intuition.
  • Emerging technologies and new threats require new software design patterns.
Design Principles for Secure Software

• Specific design principles underlie the design and implementation of mechanisms for supporting security policies.
• These principles build on the ideas of simplicity and restriction.
  • Simplicity makes designs and mechanisms easy to understand. Minimizing the interaction of system components also minimizes the number of sanity checks on data being transmitted from one component to another.
  • Simplicity also reduces the potential for inconsistencies within a policy or set of policies.
  • Restriction minimizes the power of an entity: the entity can access only the information it needs, and entities can communicate with other entities only when necessary, and in as few (and as narrow) ways as possible.
Design Principles for Secure Software

• These principles can be applied at many levels: in the source code of an application, between applications on a machine at the OS level, at the network level, within an organization, and between organizations.
Design Principles
• Isolation
• Least Privilege
• Compartmentalization
• Separation of duties
• Component integration
• Open design
• Simplicity of Design
• Abstraction
• Generic design
• Complete Mediation
• Defense-In-depth Design
• Access control pattern and system security levels
• Fail safe default and fail secure
• Least Astonishment (Psychological Acceptability)
• Minimize trust surface (Reluctance to trust)
• Usability
Isolation

• Isolation separates two components from each other and confines their interactions to a well-defined API.
• There are different ways to enforce isolation between components, all of which require some form of abstraction and a security monitor.
• The security monitor runs at higher privilege than the isolated components and ensures that they adhere to the isolation.
• Any violation of the isolation is stopped by the security monitor and, for example, results in the termination of the violating component.
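The monitor-mediated isolation described above can be sketched in a few lines of Java. This is a hypothetical illustration (the component names, the API names, and the `SecurityMonitor` class are invented for the example): every call is checked against a well-defined API, and a component that violates the isolation is terminated.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: a security monitor that confines each component's
// interactions to a well-defined API and stops any violation.
public class SecurityMonitor {
    // Allowed API calls per component (the "well-defined API").
    private final Map<String, Set<String>> allowedCalls;
    private final Set<String> terminated = new HashSet<>();

    public SecurityMonitor(Map<String, Set<String>> allowedCalls) {
        this.allowedCalls = allowedCalls;
    }

    // The monitor mediates every call; a violating component is terminated.
    public boolean invoke(String component, String apiCall) {
        if (terminated.contains(component)) return false;
        Set<String> allowed = allowedCalls.get(component);
        if (allowed == null || !allowed.contains(apiCall)) {
            terminated.add(component);   // stop the violating component
            return false;
        }
        return true;                     // call adheres to the isolation
    }

    public boolean isTerminated(String component) {
        return terminated.contains(component);
    }

    public static void main(String[] args) {
        SecurityMonitor m = new SecurityMonitor(
            Map.of("renderer", Set.of("draw", "readCache")));
        System.out.println(m.invoke("renderer", "draw"));        // allowed
        System.out.println(m.invoke("renderer", "openSocket"));  // violation: terminated
        System.out.println(m.invoke("renderer", "draw"));        // already terminated
    }
}
```

Note that the monitor itself holds the privilege here: components never consult the allowed-call table directly, which mirrors the requirement that the monitor run at higher privilege than the components it isolates.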
Isolation

Systems isolate processes in two ways:
• In the first, the process is presented with an environment that appears to be a computer running only that process or the processes to be isolated (a virtual machine).
  • This type of environment prevents the process from accessing the underlying computer system and any processes or resources that are not part of that environment.
• In the second, an environment is provided in which process actions are analyzed to determine whether they leak information (sandboxing).
  • This type of environment does not emulate a computer; it merely alters the interface between the existing computer and the process(es).
Isolation via Virtual machines

• The first type of environment is called a virtual machine.
• A virtual machine is a program that simulates the hardware of a computer system.
• A virtual machine uses a special operating system called a virtual machine monitor to provide a virtual machine on which conventional operating systems can run.
• The primary advantage of a virtual machine is that existing operating systems do not need to be modified; they run on the virtual machine monitor.
• The virtual machine monitor enforces the desired security policy and functions as a security kernel.
Isolation via Virtual machines

• In terms of policy, the virtual machine monitor deals with subjects (the virtual machines).
• Even if one virtual machine is running hundreds of processes, the virtual machine monitor knows only about the virtual machine.
• Thus, it can apply security checks to its subjects, and those controls apply to the processes that those subjects are running. This satisfies the rule of transitive confinement.
Isolation via Sandboxing
• A sandbox is an environment in which the actions of a process are restricted according to a security policy.
• The sandbox provides a safe environment for programs to execute in.
• If a program "leaves" the sandbox, it may do things that it is not supposed to do; the sandbox restricts the actions of its occupants.
Systems may enforce restrictions in two ways:
• First, the sandbox can limit the execution environment as needed.
  • This is done by adding extra security-checking mechanisms to the libraries or kernel; the program itself is not modified.
  • For example, the Java virtual machine is a sandbox because its security manager limits the access of downloaded programs to system resources as dictated by a security policy.
Isolation via Sandboxing

• The second enforcement method is to modify the program (or process) to be executed.
• Dynamic debuggers and some profilers use this technique by adding breakpoints to the code and, when the trap occurs, analyzing the state of the running process.
• A variant, known as software fault isolation, adds instructions that perform memory access checks or other checks as the program runs, so any attempt to violate the security policy causes an error.
Principle of Least Privilege

• This principle restricts how privileges are granted.
• The principle of least privilege states that a subject should be given only those privileges that it needs in order to complete its task.
• In other words, only the minimum access necessary to perform an operation should be granted, and that access should be granted only for the minimum amount of time necessary.
Principle of Least Privilege

• Least privilege requires isolation to restrict the component's access to other parts of the system.
• If a component follows least privilege, then removing any further privilege from the component removes some functionality, while any functionality that is available can be executed with the given privileges.
• For example, web applications often use a multitier architecture in which the API tier consists of different API servers for different purposes. If a caller only needs the APIs hosted on one server, its access to the other API servers should be denied.
• In an online shopping application, the frontend used by public users shouldn't be allowed to access the account reconciliation API, which should be accessible only from the finance frontend.
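The API-tier example above can be sketched as a simple gateway check. This is a hypothetical illustration (the caller and server names, and the `ApiGateway` class, are invented): each frontend is granted only the API servers it needs, and anything not explicitly granted is denied.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of least privilege at the API tier: each caller is
// granted exactly the API servers it needs; everything else is denied.
public class ApiGateway {
    private static final Map<String, Set<String>> GRANTS = Map.of(
        "public-frontend",  Set.of("catalog-api", "cart-api"),
        "finance-frontend", Set.of("reconciliation-api"));

    public static boolean isAllowed(String caller, String apiServer) {
        // Default deny: a caller with no grants can reach nothing.
        return GRANTS.getOrDefault(caller, Set.of()).contains(apiServer);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("public-frontend", "cart-api"));           // granted
        System.out.println(isAllowed("public-frontend", "reconciliation-api")); // denied
    }
}
```

The grant table is the entire privilege of a caller: the public frontend cannot reach the reconciliation API no matter what requests it issues, because no code path consults anything but the table.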
Principle of Least Privilege

• A good real-world example appears in the security clearance system of the U.S. government: the policy of "need to know."
• Even if you have clearance to see classified documents, you will not be able to see every secret document that you know exists.
• If you could, it would be very easy to abuse the secret clearance level. Instead, people are allowed to access only documents that are relevant to the tasks assigned to them.
Principle of Least Privilege

• Famous violations of the principle of least privilege exist in UNIX systems.
• For example, you need root privileges to run a service on a port number less than 1024.
• So, to run a mail server on port 25 (the SMTP port), a program needs the privileges of the root user.
• However, once the program has set up shop on port 25, there is no need for it to use root privileges again.
• A security-conscious program would give up root privileges and let the operating system know that it should never require those privileges again (at least, not until the next run of the program).
Principle of Least Privilege

Another counterexample:
• One large problem with some e-mail servers is that they don't give up their root permissions once they have grabbed the mail port (Sendmail is a classic example). Therefore, if someone finds a way to trick such a mail server into doing something nefarious, the attack will succeed.
• For example, if a malicious attacker were to find a suitable stack overflow in Sendmail, that overflow could be used to trick the program into running arbitrary code. Because Sendmail runs with root permissions, any valid attempt by the attacker will succeed.
Principle of Least Privilege

In an organization:
• Don't give everyone access to root passwords.
• Don't give everyone administrator rights.
On a computer:
• Run each process with a minimal set of privileges.
• For example, don't run a web application as root or administrator.
Principle of Least Privilege

For a Java application, don't grant the default policy:

    grant codeBase "file:${java.ext.dirs}/*" {
        permission java.security.AllPermission;
    };

but only the minimum required:

    grant codeBase "file:./forum/*" {
        permission java.io.FilePermission "/home/forumcontent/*", "read,write";
    };
Principle of Least Privilege
Expose minimal functionality in the interfaces of objects, classes, packages, and applications.
In code:
• not public int x;
• but private int x;
• not public void m()
• but package-private void m() (no access modifier)
• Least privilege example from a standard coding standard:
  • don't use import java.lang.*;
  • but always import the specific class, e.g., import java.lang.String;
Principle of Least Privilege

• Use secure defaults:
  • By default, security should be switched on and permissions turned off.
• This helps ensure that the principle of least privilege is applied.
• Counterexample: a Bluetooth connection on a mobile phone that is on by default can be abused.
Compartmentalization

• The idea behind compartmentalization is to break a complex system into small components that follow a well-defined communication protocol to request services from each other.
• Break the system up into as many isolated units as possible.
• Benefits: simplicity, and containing the attacker in case of a failure.
Compartmentalization

• Compartmentalization allows the abstraction of a service into small components.
• Under compartmentalization, a system can check permissions and protocol conformity across compartment boundaries.
• This property builds on least privilege and isolation. Both are most effective in combination: many small components that run and interact with least privilege.
Compartmentalization

• A good example of compartmentalization is the Chromium web browser.
• A web browser consists of multiple components that interact with each other, such as a network component, a cache, a rendering engine that parses documents, and a JavaScript compiler.
• Chromium first separates individual tabs into different processes to restrict interaction between them.
• Additionally, the rendering engine runs in a highly restricted sandbox so that any bugs in the parsing process are confined to an unprivileged process.
Compartmentalization

• Counterexample: famous violations of this principle exist in the standard UNIX privilege model.
  • A program with root privilege can do everything (including erase logs).
• A few operating systems, such as Trusted Solaris, do compartmentalize.
  • There is a tradeoff with manageability.
• Counterexample: an OS that crashes if an application crashes.
Compartmentalization

• Use different machines for different tasks.
  • Example: run the web application on a different machine from the employee salary database.
  • Example: use different user accounts on one machine for different tasks.
• The compartmentalization provided by a typical OS is poor!
Separation of duties (SOD)

• NIST SP 800-57 Part 2: SOD is a security principle that divides critical functions among different staff members in an attempt to ensure that no one individual has enough information or access privilege to perpetrate damaging fraud.
• SOD is a basic building block of sustainable risk management and internal controls for a business.
• The principle of SOD is based on shared responsibilities of a key process: it disperses the critical functions of that process to more than one person or department.
• Without this separation in key processes, fraud and error risks are far less manageable.
Separation of duties (SOD)
• Another definition: the principle of separation of duty states that if two or more steps are required to perform a critical function, at least two different people should perform the steps.
• Moving a program from the development system to the production system is an example of a critical function.
• Suppose one of the application programmers made an invalid assumption while developing the program.
• Part of the installation procedure is for the installer to certify that the program works "correctly," that is, as required.
• The error is more likely to be caught if the installer is a different person (or set of people) than the developer.
• Similarly, if the developer wishes to subvert the production data with a corrupt program, the certifier either must not detect the code that does the corruption or must be in league with the developer.
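The "developer must not certify their own program" rule can be expressed as a one-line check. This is a hypothetical sketch (the `ReleaseControl` class and user names are invented): a release system would call such a check before accepting a certification.

```java
// Hypothetical sketch of separation of duties: moving a program to
// production requires two different people, a developer and a certifier.
public class ReleaseControl {
    public static boolean mayCertify(String developer, String certifier) {
        // The certifier must exist and must not be the developer.
        return certifier != null && !certifier.equals(developer);
    }

    public static void main(String[] args) {
        System.out.println(mayCertify("alice", "bob"));   // two people: allowed
        System.out.println(mayCertify("alice", "alice")); // same person: refused
    }
}
```

The check is trivial on purpose: the security value comes from the system refusing to proceed rather than merely recommending a second reviewer.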
Separation of duties

• A key fraud control is separation of duties.
• For example, in an application, certain roles have different levels of trust than normal users. This applies especially to administrators: administrators should not be users of the application. An administrator should be able to turn the system on or off and set the password policy, but shouldn't be able to log on to the storefront as a super-privileged user who can, for example, "buy" goods on behalf of other users.
Open design Principle

• The open design principle states that the security of a system and its algorithms should not depend on the secrecy of its design or implementation.
• This principle suggests that complexity does not add security.
Open design

• This concept rejects "security through obscurity"; this is especially true of cryptographic software and systems.
• Keeping cryptographic keys and passwords secret does not violate this principle, because a key is not an algorithm.
• However, keeping the enciphering and deciphering algorithms secret would violate it.
• Thus, publish your design for anyone to review. This allows reviewers to comment on the security mechanisms being used while the keys or passwords used by those mechanisms remain protected. You should assume that your design is not a secret.
Simplicity of Design

• Designs and implementations should be as simple as possible.
  • Complexity increases the risk of problems.
  • Complex code tends to be harder to analyze and maintain, and it tends to have more bugs.
• Try to reuse components whenever possible.
• Be careful in applying this principle: keep the system simple only on the condition of keeping it secure.
Simplicity of Design

• Another way to improve the simplicity of your software is to funnel all security-critical operations through choke points in your system.
• A choke point is a narrow interface to a system through which you force all traffic to go, so that you can easily control it.
• Avoid spreading security code throughout a system, because it becomes difficult to maintain.
Abstraction

What is abstraction?
• Abstraction is the concept that something complicated can be represented more simply. All models are abstractions, since they reduce the complexity of an object into something that is understandable.
How does abstraction contribute to cybersecurity?
• It removes or reduces clutter that can distract the user or programmer from using a resource correctly.
• It provides only the necessary details, reducing the complexity to a set of essential characteristics.
• Excess complexity may hide malicious behaviors.
Complete Mediation

• This principle requires that all accesses to objects be checked to ensure that they are allowed.
• It also restricts the caching of information, which often leads to simpler implementations of mechanisms.
• Every time a subject tries to access an object, the system should authenticate the privileges associated with that subject.
• What happens in most systems is that those privileges are cached away for later use: the subject's privileges are authenticated once at the initial access, and for subsequent accesses the system assumes that the same privileges are in force for that subject and object. This may or may not be the case.
• The operating system should mediate each and every access to an object.
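Complete mediation can be sketched as a reference monitor that consults the permission table on every access instead of caching the result. This is a hypothetical illustration (the `ReferenceMonitor` class, subject names, and method names are invented): because nothing is cached, a revocation takes effect on the very next access.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of complete mediation: the permission table is
// consulted on *every* access, so revoking a right takes effect
// immediately (unlike a cached file descriptor).
public class ReferenceMonitor {
    private final Map<String, Boolean> canRead = new HashMap<>();

    public void grantRead(String subject)  { canRead.put(subject, true); }
    public void revokeRead(String subject) { canRead.put(subject, false); }

    public String read(String subject, String contents) {
        // No caching: the check happens on each call.
        if (!canRead.getOrDefault(subject, false))
            throw new SecurityException("read denied for " + subject);
        return contents;
    }

    public static void main(String[] args) {
        ReferenceMonitor rm = new ReferenceMonitor();
        rm.grantRead("proc1");
        System.out.println(rm.read("proc1", "data")); // allowed
        rm.revokeRead("proc1");
        try {
            rm.read("proc1", "data");                 // re-checked: denied
        } catch (SecurityException e) {
            System.out.println("denied");
        }
    }
}
```

Contrast this with the UNIX file-descriptor counterexample that follows: a descriptor is exactly a cached check, which is why revocation there is ineffective.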
Complete Mediation

Counterexample:
• When a UNIX process tries to read a file, the operating system determines whether the process is allowed to read the file. If so, the process receives a file descriptor encoding the allowed access.
• Whenever the process wants to read the file, it presents the file descriptor to the kernel, and the kernel allows the access.
• If the owner of the file revokes the process's permission to read the file after the file descriptor is issued, the kernel still allows access.
• This scheme violates the principle of complete mediation, because the second access is not checked: the cached value is used, rendering the denial of access ineffective.
Defense in depth

• Apply multiple layers of security mechanisms.
• Example:
  • Use firewalls to protect a network from malicious user inputs, viruses, and worms.
  • The next layer may be an IDS.
  • The next layer may be access control for a particular asset.
  • The next layer may be a contingency safe mode.
Defense-In-depth Design

• The idea behind defense in depth is to manage risk with multiple defensive strategies, so that if one layer of defense turns out to be inadequate, another layer of defense will, ideally, prevent a full breach.
• This principle is well known, even beyond the security community; for example, it is a famous principle for programming language design:
  • "Defense in Depth: Have a series of defenses so that if an error isn't caught by one, it will probably be caught by another." (From Bruce MacLennan's principles of programming language design)
• Thus, your design should include redundancy and layers of defense.
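The layered structure can be sketched as a chain of independent checks that a request must pass in sequence. This is a hypothetical illustration (the layer names and the `LayeredDefense` class are invented; real layers would be a firewall, an IDS, and an access control list rather than string predicates): if one layer misses an attack, another may still block it.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of defense in depth: a request is admitted only if
// every independent layer approves it; any single layer can block it.
public class LayeredDefense {
    private final List<Predicate<String>> layers;

    public LayeredDefense(List<Predicate<String>> layers) {
        this.layers = layers;
    }

    public boolean admit(String request) {
        // All layers must approve; failure of one layer is enough to deny.
        return layers.stream().allMatch(layer -> layer.test(request));
    }

    public static void main(String[] args) {
        LayeredDefense d = new LayeredDefense(List.of(
            r -> r.length() < 256,              // "firewall": size limit
            r -> !r.contains("<script>"),       // "IDS": signature check
            r -> r.matches("[A-Za-z0-9 ]*")));  // access layer: allowlist
        System.out.println(d.admit("hello world"));              // passes all layers
        System.out.println(d.admit("<script>alert</script>"));   // blocked
    }
}
```

Note the redundancy: the script-tag payload would be blocked by the allowlist layer even if the signature layer were removed, which is the point of the principle.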


Defense-In-depth Design

• Example 1: have a firewall and secure web application software, and run the web application with minimal privileges.
• Example 2: use OS access control to restrict access to sensitive files, and also encrypt them, especially when files are stored on removable media that might be disposed of.
• Counterexample: on UNIX systems, the password file, /etc/passwd, which contains hashed passwords, was world-readable. Solution: enforce tight access control on the file.
• Counterexample: having a firewall, and only a firewall; a user bringing in a laptop circumvents the firewall.
• Counterexample: a firewall combined with unencrypted data within the network.
Access control pattern and System security levels

• An access control model defines who can access what, and in what manner, in a system.
• Some secure design methodologies start from use cases, from which a conceptual model is developed. Security constraints are then defined in the conceptual model. The most important constraints are related to access control.
Access Control Components

• Access controls: the security features that control how users and systems communicate and interact with one another
• Access: the flow of information between subject and object
• Subject: an active entity that requests access to an object or the data in an object
• Object: a passive entity that contains information
Many Access Control Models
• There are many models for access control, mostly variations of a few basic models, and it is confusing for a software developer to select an appropriate model for her application.
• Access control models generally represent a few types of security policies (e.g., "rights are assigned to roles") and provide a formalization of these policies using some ad hoc notation.
• Four basic access control models are commonly used, and they may be extended to include other aspects.
• Access control models can be defined for different architectural levels, including applications, database systems, operating systems, and firewalls.
• Some of them apply to any type of system, while some are specialized, e.g., for distributed systems.
Access Control Policies (models)

Access control policies are generally grouped into the following categories:
• Discretionary access control (DAC): controls access based on the identity of the requestor and on access rules (authorizations) stating what requestors are (or are not) allowed to do.
  • This policy is termed discretionary because an entity might have access rights that permit it to enable another entity to access some resource.
• Mandatory access control (MAC): controls access based on comparing security labels (which indicate how sensitive or critical system resources are) with security clearances (which indicate which system entities are eligible to access certain resources).
  • This policy is termed mandatory because an entity that has clearance to access a resource may not enable another entity to access that resource.
Access Control Policies

• Role-based access control (RBAC): controls access based on the roles that users have within the system and on rules stating what accesses are allowed to users in given roles.
• Attribute-based access control (ABAC): controls access based on attributes of the user, the resource to be accessed, and current environmental conditions.
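The RBAC policy "rights are assigned to roles" can be sketched in a few lines. This is a hypothetical illustration (the role names, user names, and the `Rbac` class are invented): users hold roles, roles hold rights, and an access is allowed only if some role of the user grants it.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical RBAC sketch: rights are assigned to roles, users are
// assigned roles, and access is granted only through a role's rights.
public class Rbac {
    private static final Map<String, Set<String>> ROLE_RIGHTS = Map.of(
        "admin",   Set.of("configure", "setPasswordPolicy"),
        "shopper", Set.of("browse", "buy"));
    private static final Map<String, Set<String>> USER_ROLES = Map.of(
        "alice", Set.of("admin"),
        "bob",   Set.of("shopper"));

    public static boolean check(String user, String action) {
        // Allowed iff at least one of the user's roles grants the action.
        return USER_ROLES.getOrDefault(user, Set.of()).stream()
            .anyMatch(role -> ROLE_RIGHTS.getOrDefault(role, Set.of()).contains(action));
    }

    public static void main(String[] args) {
        System.out.println(check("alice", "configure")); // admin right: allowed
        System.out.println(check("alice", "buy"));       // admins are not shoppers: denied
    }
}
```

Note that this also encodes the separation-of-duties example from earlier: the admin role deliberately has no "buy" right, so an administrator cannot act as a storefront user.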
Access Controls Implementation

• Access controls can be implemented at various layers of an organization, a network, and individual systems.
• Three broad categories:
  • Administrative (e.g., separation of duties, rotation of duties)
  • Physical (e.g., network segregation, physical access)
  • Technical (aka logical; e.g., auditing, network access)
Fail safe default Principle

• This principle states that, unless a subject is given explicit access to an object, it should be denied access to that object.
• It also restricts how privileges are initialized when a subject or object is created.
• This principle requires that the default access to an object be none.
  • Whenever access, a privilege, or some security-related attribute is not explicitly granted, it should be denied.
  • Moreover, if the subject is unable to complete its action or task, it should undo the changes it made to the security state of the system before it terminates.
  • This way, even if the program fails, the system is still safe.
Fail safe default Principle
EXAMPLE:
• If the mail server is unable to create a file in the spool directory, it should close the network
connection, issue an error message, and stop. It should not try to store the message
elsewhere or to expand its privileges to save the message in another location, because an
attacker could use that ability to overwrite other files or fill up other disks (a denial-of-
service attack).
• The protections on the mail spool directory itself should allow create and write access only
to the mail server and read and delete access only to the local server. No other user should
have access to the directory. Most systems will allow an administrator access to the mail
spool directory.
• By the principle of least privilege, that administrator should be able to access only the
subjects and objects involved in mail queueing and delivery. This constraint minimizes the
threats if that administrator’s account is compromised. The mail system can be damaged or
destroyed, but nothing else can be.
Fail secure Principle

• The fail secure principle requires that when systems fail, they should not revert to insecure behavior; otherwise, an attacker only needs to invoke the right failure.
• The best real-world example is credit card authentication.
  • Whenever you make a purchase, the vendor swipes your card through a device that contacts the credit card company.
  • The credit card company checks to see if the card is known to be stolen.
  • The credit card company also analyzes the requested purchase in the context of your recent purchases and compares the patterns to the overall trends of your spending habits.
  • If its engine senses anything suspicious, the transaction is denied.
Fail secure Principle
Counterexample:
• In Remote Method Invocation (RMI), when a client and server want to communicate and the server wants to use SSL or some other encryption protocol, the client may not support the server's protocol.
• When that's the case, the client downloads the proper socket implementation from the server at runtime.
• This constitutes a big security hole, because the server has not been authenticated at the time the encryption interface is downloaded.
• An attacker could pretend to be the server and install his own socket implementation on each client, even when the client already has proper SSL classes installed.
• The problem is that if the client fails to establish a secure connection with the default libraries (a failure), it will establish a connection using whatever protocol an untrusted entity gives it, thereby extending trust.
Fail secure Principle

• If an application fails, do not leave sensitive data accessible.
• Error messages should not expose internal system details.
• Determine what may occur when a system fails, and be sure it does not threaten the system.
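Failing secure, as opposed to the RMI behavior criticized above, can be sketched as a connection routine that refuses to proceed rather than silently downgrading. This is a hypothetical illustration (the `SecureChannel` class and its methods are invented; a real implementation would attempt a TLS handshake): the failure path denies service instead of extending trust.

```java
// Hypothetical sketch of failing secure: if the secure channel cannot be
// established, refuse the connection instead of falling back to an
// insecure protocol supplied by an untrusted party.
public class SecureChannel {
    public static String connect(boolean tlsAvailable) {
        if (!tlsAvailable) {
            // Fail secure: deny service rather than downgrade to plaintext.
            throw new SecurityException("secure channel unavailable; refusing to connect");
        }
        return "tls-connection";
    }

    public static void main(String[] args) {
        System.out.println(connect(true)); // secure channel established
        try {
            connect(false);                // failure path: no insecure fallback
        } catch (SecurityException e) {
            System.out.println("refused: " + e.getMessage());
        }
    }
}
```

The design choice is that the only outcomes are "secure connection" or "no connection"; there is no code path that yields an insecure connection, so an attacker gains nothing by provoking the failure.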
Least Astonishment (Psychological Acceptability)

• This principle states that security mechanisms should not make the resource more difficult to access than if the security mechanisms were not present.
• This principle recognizes the human element in computer security.
• Configuring and executing a program should be as easy as possible, and any output should be clear, direct, and useful.
• If security-related software is too complicated to configure, system administrators may unintentionally set up the software in a nonsecure manner.
• Similarly, security-related user programs must be easy to use and must output understandable messages.
  • If a password is rejected, the password-changing program should state why it was rejected rather than giving a cryptic error message. If a configuration file has an incorrect parameter, the error message should describe the proper parameter.
Least Astonishment (Psychological Acceptability)

EXAMPLE:
• The ssh program allows a user to set up a public key mechanism for enciphering communications between systems.
• The installation and configuration mechanisms for the UNIX version allow one to arrange for the public key to be stored locally without any password protection.
  • In this case, one need not supply a password to connect to the remote system but will still obtain the enciphered connection.
• This mechanism satisfies the principle of psychological acceptability.
Least Astonishment (Psychological Acceptability)

• In practice, this principle is interpreted to mean that the security mechanism may add some extra burden, but that burden must be minimal and reasonable.
• EXAMPLE:
  • A mainframe system allows users to place passwords on files; accessing the files requires that the program supply the password.
  • Although this mechanism violates the principle as stated, it is considered sufficiently minimal to be acceptable.
  • On an interactive system, where the pattern of file accesses is more frequent and more transient, this requirement would be too great a burden to be acceptable.
Minimize trust surface (Reluctance to trust)

• Minimize the Trusted Computing Base (TCB), i.e., the part of the system that has to be trusted.
• User input should not be trusted; it should be subjected to strong input validation checks before being used.
  • All user input is evil!
  • Unchecked user input leads to buffer overflows, SQL injection, and XSS on websites.
  • User input includes cookies, environment variables, etc.
• Trust is transitive: once you give it to an entity, you implicitly extend it to anyone that entity may trust.
  • Make sure that trusted programs never invoke untrusted programs.
Minimize trust surface (Reluctance to trust)

• Be skeptical of security protections that are not within your software system.
• The quote "trust, but verify" is a good motto to follow when placing your software within an operating environment.
• A cloud provider may claim to provide certain protections, but it is best to verify these as best you can.
Minimize trust surface (Reluctance to trust)

• For example, using off-the-shelf software can keep your designs and implementations simple, but:
  • How do you know you can trust an off-the-shelf component to be secure?
  • Do you really think its developers were security experts?
• Another place where trust is generally extended far too easily is customer support.
  • Social engineering attacks are easy to launch against unsuspecting customer support agents, who have a proclivity (tendency) to trust, because it makes their jobs easier.
Usability

• Usability is also an important design goal.
• For a software product to be usable, its users should be able to accomplish the tasks that the software is meant to assist them in carrying out.
• The way to achieve usable software is not to build a software product first and then bring in an interaction designer or usability engineer to recommend tweaks to the user interface.
• Instead, to design usable software products, interaction designers and usability engineers should be brought in at the start of the project to architect the information and task flow to be intuitive to the user.
Usability
There are a few items to keep in mind regarding the interaction between
usability and security:
• Do not rely on documentation. The first item to keep in mind is that users
generally will not read the documentation or user manual. If you build
security features into the software product and turn them off by default,
you can be sure that they will not be turned on, even if you tell users how
and why to do so in the documentation.
Usability
Secure by default
• Unlike many other product features that are off by default, security features should be
turned on by default, or else they will rarely be enabled at all.
• The challenge is to design security features that are easy to use and provide security advantages
without being so inconvenient that users shut them off or work around them.
• For instance, requiring a user to choose a relatively strong but usable password when they first power
up a computer, and enter it at the time of each login might be reasonable.
• However, requiring a user to conduct a two-factor authentication every time that the screen locks will
probably result in a feature being disabled.
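The secure-by-default idea can be sketched as a settings object whose protections are on unless a deployer explicitly opts out (the field names and values below are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecuritySettings:
    require_login_password: bool = True      # on by default
    tls_only: bool = True                    # refuse plaintext by default
    session_timeout_minutes: int = 15        # conservative default
    two_factor_on_screen_lock: bool = False  # too intrusive; users would disable it

# Doing nothing gives you the secure configuration.
settings = SecuritySettings()
```

Note the last field: a protection so burdensome that users will disable it is deliberately not forced on, matching the screen-lock example above.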
Usability
Remember that users will often ignore security if given the choice.
• If you build a security prompt into a software product, such as a dialog box
that pops up in front of the users saying, “The action that you are about to
conduct may be insecure. Would you like to do it anyway?” a user will most
likely ignore it and click “Yes.”
• Therefore, you should employ secure-by-default features that do not allow
the user to commit insecure actions.
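One way to implement this is to have the API refuse the insecure path outright instead of prompting. A hypothetical sketch (the function and exception names are assumptions):

```python
class InsecureActionError(Exception):
    """Raised instead of asking the user 'Do it anyway?'"""

def send_document(url: str, body: bytes) -> None:
    if not url.startswith("https://"):
        # No dialog box: the insecure action simply is not offered.
        raise InsecureActionError("refusing to send over plaintext HTTP")
    # ... perform the HTTPS upload here (elided) ...
```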
Secure the weakest link
• Security is a chain: a system is only as secure as its weakest link.
• One consequence is that the weakest parts of your system are the parts
most susceptible to attack.
• Identifying what you perceive to be the weakest components of a system
should be easy if you perform a good risk analysis. You should address what
seems to be the most serious risk first, instead of the risk that seems easiest
to mitigate.
Secure the weakest link
• Attackers usually attack a weak spot in a software system rather than
penetrate a heavily fortified component.
• Example: Cryptographic algorithms can take a long time to break, so attackers
are not likely to attack encrypted information communicated over a network.
Instead, the endpoints of communication (e.g., servers) may be much easier to
attack.
• Social engineering, a common weak link: sometimes it is not the software that
is the weakest link in your system, but the surrounding infrastructure.
Secure the weakest link
• The weakest link is the part of a system that is the most vulnerable,
susceptible, or easiest to attack.
• Some weak links that may exist in systems are:
• Weak Passwords
• People
Principle of Least Common Mechanism
• This principle is restrictive because it limits sharing.
• The principle states that mechanisms used to access resources should not be shared. Sharing
resources provides a channel along which information can be transmitted, and so such sharing
should be minimized.
• Put simply: minimize the number of resources being shared or used by more than one user or
system.
• Do not share objects and protection mechanisms, instead create separate instances for each user
or system interface.
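A small sketch of "separate instances for each user": every user gets a private, owner-only scratch directory instead of one shared area that could act as a covert channel (the layout and names are illustrative):

```python
import tempfile
from pathlib import Path

def private_workdir(base: Path, user: str) -> Path:
    """Create a per-user directory; mode 0o700 means owner-only access,
    so users cannot observe or signal each other through it."""
    d = base / f"user-{user}"
    d.mkdir(mode=0o700, exist_ok=True)
    return d

base = Path(tempfile.mkdtemp())
alice = private_workdir(base, "alice")
bob = private_workdir(base, "bob")
```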
Principle of Least Common Mechanism
• EXAMPLE: A Web site provides electronic commerce services for a major company.
• Attackers want to deprive the company of the revenue it obtains from that Web site.
• They flood the site with messages and tie up the electronic commerce services.
• Legitimate customers are unable to access the Web site and, as a result, take their
business elsewhere.
• Here, the sharing of the Internet with the attackers’ sites caused the attack to succeed.
• The appropriate countermeasure would be to restrict the attackers’ access to the
segment of the Internet connected to the Web site.
• Techniques for doing this include proxy servers such as the Purdue SYN intermediary or
traffic throttling. The former targets suspect connections; the latter reduces the load
on the relevant segment of the network indiscriminately.
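Traffic throttling is commonly implemented as a token bucket: requests spend tokens, tokens refill at a fixed rate, and traffic beyond the rate is dropped or delayed. A minimal illustrative sketch (rates and the class name are assumptions):

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False              # throttle: drop or delay this request
```

As the slide notes, this limits load indiscriminately; a SYN-intermediary proxy instead singles out suspect connections.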
Secure Defaults Principles
• Are user accounts disabled by default, and must they be explicitly
enabled when required?
• If an account is not used for a specified period, remove or disable it
• Disable or remove unused services, protocols and functionalities.
• Does user’s server need all the current services and ports?
• Does user’s application need all the enabled features?
Promote privacy Principles
• Protect customers’ privacy
• Comply with the privacy regulations of the Fair Information
Practice Principles (FIPPs) of the Federal Trade Commission.
https://en.wikipedia.org/wiki/FTC_Fair_Information_Practice
• Create a formal privacy statement
• Request the users’ consent before collecting their personal
information
• Do not collect unnecessary information
Secure SW Design Principles (cont.)
• Reuse software components as much as possible
• Funnel security-critical operations through a few chokepoints
• Avoid hidden interfaces and backdoors
• Avoid making frequent changes
• Avoid making the design too detailed
• Avoid ambiguous design
• Avoid inconsistent design
• Do not rely on users making correct security decisions
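The "chokepoint" bullet above can be sketched as one audited authorization function that every caller must go through, rather than scattering permission checks across the codebase (the names and permission table are hypothetical):

```python
import logging

log = logging.getLogger("audit")

# Hypothetical permission table; in practice this would come from policy.
_ALLOWED = {("alice", "read"), ("alice", "write"), ("bob", "read")}

def authorize(user: str, action: str) -> bool:
    """Single chokepoint: all security-critical decisions and their
    audit logging happen here, never at individual call sites."""
    ok = (user, action) in _ALLOWED
    log.info("authz user=%s action=%s allowed=%s", user, action, ok)
    return ok
```

Funneling checks through one place makes the logic easy to review and keeps the audit trail complete, which also discourages hidden interfaces that bypass it.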
References
Books:
• Software Security: Principles, Policies, and Protection
• Introduction to Computer Security
• Foundations of Security: What Every Programmer Needs to Know
• Computer Security: Principles and Practice, Chapter 4 (Access Control)
Articles:
• Software security principles: Part 1
• Design Principles