Dependability & Security

Introduction
 The term ‘dependability’ was proposed by Laprie (1995) to cover the related system attributes of availability, reliability, safety, and security.
 The dependability of a computer system is a property of the system
that reflects its trustworthiness.
 Trustworthiness here essentially means the degree of confidence a
user has that the system will operate as they expect, and that the
system will not ‘fail’ in normal use.
 It is not meaningful to express dependability numerically.
 Rather, we use relative terms such as ‘not dependable,’ ‘very
dependable,’ and ‘ultra-dependable’ to reflect the degrees of trust
that we might have in a system.
The importance of dependability
 The dependability of systems is now usually more
important than their detailed functionality for the
following reasons:
1. System failures affect a large number of people.
2. Users often reject systems that are unreliable, unsafe, or insecure.
3. System failure costs may be enormous.
4. Undependable systems may cause information loss.
Considerations for dependability
 When designing a dependable system, you have to consider the following:
1. Hardware failure: System hardware may fail because of mistakes in its design.
2. Software failure: System software may fail because of mistakes in its specification, design, or implementation.
3. Operational failure: Human users may fail to use or operate the system correctly.
The dependability attributes
 There are four principal dimensions to dependability:
1. Availability: The availability of a system is the probability that it will be up and running and able to deliver useful services to users at any given time.
2. Reliability: The reliability of a system is the probability, over a given period of time, that the system will correctly deliver services as expected by the user.
3. Safety: The safety of a system is a judgment of how likely it is that the system will cause damage to people or its environment.
4. Security: The security of a system is a judgment of how likely it is that the system can resist accidental or deliberate intrusions.
 These dependability properties are not all applicable to all
systems.
The dependability attributes
 As well as these four main dependability properties, you may also
think of other system properties as dependability properties:
1. Repairability: If the system fails, it must be possible to diagnose the problem, access the component that has failed, and make changes to fix that component.
2. Maintainability: As systems are used, new requirements emerge, and it is important to maintain the usefulness of a system by changing it to accommodate these new requirements.
3. Survivability: The ability of a system to continue to deliver service whilst under attack and, potentially, whilst part of the system is disabled.
4. Error tolerance: The extent to which the system has been designed so that user input errors are avoided and tolerated.
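 As a small illustration of error tolerance, the hypothetical Python sketch below validates user input and rejects bad values instead of letting them corrupt system state. The function name, dose limit, and example inputs are assumptions made for illustration, not part of the original material.

```python
from typing import Optional

def read_dose(raw: str, max_dose: float = 10.0) -> Optional[float]:
    """Parse a dose typed by a user; return None if the input is invalid."""
    try:
        dose = float(raw)
    except ValueError:
        return None                    # non-numeric input is tolerated, not fatal
    if not (0.0 < dose <= max_dose):   # out-of-range values are rejected too
        return None
    return dose

# Usage: invalid entries are reported and re-requested rather than causing a failure.
for attempt in ["abc", "-3", "2.5"]:
    result = read_dose(attempt)
    print(attempt, "->", "rejected" if result is None else f"accepted ({result})")
```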
Developing dependable software
 To develop dependable software, you need to ensure that:
1. You avoid the introduction of accidental errors into the
system during software specification and development.
2. You design verification and validation processes that are
effective in discovering residual errors that affect the
dependability of the system.
3. You design protection mechanisms that guard against
external attacks that can compromise the availability or
security of the system.
4. You configure the deployed system and its supporting
software correctly for its operating environment.
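 To make point 2 concrete, here is a minimal, hypothetical sketch of a verification check: a simple test that exercises a function against known expected results so that residual errors are found before the system is used. The function and test values are illustrative assumptions only.

```python
def celsius_to_fahrenheit(c: float) -> float:
    """A small conversion routine standing in for real system functionality."""
    return c * 9.0 / 5.0 + 32.0

def test_celsius_to_fahrenheit() -> None:
    # Known reference points act as the expected results during verification.
    assert celsius_to_fahrenheit(0.0) == 32.0
    assert celsius_to_fahrenheit(100.0) == 212.0
    assert celsius_to_fahrenheit(-40.0) == -40.0   # -40 is the same on both scales

if __name__ == "__main__":
    test_celsius_to_fahrenheit()
    print("all verification checks passed")
```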
Availability and reliability
 System availability and reliability are closely related properties
that can both be expressed as numerical probabilities.
 The availability of a system is the probability that the system will be up and running to deliver its services to users on request.
 If, on average, 2 inputs in every 1,000 cause failures, then the
reliability, expressed as a rate of occurrence of failure, is
0.002.
 If the availability is 0.999, this means that, over some time
period, the system is available for 99.9% of that time.
Availability and reliability
 System reliability and availability may be defined more
precisely as follows:
– Reliability: The probability of failure-free operation over a specified time, in a given environment, for a specific purpose.
– Availability: The probability that a system, at a point in time, will be operational and able to deliver the requested services.
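 Both measures can be computed directly from observational data. The short Python sketch below reproduces the figures quoted above (2 failing inputs per 1,000 gives a rate of occurrence of failure of 0.002; 99.9% uptime gives an availability of 0.999). The uptime and downtime numbers are illustrative assumptions.

```python
# Reliability expressed as a rate of occurrence of failure (ROCOF):
failing_inputs = 2            # inputs that caused a system failure
total_inputs = 1_000          # inputs processed in the observation period
rocof = failing_inputs / total_inputs
print(f"ROCOF = {rocof}")     # 0.002, i.e. 2 failures per 1,000 demands

# Availability as the fraction of time the system could deliver service:
uptime_hours = 999.0          # time the system was operational
downtime_hours = 1.0          # time it was not
availability = uptime_hours / (uptime_hours + downtime_hours)
print(f"Availability = {availability:.3f}")   # 0.999, i.e. available 99.9% of the time
```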
Reliability terminology
System faults do not always result in system errors
 System faults do not always result in system errors and system
errors do not necessarily result in system failures. The reasons
for this are as follows:
1. Not all code in a program is executed.
2. Errors are transient.
3. The system may include fault detection and protection mechanisms.
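 Point 3 can be illustrated with a small, hypothetical protection wrapper: an out-of-range value produced by faulty code is detected and replaced with a safe default, so the error never propagates into a visible failure. All names and limits here are assumptions for illustration.

```python
SAFE_DEFAULT_RPM = 1_000
MAX_RPM = 6_000

def compute_target_rpm(demand: float) -> int:
    # Imagine this calculation contains a latent fault for some inputs.
    return int(demand * 100)

def protected_target_rpm(demand: float) -> int:
    rpm = compute_target_rpm(demand)
    if not (0 <= rpm <= MAX_RPM):      # error detected...
        return SAFE_DEFAULT_RPM        # ...and masked before it causes a failure
    return rpm

print(protected_target_rpm(12.5))    # 1250 - normal operation
print(protected_target_rpm(250.0))   # 1000 - erroneous value caught and replaced
```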
Software usage patterns
 Because each user of a system uses it in different ways, they
have different perceptions of its reliability.
 Faults that affect the reliability of the system for one user
may never be revealed under someone else’s mode of
working.
Approaches for improving system reliability
 Three complementary approaches are used to improve the reliability of a system:
1. Fault avoidance: Development techniques are used that either minimize the possibility of human errors or trap mistakes before they result in the introduction of system faults.
2. Fault detection and removal: The use of verification and validation techniques increases the chances that faults will be detected and removed before the system is used.
3. Fault tolerance: These are techniques that ensure that faults in a system do not result in system errors, or that system errors do not result in system failures. The incorporation of self-checking facilities in a system and the use of redundant system modules are examples of fault tolerance techniques.
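 The redundancy idea in point 3 can be sketched as a simple majority-vote arrangement, a software analogue of triple modular redundancy. The three "versions" below are hypothetical stand-ins, with one deliberately faulty case to show the vote masking it.

```python
from collections import Counter

def version_a(x: int) -> int:
    return x * x

def version_b(x: int) -> int:
    return x ** 2

def version_c(x: int) -> int:
    # Deliberately faulty for x == 3, to demonstrate fault masking.
    return x * x + 1 if x == 3 else x * x

def voted_result(x: int) -> int:
    """Run all three redundant versions and return the majority result."""
    results = [version_a(x), version_b(x), version_c(x)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: redundant versions disagree")
    return value

print(voted_result(3))   # 9 - version_c's faulty result is outvoted and masked
```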
Safety
 Safety-critical systems are systems where it is essential that
system operation is always safe; that is, the system should
never damage people or the system’s environment even if the
system fails.
 Examples of safety-critical systems include control and
monitoring systems in aircraft, process control systems in
chemical and pharmaceutical plants, and automobile control
systems.
 Hardware control of safety-critical systems is simpler to
implement and analyze than software control.
Safety Classes
Safety-critical software falls into two classes:
1. Primary safety-critical software: This is software that is embedded as a controller in a system.
 Malfunctioning of such software can cause a hardware malfunction, which results in human injury or environmental damage.
2. Secondary safety-critical software: This is software that can indirectly result in an injury.
 An example of such software is a computer-aided engineering
design system whose malfunctioning might result in a design fault
in the object being designed.
Safety terminology
Assuring safety
 The key to assuring safety is to ensure either that accidents do
not occur or that the consequences of an accident are
minimal. This can be achieved in three complementary ways:
1. Hazard avoidance: The system is designed so that
hazards are avoided.
2. Hazard detection and removal: The system is designed so that hazards are detected and removed before they result in an accident.
3. Damage limitation: The system may include protection features that minimize the damage that may result from an accident.
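 A minimal, hypothetical sketch of points 2 and 3: a software monitor detects a hazardous condition (an over-temperature reading) and drives the equipment into a predefined safe state, limiting the damage an accident could cause. The class, threshold, and readings are assumptions for illustration.

```python
HAZARD_TEMP_C = 120.0            # above this, the state is considered hazardous

class Heater:
    def __init__(self) -> None:
        self.on = True

    def shut_down(self) -> None:
        self.on = False          # the safe state: heating element switched off

def safety_monitor(temperature_c: float, heater: Heater) -> None:
    if temperature_c >= HAZARD_TEMP_C:   # hazard detected...
        heater.shut_down()               # ...move to the safe state to limit damage

heater = Heater()
safety_monitor(95.0, heater)
print("heater on:", heater.on)    # True - normal operation
safety_monitor(130.0, heater)
print("heater on:", heater.on)    # False - hazard detected, safe state entered
```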
Security
 Security is a system attribute that reflects the ability
of the system to protect itself from external attacks.
 These external attacks are possible because most general-
purpose computers are now networked and are therefore
accessible by outsiders.
 For some systems, security is the most important
dimension of system dependability.
 Military systems, systems for electronic commerce, and
systems that involve the processing and interchange of
confidential information must be designed so that they
achieve a high level of security.

Examples of attacks
 Examples of attacks might be:
1. The installation of viruses and Trojan horses.
2. Unauthorized use of system services.
3. Unauthorized modification of a system or its data.
4. System disruption.
Security terminology
Examples of security terminology
Assuring security
 The controls that you might put in place to enhance system security are as follows:
1. Vulnerability avoidance: Controls that are intended to ensure that attacks are unsuccessful.
2. Attack detection and neutralization: Controls
that are intended to detect and repel attacks.
3. Exposure limitation and recovery: Controls that
support recovery from problems.
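 As a concrete illustration of control 2, the hypothetical sketch below counts failed login attempts and locks the account, detecting and repelling a simple password-guessing attack. The threshold, names, and password are illustrative assumptions, not a recommended production design.

```python
from collections import defaultdict

MAX_FAILED_ATTEMPTS = 3
failed_attempts = defaultdict(int)   # per-user count of consecutive failures
locked_accounts = set()              # users locked out after repeated failures

def login(user: str, password: str, real_password: str = "s3cret") -> bool:
    if user in locked_accounts:
        return False                              # neutralized: account is locked
    if password == real_password:
        failed_attempts[user] = 0                 # reset the counter on success
        return True
    failed_attempts[user] += 1                    # possible attack detected
    if failed_attempts[user] >= MAX_FAILED_ATTEMPTS:
        locked_accounts.add(user)                 # lock out further guesses
    return False

for guess in ["aaa", "bbb", "ccc", "s3cret"]:
    print(guess, "->", login("alice", guess))     # even the correct final guess is refused
```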
