SE Unit 6
Software Reliability and Quality Management: Software quality, SEI CMM and ISO-9001, Reliability,
Safety, Risk Analysis, computer-aided software engineering (CASE).
Some key attributes of software quality are:
Good design – It is always important to have a good and aesthetic design that pleases users.
Durability – Although durability can be a confusing term, in this context it means the ability of the
software to work without any issue for a long period of time.
Consistency – Software should perform consistently across platforms and devices.
Maintainability – Bugs associated with the software should be easy to capture and fix quickly, and
new tasks and enhancements should be possible to add without any trouble.
Value for money – Customers and the companies that build the software should feel that the money
spent on it has not gone to waste.
What is Software Quality Model?
Software quality models were proposed to measure the quality of any software product.
Three widely accepted models for measuring software quality are McCall's model, Boehm's model and Dromey's model.
McCall's Model
McCall's model was first introduced in the US Air Force in the year 1977. The main intention of
this model was to maintain harmony between users and developers.
Dromey’s model is mainly focused on the attributes and sub-attributes to connect properties of the
software.
Management plan – Have a clear idea about how the quality assurance process will be
carried out throughout the project. The quality engineering activities required should also be set at
the beginning, along with a check of the team's skills.
1. Defect Density
The first measure of the quality of any product is the number of defects found and fixed. Though
there are many "conditions apply" cases, this is the first ballpark estimate of the quality of the
software. The more defects found, the poorer the quality of the development. So
the management should strive hard to improve development and do an RCA (Root Cause Analysis) to
find why the quality is taking a hit.
Defect Density = No. of Defects Found / Size of the AUT (Application Under Test) or module
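As a rough illustration of this formula, here is a minimal Python sketch that computes defect density per module. The module names, defect counts and the choice of KLOC as the size unit are assumptions made for the example, not values from the text.

```python
# Minimal sketch: defect density = defects found / size of the module.
# Module sizes (in KLOC) and defect counts below are made-up example data.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Return defects per KLOC for a module or for the whole AUT."""
    return defects_found / size_kloc

modules = {
    "login": (12, 4.0),      # (defects found, size in KLOC)
    "payments": (30, 7.5),
    "reporting": (9, 6.0),
}

for name, (defects, kloc) in modules.items():
    print(f"{name}: {defect_density(defects, kloc):.2f} defects/KLOC")
```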
2. Defect Removal Efficiency (DRE)
This is an important metric for assessing the effectiveness of a testing team. DRE is an indicator of
how many defects the tester or the testing team was able to stop from reaching the
production environment. Every quality team wants to ensure a DRE of 100%.
DRE = A / (A + B) x 100
A – Number of defects found before production
B – Number of defects found in production
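A minimal sketch of the DRE formula above, assuming hypothetical defect counts: A is the count found before release and B is the count reported from production.

```python
# DRE = A / (A + B) x 100, where
#   A = defects found before production (by the testing team)
#   B = defects found in production (by users or monitoring)

def defect_removal_efficiency(found_before_prod: int, found_in_prod: int) -> float:
    total = found_before_prod + found_in_prod
    if total == 0:
        return 100.0  # no defects at all: nothing escaped to production
    return found_before_prod / total * 100

# Example: 95 defects caught in testing, 5 escaped to production.
print(f"DRE = {defect_removal_efficiency(95, 5):.1f}%")  # 95.0%
```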
3. Mean Time Between Failures (MTBF)
As the name suggests, MTBF is the average time between two failures in a system. Based on the AUT and
the expectations of the business, the definition of failure may vary.
For any online website or mobile application, a crash or a disconnection from the database could be the
expected failure. No team can produce software that never breaks or fails, so the onus is always on
increasing the MTBF as much as possible, which means that in a given time frame the number of times the
application fails should be reduced to an acceptable number.
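The sketch below estimates MTBF from a list of failure timestamps over an observation window. The timestamps, the window and the definition of "failure" (any logged outage) are assumptions for the example.

```python
# MTBF = total operating time / number of failures observed in that window.
# The failure timestamps below are invented example data.
from datetime import datetime

failures = [
    datetime(2023, 1, 3, 10, 0),
    datetime(2023, 1, 20, 2, 30),
    datetime(2023, 2, 14, 18, 45),
]
observation_start = datetime(2023, 1, 1)
observation_end = datetime(2023, 3, 1)

operating_hours = (observation_end - observation_start).total_seconds() / 3600
mtbf_hours = operating_hours / len(failures)
print(f"MTBF = {mtbf_hours:.1f} hours")
```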
4. Mean Time To Recover (MTTR)
This again is quite self-explanatory. The mean time to recover is basically the time it takes for the
developers to find a critical issue with the system, fix it and push the fix to production. Hence
it is the average time the team needs to fix an issue in production. It is more of a maintenance
contract metric, where an MTTR of 24 hours would be preferred over an MTTR of 2 days for
obvious reasons.
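Along the same lines, a hedged sketch of MTTR: the average of (fix deployed minus issue detected) over a set of hypothetical production incidents.

```python
# MTTR = average time from detection of a production issue to the fix reaching production.
from datetime import datetime, timedelta

# (detected, fixed) pairs -- invented example incidents.
incidents = [
    (datetime(2023, 1, 5, 9, 0),  datetime(2023, 1, 5, 15, 0)),
    (datetime(2023, 2, 1, 22, 0), datetime(2023, 2, 2, 10, 0)),
    (datetime(2023, 2, 20, 8, 0), datetime(2023, 2, 20, 11, 30)),
]

total_repair = sum((fixed - detected for detected, fixed in incidents), timedelta())
mttr = total_repair / len(incidents)
print(f"MTTR = {mttr}")  # average repair time, here 7:10:00
```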
5. Application Crash Rate
This is an important metric for mobile apps and online websites. It is a measure of how often the mobile
app or website crashes in any environment, and it is an indicator of the quality of the code. The better
the code, the longer it will be able to sustain without crashing.
In recent times, where the speed of delivery has taken utmost importance, traditional methods
like the waterfall SDLC model have taken a backseat, giving way to the more dynamic and fast-
paced Agile, Scrum and Lean methodologies.
6. Lead Time:
Lead time is defined as the time it takes from project or sprint kick-off to completion.
In an agile process, we normally pick up user stories that will be delivered at the end of the sprint.
The lead time is thus defined as the time it takes to complete and deliver these user stories.
7. Cycle Time
Cycle time is similar to lead time, with the difference that lead time is measured per user story,
while cycle time is measured per task. For example, if database creation is one task within a user story
related to client data, then the time taken to create the database would be the cycle time, while the
time taken to deliver the complete user story would be the lead time. The cycle time data is used to
arrive at delivery estimation timelines for future sprints.
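To make the distinction concrete, here is a small sketch that derives both metrics from timestamped records. The story, task names and dates are hypothetical.

```python
# Lead time: sprint/story kick-off -> story delivered (measured per user story).
# Cycle time: task started -> task finished (measured per task).
from datetime import datetime

story = {
    "kicked_off": datetime(2023, 5, 1),
    "delivered": datetime(2023, 5, 10),
    "tasks": {
        "create database": (datetime(2023, 5, 2), datetime(2023, 5, 3)),
        "build client-data API": (datetime(2023, 5, 3), datetime(2023, 5, 8)),
    },
}

lead_time = story["delivered"] - story["kicked_off"]
print(f"Lead time for the story: {lead_time.days} days")

for task, (started, finished) in story["tasks"].items():
    print(f"Cycle time for '{task}': {(finished - started).days} day(s)")
```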
8. Team Velocity
Team Velocity is a very important metric for Agile/Scrum. It is an indicator of the number of tasks or
user stories a team is able to complete during a single sprint. This does not include the items moved
to the backlog and incomplete items. Only fully completed user stories are included for velocity
calculations. This is an important metric because based on the team velocity, the management
would decide on the number of stories they can pick up for the next sprint.
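A small sketch of the velocity calculation, assuming story points per user story and a "done" flag; only fully completed stories are counted, as described above.

```python
# Velocity = sum of story points of user stories fully completed in the sprint.
# Stories moved back to the backlog or left incomplete are excluded.

sprint_stories = [
    {"id": "US-101", "points": 5, "done": True},
    {"id": "US-102", "points": 3, "done": True},
    {"id": "US-103", "points": 8, "done": False},  # incomplete -> not counted
]

velocity = sum(s["points"] for s in sprint_stories if s["done"])
print(f"Sprint velocity: {velocity} story points")  # 8
```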
9. First Time Pass Rate
This metric is in line with the Agile principle of dynamic, fast and quality delivery. It is an
indicator of the number of test cases that pass in the first run itself, and is also an indicator of the
quality of development. In simpler terms, it means that no defects were found in the developed
code when it went through testing for the first time.
10. Defect Count per Sprint
As the name suggests, this metric takes the count of defects found in each sprint. It is a very
simple yet useful metric for assessing the quality of the user stories delivered during any sprint.
The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level evolutionary path of
increasingly organized and systematically more mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and
development center sponsored by the U.S. Department of Defense (DOD) and now part of Carnegie
Mellon University. SEI was founded in 1984 to address software engineering issues and, in a broad
sense, to advance software engineering methodologies. More specifically, SEI was established to
optimize the process of developing, acquiring and maintaining heavily software-reliant systems for
the DOD. SEI advocates industry-wide adoption of the CMM Integration (CMMI), which is an
evolution of CMM. The CMM model is still widely used as well.
CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the International
Organization for Standardization. The ISO 9000 standards specify an effective quality system for
manufacturing and service industries; ISO 9001 deals specifically with software development and
maintenance.
The main difference between CMM and ISO 9001 lies in their respective purposes: ISO 9001 specifies
a minimal acceptable quality level for software processes, while CMM establishes a framework for
continuous process improvement. It is more explicit than the ISO standard in defining the means to
be employed to that end.
There are five levels to the CMM development process. They are the following:
1. Initial. At the initial level, processes are disorganized, ad hoc and even chaotic. Success likely
depends on individual efforts and is not considered to be repeatable. This is because
processes are not sufficiently defined and documented to enable them to be replicated.
2. Repeatable. At the repeatable level, requisite processes are established, defined and
documented. As a result, basic project management techniques are established, and
successes in key process areas are able to be repeated.
3. Defined. At the defined level, an organization develops its own standard software
development process. These defined processes enable greater attention to documentation,
standardization and integration.
4. Managed. At the managed level, an organization monitors and controls its own processes
through data collection and analysis.
5. Optimizing. At the optimizing level, processes are constantly improved through monitoring
feedback from processes and introducing innovative processes and functionality.
The Capability Maturity Model takes software development processes from disorganized and chaotic
to predictable and constantly improving.
Advantages of SEI CMM
Quality deliverables
Easier management
Cost effectiveness
CMM vs. CMMI (Capability Maturity Model Integration): What's the difference?
CMMI is a newer, updated model of CMM. SEI developed CMMI to integrate and standardize CMM,
which has different models for each function it covers. These models were not always in sync;
integrating them made the process more efficient and flexible.
CMMI includes additional guidance on how to improve key processes. It also incorporates ideas
from Agile development, such as continuous improvement.
SEI released the first version of CMMI in 2002. In 2013, Carnegie Mellon formed the CMMI Institute
to oversee CMMI services and future model development. ISACA, a professional organization for IT
governance, assurance and cyber security professionals, acquired CMMI Institute in 2016. The most
recent version -- CMMI V2.0 -- came out in 2018. It focuses on establishing business objectives and
tracking those objectives at every level of business maturity.
CMMI adds Agile principles to CMM to help improve development processes, software configuration
management and software quality management. It does this, in part, by incorporating continuous
feedback and continuous improvement into the software development process. Under CMMI,
organizations are expected to continually optimize processes, record feedback and use that
feedback to further improve processes in a cycle of improvement.
One criticism of CMM is that it is too process-oriented and not goal-oriented enough. Organizations
have found it difficult to tailor CMM to specific goals and needs. One of CMMI's improvements is to
focus on strategic goals. CMMI is designed to make it easier for businesses to apply the methodology
to specific uses than with CMM.
Like CMM, CMMI consists of five process maturity levels. However, they are different from the levels
in CMM.
1. Initial. Processes are unpredictable and reactive. They increase risk and decrease efficiency.
2. Managed. Processes are planned and managed, but they still have issues.
3. Defined. Processes become more proactive than reactive.
4. Quantitatively managed. Processes are measured and controlled using quantitative data.
5. Optimizing. The organization has a set of consistent processes that are constantly being
improved and optimized.
Computer-Aided Software Engineering (CASE)
Computer-aided software engineering (CASE) is the use of software tools and methods to support
activities across the software development lifecycle. CASE can also serve as a repository for
project-related documents like business plans, requirements and design specifications. One of the
best advantages of using CASE is the delivery of the final product, which is more likely to meet
real-world requirements, as it ensures that customers remain part of the process.
The CASE approach covers the entire cycle of product development, including code generation,
product tools, repositories, prototyping and other tools.
Various tools are incorporated in CASE and are called CASE tools, which are used to support different
stages and milestones in a software development lifecycle.
Diagramming Tools: Help in diagrammatic and graphical representations of the data and
system processes.
Computer Display and Report Generators: Help in understanding the data requirements and
the relationships involved.
Analysis Tools: Focus on inconsistent, incorrect specifications involved in the diagram and
data flow.
Central Repository: Provides the single point of storage for data diagrams, reports and
documents related to project management.
Code Generators: Aid in the auto generation of code, including definitions, with the help of
the designs, documents and diagrams.
The advantages of the CASE approach include:
As special emphasis is placed on redesign as well as testing, the servicing cost of a product
over its expected lifetime is considerably reduced.
Chances to meet real-world requirements are more likely and easier with a computer-aided
software engineering approach.
CASE indirectly provides an organization with a competitive advantage by helping ensure the
development of high-quality products.
Risk Analysis
The term risk analysis refers to the assessment process that identifies the potential for any adverse
events that may negatively affect organizations and the environment. Risk analysis is commonly
performed by corporations (banks, construction groups, health care, etc.), governments, and
nonprofits. Conducting a risk analysis can help organizations determine whether they should
undertake a project or approve a financial application, and what actions they may need to take to
protect their interests. This type of analysis facilitates a balance between risks and risk reduction.
Risk analysts often work together with forecasting professionals to minimize future negative unforeseen
effects.
KEY TAKEAWAYS
Risk analysis seeks to identify, measure, and mitigate various risk exposures or hazards
facing a business, investment, or project.
Quantitative risk analysis uses mathematical models and simulations to assign numerical
values to risk.
Qualitative risk analysis relies on a person's subjective judgment to build a theoretical model
of risk for a given scenario.
Risk assessment enables corporations, governments, and investors to assess the probability that an
adverse event might negatively impact a business, economy, project, or investment. Assessing risk is
essential for determining how worthwhile a specific project or investment is and the best process(es)
to mitigate those risks. Risk analysis provides different approaches that can be used to assess the risk
and reward tradeoff of a potential investment opportunity.
A risk analyst starts by identifying what could potentially go wrong. These negatives must be
weighed against a probability metric that measures the likelihood of the event occurring.
Finally, risk analysis attempts to estimate the extent of the impact that will be made if the event
happens. Many risks that are identified, such as market risk, credit risk, currency risk, and so on, can
be reduced through hedging or by purchasing insurance.
Almost all large businesses require at least a minimal form of risk analysis. For example, commercial
banks need to properly hedge foreign exchange exposure of overseas loans, while large department
stores must factor in the possibility of reduced revenues due to a global recession. It is important to
know that risk analysis allows professionals to identify and mitigate risks, but not avoid them
completely.
Under quantitative risk analysis, a risk model is built using simulation or deterministic statistics to
assign numerical values to risk. Inputs that are mostly assumptions and random variables are fed
into a risk model.
For any given range of input, the model generates a range of output or outcome. The model's output
is analyzed using graphs, scenario analysis, and/or sensitivity analysis by risk managers to make
decisions to mitigate and deal with the risks.
A Monte Carlo simulation can be used to generate a range of possible outcomes of a decision made
or action taken. The simulation is a quantitative technique that calculates results for the random
input variables repeatedly, using a different set of input values each time. The resulting outcome
from each input is recorded, and the final result of the model is a probability distribution of all
possible outcomes.
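The sketch below is a minimal Monte Carlo simulation using NumPy: it repeatedly samples the random inputs, records each outcome, and summarizes the resulting distribution. The choice of normally distributed revenue and cost, and all of the parameter values, are illustrative assumptions, not a prescribed model.

```python
# Monte Carlo simulation sketch: sample random inputs many times,
# record the outcome of each run, and inspect the resulting distribution.
import numpy as np

rng = np.random.default_rng(seed=42)
n_runs = 100_000

# Assumed input distributions (illustrative only).
revenue = rng.normal(loc=1_000_000, scale=150_000, size=n_runs)
costs = rng.normal(loc=800_000, scale=100_000, size=n_runs)

profit = revenue - costs  # outcome of each simulated run

print(f"Mean profit:    {profit.mean():,.0f}")
print(f"Std deviation:  {profit.std():,.0f}")
print(f"5th percentile: {np.percentile(profit, 5):,.0f}  (a 'worst case' view)")
print(f"P(loss):        {(profit < 0).mean():.1%}")
```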
The outcomes can be summarized on a distribution graph showing some measures of central
tendency such as the mean and median, and assessing the variability of the data through standard
deviation and variance. The outcomes can also be assessed using risk management tools such as
scenario analysis and sensitivity tables. A scenario analysis shows the best, middle, and worst
outcome of any event. Separating the different outcomes from best to worst provides a reasonable
spread of insight for a risk manager.
For example, an American company that operates on a global scale might want to know how
its bottom line would fare if the exchange rate of select countries strengthens. A sensitivity table
shows how outcomes vary when one or more random variables or assumptions are changed.
Elsewhere, a portfolio manager might use a sensitivity table to assess how changes to the different
values of each security in a portfolio will impact the variance of the portfolio. Other types of risk
management tools include decision trees and break-even analysis.
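The following sketch builds a toy sensitivity table of the kind just described: it shifts one assumption, the exchange rate, while holding everything else fixed, and tabulates the effect on the bottom line. The company figures, margin and rates are hypothetical.

```python
# Sensitivity table sketch: vary one assumption (the USD-per-EUR exchange rate)
# and tabulate its effect on total profit. All figures are hypothetical.

foreign_revenue_eur = 50_000_000   # revenue earned abroad, in EUR
domestic_profit_usd = 10_000_000   # profit earned at home, in USD
foreign_margin = 0.12              # profit margin on foreign revenue

print(f"{'USD per EUR':>12} | {'Total profit (USD)':>18}")
for rate in (1.00, 1.05, 1.10, 1.15, 1.20):
    foreign_profit_usd = foreign_revenue_eur * foreign_margin * rate
    total = domestic_profit_usd + foreign_profit_usd
    print(f"{rate:>12.2f} | {total:>18,.0f}")
```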
Qualitative risk analysis is an analytical method that does not identify and evaluate risks with
numerical and quantitative ratings. Qualitative analysis involves a written definition of the
uncertainties, an evaluation of the extent of the impact (if the risk ensues), and countermeasure
plans in the case of a negative event occurring.
Examples of qualitative risk tools include SWOT analysis, cause and effect diagrams, decision
matrix, game theory, etc. A firm that wants to measure the impact of a security breach on its servers
may use a qualitative risk technique to help prepare it for any lost income that may occur from a
data breach.
While most investors are concerned about downside risk, mathematically, the risk is the variance
both to the downside and the upside.
Value at risk (VaR) is a statistic that measures and quantifies the level of financial risk within a
firm, portfolio, or position over a specific time frame. This metric is most commonly used by
investment and commercial banks to determine the extent and occurrence ratio of potential losses
in their institutional portfolios. Risk managers use VaR to measure and control the level of risk
exposure. One can apply VaR calculations to specific positions or whole portfolios or to measure
firm-wide risk exposure.
VaR is calculated by shifting historical returns from worst to best with the assumption that returns
will be repeated, especially where it concerns risk. As a historical example, let's look at the Nasdaq
100 ETF, which trades under the symbol QQQ (sometimes called the "cubes") and which started
trading in March of 1999. If we calculate each daily return, we produce a rich data set of more than
1,400 points. The worst are generally visualized on the left, while the best returns are placed on the
right.
For more than 250 days, the daily return for the ETF was calculated between 0% and 1%. In January
2000, the ETF returned 12.4%. But there are points at which the ETF resulted in losses as well. At its
worst, the ETF ran daily losses of 4% to 8%. This period is referred to as the ETF's worst 5%. Based on
these historic returns, we can assume with 95% certainty that the ETF's largest losses won't go
beyond 4%. So if we invest $100, we can say with 95% certainty that our losses won't go beyond $4.
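A hedged sketch of the historical-simulation approach described above: sort the daily returns, take the 5th (or 1st) percentile as the cutoff, and scale by the invested amount. The returns here are randomly generated stand-ins, not actual QQQ data.

```python
# Historical VaR sketch: the 95% one-day VaR is the loss at the 5th percentile
# of the daily returns. Synthetic returns stand in for real ETF price history.
import numpy as np

rng = np.random.default_rng(seed=7)
daily_returns = rng.normal(loc=0.0005, scale=0.02, size=1400)  # stand-in history

investment = 100.0
for confidence in (0.95, 0.99):
    cutoff = np.percentile(daily_returns, (1 - confidence) * 100)
    var = -cutoff * investment
    print(f"{confidence:.0%} one-day VaR on ${investment:.0f}: about ${var:.2f}")
```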
One important thing to keep in mind is that VaR doesn't provide analysts with absolute certainty.
Instead, it's an estimate based on probabilities. The probability gets higher if you consider the higher
returns, and only consider the worst 1% of the returns. The Nasdaq 100 ETF's losses of 7% to 8%
represent the worst 1% of its performance. We can thus assume with 99% certainty that our worst
return won't lose us $7 on our investment. We can also say with 99% certainty that a $100
investment will only lose us a maximum of $7.
Risk is a probabilistic measure and so can never tell you for sure what your precise risk exposure is at
a given time, only what the distribution of possible losses is likely to be if and when they occur.
There are also no standard methods for calculating and analyzing risk, and even VaR can have
several different ways of approaching the task. Risk is often assumed to occur using normal
distribution probabilities, which in reality rarely occur and cannot account for extreme or "black
swan" events.
The financial crisis of 2008, for example, exposed these problems as relatively benign VaR
calculations greatly understated the potential occurrence of risk events posed by portfolios
of subprime mortgages.
Risk magnitude was also underestimated, which resulted in extreme leverage ratios within subprime
portfolios. As a result, the underestimations of occurrence and risk magnitude left institutions
unable to cover billions of dollars in losses as subprime mortgage values collapsed.
Software Reliability
Software reliability is the probability of failure-free operation of a computer program for a specified
period in a specified environment. Reliability is a customer-oriented view of software quality. It
relates to operation rather than design of the program, and hence it is dynamic rather than static. It
accounts for the frequency with which faults cause problems. Measuring and predicting software
reliability has become vital for software managers, software engineers, managers and engineers of
products that include software, and to users of these products.
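One common way to quantify this definition (an assumption here, since the text does not prescribe a particular model) is the exponential reliability model: with a constant failure rate λ, the probability of failure-free operation for a period t is R(t) = e^(-λt), where λ can be estimated as 1 / MTBF from observed failure data.

```python
# Exponential reliability model sketch: R(t) = exp(-lambda * t), assuming a
# constant failure rate. lambda is estimated as 1 / MTBF from observed data.
import math

mtbf_hours = 500.0                 # assumed mean time between failures
failure_rate = 1.0 / mtbf_hours    # lambda

def reliability(t_hours: float) -> float:
    """Probability of failure-free operation for t_hours in the stated environment."""
    return math.exp(-failure_rate * t_hours)

for t in (24, 168, 720):  # roughly one day, one week, one month of operation
    print(f"R({t} h) = {reliability(t):.3f}")
```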