Software Engineering: UNIT-1
Introduction to Software Engineering:
The term software engineering is the product of two words, software and engineering.

The software is a collection of integrated programs. Software consists of carefully organized instructions and code written by developers in any of various particular computer languages, together with related documentation such as requirements, design models and user manuals.

Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.

Software Engineering is required:
Software engineering is required due to the following reasons. The necessity of software engineering appears because of a higher rate of progress in user requirements and the environment in which the program is working.

o Huge Programming: It is simpler to build a wall than a house or building; similarly, as the measure of programming becomes extensive, engineering has to step in to give it a scientific process.
o Adaptability: If the software procedure were not based on scientific and engineering ideas, it would be simpler to re-create new software than to scale an existing one.
o Cost: The hardware industry has demonstrated its skills, and huge manufacturing has brought down the cost of computer and electronic hardware; but the cost of programming remains high if the proper process is not adopted.
o Dynamic Nature: The continually growing and adapting nature of programming hugely depends upon the environment in which the client works. If the quality of the software is continually changing, new upgrades need to be made to the existing one.
o Quality Management: A better procedure of software development provides a better quality software product.

Characteristics of a good software engineer:
The features that good software engineers should possess are as follows:
Exposure to systematic methods, i.e., familiarity with software engineering principles.
Good technical knowledge of the project range (domain knowledge).
Good programming abilities.
Good communication skills. These skills comprise oral, written, and interpersonal skills.
High motivation.
Sound knowledge of fundamentals of computer science.
Intelligence.
Ability to work in a team.

Importance of Software Engineering:
3. To decrease time: Anything that is not made according to the project plan always wastes time. If you are making great software, you may need to run many codes to get the definitive running code. This is a very time-consuming procedure, and if it is not well handled, it can take a lot of time. So if you are making your software according to the software engineering method, it will save a lot of time.
4. Handling big projects: Big projects are not done in a couple of days, and they need lots of patience, planning, and management. To invest six or seven months, any company requires heaps of planning, direction, testing, and maintenance. No one can say that the company has given four months to the task and the project is still in its first stage, because the company has provided many resources to the plan and it should be completed. So to handle a big project without any problem, the company has to adopt a software engineering method.
5. Reliable software: Software should be reliable, meaning that if you have delivered the software, it should work for at least its given time or subscription, and if any bugs appear in the software, the company is responsible for solving them. Because in software engineering testing and maintenance are provided, there is no worry about its reliability.
6. Effectiveness: Effectiveness comes if anything is made according to the standards. Software standards are the big target of companies to make software more effective. So software becomes more effective with the help of software engineering.
Software Size Factors:
4. Object Points:
o Measures the number of objects or classes in object-oriented design.
o Factors in object complexity and their interactions.
5. Feature Points:
o An extension of function points, taking into account additional factors like algorithm complexity.
6. Story Points:
o Used in Agile methodologies.
o Measures the effort required to implement a user story based on complexity, risks, and uncertainties.
7. Effort (Person-Months):
o Measures the amount of work required in terms of person-months or person-hours.
o Derived from other size factors to plan resources.
8. Software Size Metrics:
o Kilo Lines of Code (KLOC): Thousands of lines of code.
o Effective Lines of Code (eLOC): Lines of code excluding comments and blank lines.
9. Complexity Metrics:
o Cyclomatic Complexity: Measures the number of linearly independent paths through a program's source code (a code sketch follows this list).
o Halstead Complexity Measures: Based on the number of operators and operands in the code.
10. Work Breakdown Structure (WBS):
o Divides the project into smaller, manageable sections or tasks.
o Each section's size can be estimated and summed to give the total project size.
These size factors are often used in conjunction with estimation models such as COCOMO (Constructive Cost Model), which uses size factors to predict the effort, cost, and duration of a software project.
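As a quick illustration of the cyclomatic complexity metric listed above, here is a minimal Python sketch (the function and graph names are hypothetical) that computes V(G) = E - N + 2P from a control-flow graph given as an edge list:

# Minimal sketch: cyclomatic complexity V(G) = E - N + 2P,
# where E = edges, N = nodes, P = connected components (1 for a single program).
def cyclomatic_complexity(edges, num_components=1):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * num_components

# Control-flow graph of a function with one if/else branch:
cfg = [("entry", "if"), ("if", "then"), ("if", "else"),
       ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(cfg))  # 5 - 5 + 2 = 2 independent paths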
Quality attributes of software:
1. Reliability:
o The ability of the software to perform its required functions under stated conditions for a specified period.
o Metrics: Mean Time Between Failures (MTBF), Mean Time To Repair (MTTR); a worked example follows this list.
2. Maintainability:
o The ease with which the software can be modified to correct faults, improve performance, or adapt to a changed environment.
o Metrics: Change request frequency, defect density, code complexity.
3. Usability:
o The degree to which the software can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction.
o Metrics: User satisfaction surveys, task completion time, error rates.
4. Efficiency:
o The capability of the software to provide appropriate performance relative to the amount of resources used.
o Metrics: Response time, throughput, resource utilization.
5. Portability:
o The ease with which the software can be transferred from one environment to another.
o Metrics: Number of environments supported, effort required for porting.
6. Security:
o The software's ability to protect information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access.
o Metrics: Number of security incidents, time to detect and respond to security threats.
7. Functionality:
o The degree to which the software performs its intended functions.
o Metrics: Compliance with requirements, number of features implemented.
8. Interoperability:
o The ability of the software to interact with other systems or software.
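To make the reliability metrics above concrete, here is a small illustrative Python sketch (the incident log, observation window and units are hypothetical) that derives MTBF, MTTR and availability from failure/restore timestamps:

# Hypothetical incident log: (failure time, restored time) pairs, in hours.
failures = [(100.0, 102.0), (250.0, 251.0), (400.0, 404.0)]

window = 500.0                                       # total observation time (hours)
downtime = sum(up - down for down, up in failures)   # 7.0 hours spent on repairs
mtbf = (window - downtime) / len(failures)           # Mean Time Between Failures
mttr = downtime / len(failures)                      # Mean Time To Repair
availability = mtbf / (mtbf + mttr)
print(f"MTBF={mtbf:.1f}h MTTR={mttr:.2f}h availability={availability:.3f}")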
Quality and productivity Factors:
Quality and productivity are two critical aspects of software engineering that significantly influence the success of a project. Here are some important factors that affect quality and productivity in software engineering:
1. Development Process:
o The methodologies and practices used during software development.
o Metrics: Development time, defect rates, adherence to schedule.
2. Team Skills and Experience:
o The knowledge, experience, and skills of the development team.
o Metrics: Team experience levels, training hours, developer productivity.
3. Tools and Technologies:
o The effectiveness of development tools, programming languages, and frameworks used.
o Metrics: Tool usage frequency, defect rates, development speed.
4. Project Management:
o The practices related to planning, tracking, and managing software projects.
o Metrics: Schedule adherence, budget adherence, project success rates.
5. Communication:
o The effectiveness of communication within the development team and with stakeholders.
o Metrics: Frequency of meetings, communication clarity, feedback loop efficiency.
6. Requirements Management:
o The process of eliciting, documenting, and managing software requirements.
o Metrics: Requirements stability, requirements clarity, changes in requirements.
7. Code Quality:
o The overall quality of the source code, including readability, maintainability, and complexity.
o Metrics: Code reviews, static code analysis results, refactoring frequency.
8. Testing and Quality Assurance:
o The processes and practices used to ensure software quality through testing and validation.
o Metrics: Test coverage, defect detection rate, defect resolution time.
9. Automation:
o The extent to which development and testing processes are automated.
o Metrics: Build frequency, deployment frequency, automated test coverage.
10. Work Environment:
o The physical and psychological conditions under which the development team works.
o Metrics: Team morale, turnover rates, work-life balance.
Balancing these quality and productivity factors is crucial for delivering high-quality software within time and budget constraints. Effective management practices, continuous improvement, and adopting the right tools and methodologies can significantly enhance both quality and productivity in software engineering.

Managerial issues in software engineering:
Managerial issues in software engineering encompass a wide range of challenges that managers face while planning, executing, and controlling software projects. Addressing these issues effectively is crucial for the successful delivery of software products. Here are some of the key managerial issues:
1. Project Planning and Scheduling:
• Estimating Project Size and Effort: Accurately estimating the size and effort required for a project can be challenging, leading to over- or under-estimation.
• Resource Allocation: Ensuring that the right resources (e.g., developers, testers) are available and efficiently utilized throughout the project lifecycle.
• Scheduling: Creating realistic timelines and milestones, and adapting to changes in scope or unexpected delays.
2. Risk Management:
• Identifying Risks: Recognizing potential risks early in the project, such as technical challenges, resource shortages, or changes in requirements.
• Mitigating Risks: Developing strategies to minimize the impact of identified risks and preparing contingency plans.
3. Scope Management:
• Scope Creep: Managing changes to the project scope to prevent uncontrolled growth and ensure that new requirements are properly evaluated and integrated.
• Requirements Management: Ensuring that requirements are well-defined, documented, and agreed upon by all stakeholders.
4. Quality Management:
• Quality Assurance: Implementing processes to ensure that the software meets defined quality standards and is free of defects.
• Testing: Planning and executing comprehensive testing strategies to identify and fix issues before deployment.
5. Team Management:
• Team Dynamics: Managing diverse teams, addressing conflicts, and fostering a collaborative and productive work environment.
• Skill Development: Ensuring that team members have the necessary skills and providing training or mentoring as needed.
• Motivation and Retention: Keeping the team motivated and engaged, and addressing factors that contribute to employee turnover.
6. Communication and Collaboration:
• Stakeholder Communication: Ensuring clear and consistent communication with all stakeholders, including clients, team members, and management.
• Collaboration Tools: Utilizing effective tools and practices to facilitate collaboration and information sharing among team members.
7. Budget Management:
• Cost Estimation: Accurately estimating project costs and creating realistic budgets.
• Budget Control: Monitoring expenditures and ensuring that the project stays within budget.
8. Process Management:
• Adopting Methodologies: Choosing and implementing the appropriate software development methodologies (e.g., Agile, Waterfall) that fit the project needs.
• Process Improvement: Continuously evaluating and improving development processes to enhance efficiency and quality.
9. Technology Management:
• Tool Selection: Choosing the right tools and technologies that align with project requirements and team capabilities.
• Keeping Up with Trends: Staying updated with the latest industry trends and advancements to ensure that the project benefits from modern practices and technologies.
10. Change Management:
• Handling Changes: Managing changes in project scope, technology, team composition, and other factors effectively.
• Adaptability: Ensuring that the team and processes are flexible enough to adapt to changes without significant disruption.
12. Client and User Involvement:
• Requirements Gathering: Engaging clients and users in the requirements-gathering process to ensure that the final product meets their needs.
• Feedback: Soliciting and incorporating feedback from clients and users throughout the project lifecycle.
13. Compliance and Legal Issues:
• Regulatory Compliance: Ensuring that the software complies with relevant regulations and standards.
• Intellectual Property: Managing intellectual property rights and ensuring that the software does not infringe on third-party rights.
Effectively managing these issues requires a combination of strong leadership, effective communication, and the ability to adapt to changing circumstances. By addressing these managerial challenges, software engineering managers can increase the likelihood of delivering successful projects on time and within budget.

Planning a Software Project:
The objective of software project planning is to provide a framework that enables the manager to make reasonable estimates of resources, cost, and schedule. These estimates are made within a limited time frame at the
• Integration and Testing − All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system
is tested for any faults and failures.
• Deployment of system − Once the functional and non-functional testing is done, the
product is deployed in the customer environment or released into the market.
• Maintenance − There are some issues which come up in the client environment. To
fix those issues, patches are released. Also to enhance the product some better
versions are released. Maintenance is done to deliver these changes in the customer
environment.
All these phases are cascaded to each other, in which progress is seen as flowing steadily
downwards (like a waterfall) through the phases. The next phase is started only after the
defined set of goals is achieved for the previous phase and it is signed off, hence the name
"Waterfall Model". In this model, phases do not overlap.
Advantages:
• Works well for smaller projects where requirements are very well understood.
• It is disciplined in approach.
Disadvantages:
• No working software is produced until late during the life cycle.
• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for projects where requirements are at a moderate to high risk of changing. So, risk and uncertainty are high with this process model.

Spiral Model:
The spiral model, initially proposed by Boehm, is a combination of the waterfall and iterative models. Using the spiral model, the software is developed in a series of incremental releases. Each phase in the spiral model begins with a planning phase and ends with an evaluation phase.

Identification:
This phase starts with gathering the business requirements in the baseline spiral. In the subsequent spirals, as the product matures, identification of system requirements, subsystem requirements and unit requirements are all done in this phase. This phase also includes understanding the system requirements by continuous communication between the customer and the system analyst. At the end of the spiral, the product is deployed in the identified market.

Risk Analysis:
Risk Analysis includes identifying, estimating and monitoring the technical feasibility and management risks, such as schedule slippage and cost overrun. After testing the build, at the end of the first iteration, the customer evaluates the software and provides feedback.

Engineering or construct phase:
The construct phase refers to the production of the actual software product at every spiral. In the baseline spiral, when the product is just thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.
Evaluation Phase:
This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
The software project repeatedly passes through all these four phases.
Advantages:
• Risk management.
• Easy and frequent feedback from users.
Disadvantages:
• It doesn't work for smaller projects.
• Risk analysis requires specific expertise.
• It is a costly and complex model.
• Project success is highly dependent on the risk analysis phase.
Prototype Model:
To overcome the disadvantages of the waterfall model, this model is implemented with a special factor called a prototype. It is also known as the evaluation model.

A prototyping model starts with requirement analysis. In this phase, the requirements of the system are defined in detail. During the process, the users of the system are interviewed to know what their expectations from the system are.

The second phase is a preliminary design or a quick design. In this stage, a simple design of the system is created. However, it is not a complete design; it gives a brief idea of the system to the user. The quick design helps in developing the prototype.

Step 3: Build a Prototype
In this phase, an actual prototype is designed based on the information gathered from the quick design. It is a small working model of the required system.

Step 4: Initial user evaluation
In this stage, the proposed system is presented to the client for an initial evaluation. It helps to find out the strengths and weaknesses of the working model. Comments and suggestions are collected from the customer and provided to the developer.

Step 5: Refining prototype
This phase will not be over until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the approved final prototype.

Step 6: Implement Product and Maintain
Once the final system is developed based on the final prototype, it is thoroughly tested and deployed to production. The system undergoes routine maintenance to minimize downtime and prevent large-scale failures.

Advantages:
• Users are actively involved in development. Therefore, errors can be detected in the initial stage of the software development process.
• Missing functionality can be identified, which helps to reduce the risk of failure, as prototyping is also considered a risk reduction activity.
• Helps team members communicate effectively.
• Customer satisfaction exists because the customer can feel the product at a very early stage.

Disadvantages:
• Prototyping is a slow and time-taking process.
• The cost of developing a prototype is a total waste, as the prototype is ultimately thrown away.
• Prototyping may encourage excessive change requests.
• After seeing an early prototype model, the customers may think that the actual product will be delivered to them soon.
• The client may lose interest in the final product when he or she is not happy with the initial prototype.
The object-oriented life-cycle model:
• The usual division of a software project into phases remains intact with the use of object-oriented techniques.
• The requirements analysis stage strives to achieve an understanding of the client's application domain.
• The tasks that a software solution must address emerge in the course of requirements analysis.
• The requirements analysis phase remains completely independent of an implementation technique that might be applied later.
• In the system specification phase the requirements definition describes what the software product must do, but not how this goal is to be achieved.
• One point of divergence from conventional phase models arises because implementation with object-oriented programming is marked by the assembly of already existing components.

The advantages of the object-oriented life-cycle model:
• Design no longer is carried out independently of the later implementation, because during the design phase we must consider which components are available for the solution of the problem. Design and implementation become more closely associated, and even the choice of a different programming language can lead to completely different program structures.
• The duration of the implementation phase is reduced. In particular, (sub)products become available much earlier to allow testing of the correctness of the design. Incorrect decisions can be recognized and corrected earlier. This makes for closer feedback coupling of the design and implementation phases.
• The class library containing the reusable components must be continuously maintained. Savings at the implementation end are partially lost as they are reinvested in this maintenance. A new job title emerges, the class librarian, who is responsible for ensuring the efficient usability of the class library.
• During the test phase, the function of not only the new product but also of the reused components is tested. Any deficiencies in the latter must be documented exactly. The resulting modifications must be handled centrally in the class library to ensure that they impact on other projects, both current and future.
• Newly created classes must be tested for their general usability. If there is a chance that a component could be used in other projects as well, it must be included in the class library and documented accordingly. This also means that the new class must be announced and made accessible to other programmers who might profit from it. This places new requirements on the in-house communication structures.

The actual software life cycle recurs when new requirements arise in the company, which initiates a new requirements analysis stage.

The object and prototyping-oriented life-cycle model:
The specification phase steadily creates new prototypes. Each time we are confronted with the problem of having to modify or enhance existing prototypes. If the prototypes were already implemented with object-oriented technology, then modifications and extensions are particularly easy to carry out. This allows an abbreviation of the specification phase, which is particularly important when proposed solutions are repeatedly discussed with the client. With such an approach it is not important whether the prototype serves solely for specification purposes or whether it is to be incrementally developed to the final product. If no prototyping tools are available, object-oriented programming can serve as a substitute tool for modeling user interfaces. This particularly applies if an extensive class library is available for user interface elements.

For incremental prototyping (i.e. if the product prototype is to be used as the basis for the implementation of the product), object-oriented programming proves to be a suitable medium. Desired functionality can be added stepwise to the prototypes without having to change the prototype itself. This results in a clear distinction between the user interfaces modeled in the specification phase and the actual functionality of the program. This is particularly important for the following reasons:
• This assures that the user interface is not changed during the implementation of the program functionality. The user interface developed in collaboration with the client remains as it was defined in the specification phase.
• In the implementation of the functionality, each time a subtask is completed, a more functional prototype results, which can be tested (preferably together with the client) and compared with the specifications. During test runs situations sometimes arise that require rethinking the user interface. In such cases the software life cycle retreats one step and a new user interface prototype is constructed.

Since the user interface and the functional parts of the program are largely
Types of project planning:
Quality plan: Describes the quality procedures and standards that will be used in a project.
Validation plan: Describes the approach, resources and schedule used for system validation.
Configuration management plan: Describes the configuration management procedures and structures to be used.
Maintenance plan: Predicts the maintenance requirements of the system, maintenance costs and effort required.

Even though the above three approaches have their pros and cons, option 3 is the most productive.

There are several roles within each software project team. Some of the roles in a typical software project are listed below:
Module Leader: A software engineer who manages and leads the team working on a particular module of the software project. The module leader will conduct reviews and has to ensure the proper
Domain Consultant: An expert who knows the system of which the software is a part. This would involve the technical knowledge of how the entities of the domain interface with the software being developed. For example, a banking domain consultant or a telecom domain consultant.
to be performed and the differing abilities of the team members. Work products (requirements, design, source code, user manual, etc.) are discussed openly and are freely examined by all team members.
Advantages:
❖ Opportunity for each team member to contribute to decisions.
❖ Opportunity for team members to learn from one another.
❖ Increased job satisfaction that results from good communication in open, non-threatening work environments.
Disadvantages:
❖ Communication overhead required in reaching decisions.
❖ All team members must work well together.
❖ Less individual responsibility and authority can result in less initiative and less personal drive from team members.

The chief programmer team
Baker's organizational model ([Baker 1972]) is characterized by:
• The lack of a project manager who is not personally involved in system development.
• The use of very good specialists.
• The restriction of team size.
➢ The chief programmer team consists of:
• The chief programmer
• The project assistant
• The project secretary
• Specialists (language specialists, programmers, test specialists).
➢ The chief programmer is actively involved in the planning, specification and design process and, ideally, in the implementation process as well.
➢ The chief programmer controls project progress, decides all important questions, and assumes overall responsibility.
➢ The qualifications of the chief programmer need to be accordingly high.
➢ The project assistant is the closest technical coworker of the chief programmer.
➢ The project assistant supports the chief programmer in all important activities and serves as the chief programmer's representative in the latter's absence. This team member's qualifications need to be as high as those of the chief programmer.
➢ The project secretary relieves the chief programmer and all other programmers of administrative tasks.
➢ The project secretary administrates all programs and documents and assists in project progress checks.
➢ The main task of the project secretary is the administration of the project library.
➢ The chief programmer determines the number of specialists needed.
➢ Specialists select the implementation language, implement individual system components, choose and employ software tools, and carry out tests.

Advantages
• The chief programmer is directly involved in system development and can better exercise the control function.
• Communication difficulties of pure hierarchical organization are ameliorated. Reporting concerning project progress is institutionalized.
• Small teams are generally more productive than large teams.
Disadvantages
• It is limited to small teams. Not every project can be handled by a small team.
• Personnel requirements can hardly be met. Few software engineers can meet the qualifications of a chief programmer or a project assistant.
• The project secretary has an extremely difficult and responsible job, although it consists primarily of routine tasks, which gives it a subordinate position. This has significant psychological disadvantages. Due to the central position, the project secretary can easily become a bottleneck.
• The organizational model provides no replacement for the project secretary. The loss of the project secretary would have fatal consequences for the remaining course of the project.

Hierarchical organizational model
➢ There are many ways to organize the staff of a project. For a long time the organization of software projects oriented itself to the hierarchical organization common to other industrial branches. Special importance is
UNIT-II
SOFTWARE COST FACTORS
1. Programmer Ability
An experiment by Sackman and colleagues had the goal of determining the relative influence of batch and time-shared access on programmer productivity.
Ex: 12 programmers, averaging 11 years of experience, were each given 2 programs to write. The observed productivity variation was 16:1. Individual differences in ability can be significant.

2. Product Complexity
There are 3 categories of software product:
✓ Application programs
✓ Utility programs
✓ System programs
Brooks states that utility programs are 3 times as difficult to write as application programs, and that system programs are 3 times as difficult to write as utility programs: 1 (application) - 3 (utility) - 9 (system).
Boehm gives three levels of equations, where PM = programmer-months and KDSI = number of thousands of delivered source instructions:
Application programs: PM = 2.4*(KDSI)**1.05
Utility programs: PM = 3.0*(KDSI)**1.12
System programs: PM = 3.6*(KDSI)**1.20

3. Product Size
A large software product is obviously more expensive to develop than a small one. Boehm's equations indicate that the rate of increase in required effort grows with the number of source instructions at an exponential rate slightly greater than one.
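The three equations above can be applied directly. A minimal Python sketch (the 32-KDSI example size is hypothetical):

# Boehm's three basic-mode equations from the table above; KDSI is thousands
# of delivered source instructions, PM is programmer-months.
COEFFS = {"application": (2.4, 1.05), "utility": (3.0, 1.12), "system": (3.6, 1.20)}

def effort_pm(kdsi, kind):
    a, b = COEFFS[kind]
    return a * kdsi ** b

for kind in COEFFS:
    print(kind, round(effort_pm(32, kind), 1))  # effort for a 32-KDSI product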
4. Available Time
Total project effort is sensitive to the calendar time available for project completion. Most researchers agree that software projects require more total effort if development time is compressed or expanded from the optimal time.

5. Required Level of Reliability
Software reliability can be defined as the probability that a program will perform a required function under stated conditions for a stated period of time. It can be expressed in terms of accuracy, robustness, completeness, and consistency of the source code. Boehm describes five categories:

Category   | Effect of failure
Very low   | Slight inconvenience
Low        | Losses easily recovered
Nominal    | Moderately difficult to recover losses
High       | High financial loss
Very high  | Risk to human life

6. Level of Technology
The level of technology in a software development project is reflected by the programming language, the abstract machine, the programming practices and the software tools used. The number of source instructions written per day is largely dependent on the language used; statements written in a high-level language expand into several machine-level statements.

SOFTWARE COST ESTIMATION TECHNIQUES
✓ Software cost estimates are based on past performance.
✓ Historical data are used to identify cost factors and determine the relative importance of various factors within the environment of that organization.
✓ Cost estimates can be either top-down or bottom-up.
✓ Top-down estimation first focuses on system-level costs (such as the personnel required to develop the system).
✓ Bottom-up cost estimation first estimates the cost to develop each module or subsystem.
1. Expert judgement
➢ The most widely used cost estimation technique is expert judgement, which is an inherently top-down estimation technique.
➢ Expert judgement relies on the experience, background and business sense of one or more key people in the organization.
Disadvantages
✓ Experience can also be a liability.
✓ The expert may be confident that the project is similar to a previous one, but may have overlooked some factors that make the new project significantly different.

2. Delphi cost estimation
Developed by the Rand Corporation in 1948 to gain expert consensus without introducing adverse side effects. The Delphi technique can be adapted to software cost estimation in the following manner:
➢ A coordinator provides each estimator with the system definition document and a form for recording a cost estimate.
➢ Estimators complete their estimates anonymously. They may ask questions of the coordinator, but they do not discuss their estimates with one another.
➢ The coordinator prepares and distributes a summary of the estimators' responses, and includes any unusual rationales noted by the estimators.
➢ Estimators complete another estimate, using the results from the previous estimate.
➢ The process is iterated for as many rounds as required. No group discussion is allowed during the entire process.
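A toy Python sketch of the coordinator's summarizing step in the Delphi rounds described above (the estimate values are hypothetical person-months):

from statistics import mean, median

rounds = [
    [12, 20, 35, 18],   # round 1: initial anonymous estimates
    [15, 19, 24, 18],   # round 2: revised after seeing the summary
]
for i, estimates in enumerate(rounds, start=1):
    spread = max(estimates) - min(estimates)
    print(f"round {i}: median={median(estimates)} mean={mean(estimates):.1f} spread={spread}")

The shrinking spread between rounds is what signals that consensus is being reached.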
3. Work breakdown structure
✓ A work breakdown chart can indicate either product hierarchy or process hierarchy.
✓ Product hierarchy identifies the product components and indicates the manner in which the components are interconnected.
✓ Process hierarchy identifies the work activities and the relationships among those activities.

Product hierarchy
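A minimal Python sketch of bottom-up estimation over such a product hierarchy (module names and cost figures are hypothetical): each WBS node carries its own estimate, and the project total is the recursive sum over the tree:

# Each node is (name, own_cost, children); the estimate rolls up bottom-up.
def wbs_cost(node):
    name, own, children = node
    return own + sum(wbs_cost(child) for child in children)

editor = ("editor", 0, [("ui", 3.0, []), ("buffer", 2.0, []), ("search", 1.5, [])])
print(wbs_cost(editor))  # 6.5 (units, e.g. person-months, are hypothetical)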
4. Algorithmic cost models
➢ Algorithmic cost models are bottom-up estimators.
➢ The Constructive Cost Model (COCOMO) is an algorithmic cost model described by Boehm.
➢ COCOMO effort multipliers:
a. Product attributes
b. Computer attributes
c. Personnel attributes
d. Project attributes
Ex: The normal organic mode equations apply in the following types of situations: small to medium size projects (2K to 32K), a familiar application area, a stable and well-understood virtual machine, and an in-house development effort.

Staffing-level estimation
The staffing level of a project can be approximated by the Rayleigh distribution curve. Norden represented the Rayleigh curve by the following equation:

E = (K / td**2) * t * e**(-t**2 / (2 * td**2))

where E is the effort required at time t. E is an indication of the number of engineers (or the staffing level) at any particular time during the duration of the project, K is the area under the curve, and td is the time at which the curve attains its maximum value. It must be remembered that the results of Norden are applicable to general R&D projects and were not meant to model the staffing pattern of software development projects.
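A small Python sketch evaluating Norden's staffing equation above (the K and td values are hypothetical):

import math

def staffing(t, K, td):
    # E(t) = (K / td**2) * t * exp(-t**2 / (2 * td**2)); peaks at t = td.
    return (K / td**2) * t * math.exp(-t**2 / (2 * td**2))

K, td = 120.0, 12.0   # hypothetical: 120 person-months total, peak at month 12
for month in (3, 6, 12, 24):
    print(month, round(staffing(month, K, td), 2))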
Non-Technical Factors:
1. Application Domain
o If the application of the program is defined and well understood, the system requirements may be definitive and maintenance due to changing needs minimized.
o If the application is entirely new, it is likely that the initial requirements will be modified frequently, as users gain experience with the system.
2. Staff Stability
o It is simpler for the original writer of a program to understand and change an application than for some other person, who must understand the program by studying its reports and code listings.
o If the implementer of a system also maintains that system, maintenance costs will reduce.
o In practice, the nature of the programming profession is such that persons change jobs regularly. It is unusual for one person to develop and maintain an application throughout its useful life.
3. Program Lifetime
o Programs become obsolete when the application becomes obsolete, or when their original hardware is replaced and conversion costs exceed rewriting costs.
4. Dependence on External Environment
o If an application is dependent on its external environment, it must be modified as the environment changes.
o For example:
o Changes in a taxation system might need payroll, accounting, and stock control programs to be modified.
o Taxation changes are quite frequent, and maintenance costs for these programs are associated with the frequency of these changes.
o A program used in mathematical applications does not typically depend on humans changing the assumptions on which the program is based.
5. Hardware Stability
o If an application is designed to operate on a specific hardware configuration and that configuration does not change during the program's lifetime, no maintenance costs due to hardware changes will be incurred.
o However, hardware developments are so rapid that this situation is rare.
o The application must be changed to use new hardware that replaces obsolete equipment.

Technical Factors:
Technical factors include the following:

Programming Style
o The method in which a program is written contributes to its understandability and hence the ease with which it can be modified.

Program Validation and Testing
o Generally, the more time and effort spent on design validation and program testing, the fewer the bugs in the program and, consequently, the lower the maintenance costs resulting from bug correction.
o Maintenance costs due to bug correction are governed by the type of fault to be repaired.
o Coding errors are generally relatively cheap to correct; design errors are more expensive, as they may involve the rewriting of one or more program units.
o Bugs in the software requirements are usually the most expensive to correct, because of the drastic redesign that is generally involved.
Documentation
o If a program is supported by clear, complete yet concise documentation, the task of understanding the application can be relatively straightforward.
o Program maintenance costs tend to be lower for well-documented systems than for systems supplied with inadequate or incomplete documentation.
Configuration Management Techniques
o One of the essential costs of maintenance is keeping track of all system documents and ensuring that
these are kept consistent.
o Effective configuration management can help control these costs.
Following are the features of a good SRS document:

(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in another as textual.
(b) One requirement may state that all lights shall be green while another states that all lights shall be blue.
(2). There may be a logical or temporal conflict between two specified actions. For example,
(a) One requirement may determine that the program will add two inputs, and another may determine that the program will multiply them.
(b) One requirement may state that "A" must always follow "B," while another requires that "A and B" co-occur.
(3). Two or more requirements may define the same real-world object but use different terms for that object. For example, a program's request for user input may be called a "prompt" in one requirement and a "cue" in another. The use of standard terminology and descriptions promotes consistency.

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This suggests that each element is uniquely interpreted. In case a term is used with multiple definitions, the requirements document should determine the implications in the SRS so that it is clear and simple to understand.

5. Ranking for importance and stability: The SRS is ranked for importance and stability if each requirement in it has an identifier to indicate either the significance or stability of that particular requirement. Typically, all requirements are not equally important. Some prerequisites may be essential, especially for life-critical applications, while others may be desirable. Each element should be identified to make these differences clear and explicit. Another way to rank requirements is to distinguish classes of items as essential, conditional, and optional.

6. Modifiability: The SRS should be made as modifiable as possible and should be capable of quickly accepting changes to the system to some extent. Modifications should be perfectly indexed and cross-referenced.

7. Verifiability: The SRS is verifiable when the specified requirements can be verified with a cost-effective process to check whether the final software meets those requirements. The requirements are verified with the help of reviews.

8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the referencing of each requirement in future development or enhancement documentation.
1. Backward Traceability: This depends upon each requirement explicitly referencing its source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name or reference number.
The forward traceability of the SRS is especially crucial when the software product enters the operation and maintenance phase. As code and design documents are modified, it is necessary to be able to ascertain the complete set of requirements that may be affected by those modifications.

9. Design Independence: There should be an option to select from multiple design alternatives for the final system. More specifically, the SRS should not contain any implementation details.

10. Testability: An SRS should be written in such a manner that it is simple to generate test cases and test plans from the document.

11. Understandable by the customer: An end user may be an expert in his/her explicit domain but might not be trained in computer science. Hence, the use of formal notations and symbols should be avoided as much as possible. The language should be kept simple and clear.

12. The right level of abstraction: If the SRS is written for the requirements stage, the details should be explained explicitly. Whereas, for a feasibility study, less detail can be used. Hence, the level of abstraction varies according to the objective of the SRS.

Properties of a good SRS document
The essential properties of a good SRS document are the following:
1. Z Notation:
o Description: A formal specification language used for describing and modeling computing
systems.
o Use Case: Defining data and operations of systems rigorously.
2. B-Method:
o Description: Focuses on the specification, design, and verification of software through a
mathematical approach.
o Use Case: Formal development and verification of software components.
3. VDM (Vienna Development Method):
o Description: Provides a framework for developing precise and abstract models of software
systems.
o Use Case: Specifying and modeling software and system requirements.
4. Alloy:
o Description: A lightweight modeling language for software design that uses a relational model to
describe structures and behaviors.
o Use Case: Analyzing complex system structures and their properties.
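Languages such as Alloy analyze a model by exhaustively checking all small instances against stated properties ("small-scope" analysis). The following Python sketch merely imitates that idea on a toy structure; the property and structure are hypothetical, and this is not Alloy syntax:

# Enumerate every directed graph on 3 nodes and search for an instance that
# satisfies two properties: no node links to itself, and every node has an edge.
from itertools import product

nodes = range(3)
pairs = [(a, b) for a in nodes for b in nodes]
for bits in product([0, 1], repeat=len(pairs)):
    edges = {p for p, bit in zip(pairs, bits) if bit}
    no_self = all(a != b for a, b in edges)
    total = all(any(a == n for a, _ in edges) for n in nodes)
    if no_self and total:
        print("satisfying instance:", sorted(edges))
        break  # an analyzer would report this instance as a witness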
UNIT-III

Software Design:
The design phase of software development deals with transforming the customer requirements as described in the SRS document into a form implementable using a programming language. The software design process can be divided into the following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design

Elements of a System
1. Architecture: This is the conceptual model that defines the structure, behavior, and views of a system. We can use flowcharts to represent and illustrate the architecture.
2. Modules: These are components that handle one specific task in a system. A combination of the modules makes up the system.
3. Components: These provide a particular function or group of related functions. They are made up of modules.
4. Interfaces: This is the shared boundary across which the components of a system exchange information and relate.
5. Data: This is the management of the information and data flow.

Interface Design
Interface design is the specification of the interaction between a system and its environment. This phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e., during interface design the internals of the system are completely ignored, and the system is treated as a black box. Attention is focused on the dialogue between the target system and the users, devices, and other systems with which it interacts. The design problem statement produced during the problem analysis step should identify the people, other systems, and devices which are collectively called agents.
Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents, to which the system must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data, coming into and going out of the system.
4. Specification of the ordering and timing relationships between incoming events or messages and outgoing events or outputs.

Architectural Design
Architectural design is the specification of the major components of a system, their responsibilities, properties, interfaces, and the relationships and interactions between them. In architectural design, the overall structure of the system is chosen, but the internal details of major components are ignored. Issues in architectural design include:
1. Gross decomposition of the system into major components.
2. Allocation of functional responsibilities to components.
3. Component interfaces.
4. Component scaling and performance properties, resource consumption properties, reliability properties, and so forth.
5. Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of the internals of the major components is ignored until the last phase of the design.
Detailed Design
Detailed design is the specification of the internal elements of all major system
components, their properties, relationships, processing, and often their algorithms and
the data structures. The detailed design may include:
Coupling and Cohesion:
The fundamental goal of software design is to structure the software product so that the number and complexity of interconnections between modules is minimized.
The strength of coupling between two modules is influenced by the complexity of the interface, the type of connection, and the type of communication. Obvious relationships result in less complexity.
Loosely coupled modules have connections established by referring to other modules. Connections between modules involve passing of data or passing of control elements (flags, switches, labels and procedure names). The degrees of coupling are:
lowest - data communication
higher - control communication
highest - modification of other modules.
Coupling can be ranked as follows (a code sketch contrasting two of these grades follows this section):
a. Content coupling: one module modifies local data values or instructions in another module.
b. Common coupling: modules are bound together by global data structures.
c. Control coupling: involves passing of control flags between modules, so that one module controls the sequence of processing steps in another module.
d. Stamp coupling: similar to common coupling, except that global data items are shared selectively among routines that require the data.
e. Data coupling: involves the use of parameter lists to pass data items between routines.
The most desirable form of coupling between modules is a combination of stamp and data coupling.

• Internal cohesion of a module is measured in terms of the strength of binding of elements within the module.
• Cohesion of elements occurs on a scale from weakest to strongest as follows:
a. Coincidental cohesion: the module is created from a group of unrelated instructions that appear several times in other modules.
b. Logical cohesion: implies some relationship among the elements of the module. Ex: a module performs all I/O operations.
c. Temporal cohesion: all elements are executed at one time and no parameter or logic is required to determine which elements to execute.
d. Communicational cohesion: elements refer to the same set of input or output data. Ex: "print and punch the output file" is communicationally bound.
e. Sequential cohesion: occurs when the output of one element is the input for the next element. Ex: "read next transaction and update master file".
f. Functional cohesion: a strong type of binding of elements in a module, because all elements are related to the performance of a single function. Ex: compute square root, obtain random number, etc.
g. Informational cohesion: occurs when the module contains a complex data structure and several routines to manipulate the data structure.

Modules and modularization criteria:
▪ Cohesion – Cohesion or internal cohesion of a module is measured in terms of the strength of binding of elements within the module. The following are the various cohesion mechanisms –
– Coincidental Cohesion – occurs when the elements within a module have no apparent relationship to one another. Ex: common control blocks, common data blocks, common overlay regions in memory.
– Logical Cohesion – refers to some relationship among the elements of the module, such as those that perform all input/output operations or those that edit or validate data. Logically bound modules often combine several related functions in a complex and interrelated fashion.
– Temporal Cohesion – forms complex connections as logical cohesion does, but is on a higher scale of binding, since all elements are executed at one time and no parameter or logic is required to determine which elements to execute, such as in the case of a module performing program initialization.
– Communicational Cohesion – refers to the same set of input/output data, and the binding is higher on the binding scale than temporal binding, since the elements are executed at one time and also refer to the same data.
– Sequential Cohesion – occurs when the output of one element is the input of another element. Sequential cohesion has higher binding levels, since the module structure usually represents the problem structure.
– Functional Cohesion – is a situation in which every element functions towards the performance of a single task, such as in the case of the data elements of a method performing sqrt().
– Informational Cohesion – of elements in a module occurs when a complex data structure is manipulated by several routines in that module. Also, each routine in the module exhibits functional binding.
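The following hypothetical Python fragment contrasts two grades from the coupling ranking above: control coupling, where the caller passes a flag that steers the callee's logic, and data coupling, where each routine receives only the data it needs:

# Control coupling: the caller's flag selects the processing steps in the callee.
def render(record, as_html):
    return f"<p>{record}</p>" if as_html else str(record)

# Data coupling: each routine receives only data, and the caller chooses the routine.
def render_text(record):
    return str(record)

def render_html(record):
    return f"<p>{record}</p>"

Data coupling is preferred because the callee no longer depends on a control decision made elsewhere.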
Design notations:
Dynamic:
Data flow diagrams (DFDs).
State transition diagrams (STDs).
State charts.
Structure diagrams.
Static:
Entity Relationship Diagrams (ERDs).
Class diagrams.
Structure charts.
Object diagrams.

Data flow diagrams (DFDs):
There are extensions for real-time systems that distinguish control flow from data flow.
DFDs: diagrammatic elements
• External entity: a producer or consumer of information that resides outside the bounds of the system to be modeled.
• Process: a transformation of information (a function) that resides within the bounds of the system to be modeled.
• Data flow: a data object; the arrowhead indicates the direction of data flow.
• Data store: a repository of data that is to be stored for use by one or more processes; may be as simple as a buffer or queue or as sophisticated as a relational database.
State Transition Diagrams (STDs):
Describe one or more underlying processes.

State charts:
Developed by David Harel.
A generalization of STDs: states can have zero, one, two or more STDs contained within.
Related to Petri nets.
Higraph-based diagrammatic notation:
Labeled nodes correspond to states.
Arcs correspond to transitions.
Arcs are labeled with events and actions (actions can cause further events to occur).
Structure Diagrams :
Used in Jackson Structured Programming.
Used to describe several kinds of things.
Ordered hierarchical structure.
Sequential processing.
Based on the idea of regular languages.
Sequencing.
Selection.
Iteration.
Entity Relationship Diagrams (ERDs):
Structure Charts
Based on the fundamental notion of a module.
Used in structured systems analysis/structured design (SSA/SD).
Graph–based diagrammatic notation:
a structure chart is a collection of one or more node labeled rooted directed acyclic graphs.
Each graph is a process.
Nodes and modules are synonymous.
A directed edge from module M1 to module M2 captures the fact that M1 directly uses in
some way the services provided by M2.
Definitions: The fan-in of a module is the count of the number of arcs directed toward the
module. The fan-out of a module is the count of the number of arcs outgoing from the
module.
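A minimal Python sketch computing fan-in and fan-out from a structure chart represented as (caller, callee) edges (the module names are hypothetical):

from collections import Counter

uses = [("main", "read"), ("main", "sort"), ("main", "write"),
        ("sort", "compare"), ("write", "format"), ("read", "format")]

fan_out = Counter(m for m, _ in uses)   # arcs outgoing from each module
fan_in = Counter(m for _, m in uses)    # arcs directed toward each module
print(fan_out["main"], fan_in["format"])  # 3 and 2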
Strategy of Design:
A good system design strategy is to organize the program modules in such a way that they are easy to develop and, later, to change. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be composed and how pieces of code should fit together to form a program.
To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach

1. Top-down Approach: This approach starts with the identification of the main components and then decomposes them into their more detailed sub-components.
We know that a system is composed of more than one sub-system, and it contains a number of components. Further, these sub-systems and components may have their own sets of sub-systems and components, which creates a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then decomposes it to achieve more than one sub-system or component based on some characteristics. Each sub-system or component is then treated as a system and decomposed further. This process keeps on running until the lowest level of system in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of the system and keeps on defining the more specific parts of it. When all components are composed, the whole system comes into existence.
Top-down design is more suitable when the software solution needs to be designed from scratch and specific details are unknown.

2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy, as shown in the figure. This approach is suitable in the case of an existing system.
The bottom-up design model starts with the most specific and basic components. It proceeds with composing higher levels of components by using basic or lower-level components. It keeps creating higher-level components until the desired system is evolved as one single component. With each higher level, the amount of abstraction is increased.
Bottom-up strategy is more suitable when a system needs to be created from some existing system, where the basic primitives can be used in the newer system.

Both top-down and bottom-up approaches are not practical individually. Instead, a good combination of both is used.

Walkthrough
Walkthrough is a method of conducting an informal group/individual review. In a walkthrough, the author describes and explains a work product in an informal meeting to his peers or supervisor to get feedback. Here, the validity of the proposed solution for the work product is checked.
• It is cheaper to make changes when the design is on paper rather than at the time of conversion. Walkthrough is a static method of quality assurance. Walkthroughs are informal meetings, but with a purpose.

INSPECTION
• An inspection is defined as a formal, rigorous, in-depth group review designed to identify problems as close to their point of origin as possible. Inspections improve the reliability, availability, and maintainability of a software product.
• Anything readable that is produced during software development can be inspected. Inspections can be combined with structured, systematic testing to provide a powerful tool for creating defect-free programs.
• Inspection activity follows a specified process, and participants play well-defined roles. An inspection team consists of three to eight members who play the roles of moderator, author, reader, recorder and inspector.

UNIT-IV

USER INTERFACE DESIGN:
The user interface is the front-end application view with which the user interacts in order to use the software. The user can manipulate and control the software as well as hardware by means of the user interface.
User interface design creates an effective communication medium between a human and a computer. The UI provides a fundamental platform for human-computer interaction. A good user interface should be:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a
command prompt, where the user types the command and feeds it to the
system. The user needs to remember the syntax of the command and its
use.
2. Graphical User Interface: Graphical User Interface provides a simple
interactive interface to interact with the system. GUI can be a
combination of both hardware and software. Using GUI, the user
interprets the software.
User Interface Design Process
The analysis and design process of a user interface is iterative and can be represented by a spiral model. The analysis and design process of a user interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill and knowledge, type of user, etc. Based on the user profile, users are grouped into categories, and from each category requirements are gathered. Based on the requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user performs to establish the goals of the system are identified, described and elaborated. The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to the interface?
3. Does the interface hardware accommodate space, light, or noise constraints?
4. Are there special human factors considerations driven by environmental factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions, i.e., the control mechanisms that enable the user to perform desired tasks; indicate how these control mechanisms affect the system; specify the action sequence of tasks and subtasks, also called a user scenario; and indicate the state of the system when the user performs a particular task. Always follow the three golden rules stated by Theo Mandel. Design issues such as response time, command and action structure, error handling, and help facilities are considered as the design model is refined. This phase serves as the foundation for the implementation phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that enables usage scenarios to be evaluated. As the iterative design process continues, a user interface toolkit that allows the creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment can be used to complete the construction of the interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should be able to perform tasks correctly and handle a variety of tasks. It should achieve all the user's requirements, and it should be easy to use and easy to learn. Users should accept the interface as a useful one in their work.
User Interface Design Golden Rules
The following are the golden rules stated by Theo Mandel that must be followed during the design of the interface.
Place the User in Control
1. Define the interaction modes in such a way that they do not force the user into unnecessary or undesired actions: The user should be able to enter and exit a mode with little or no effort.
2. Provide for flexible interaction: Different people will use different interaction mechanisms: some might use keyboard commands, some might use the mouse, some might use a touch screen, etc. Hence, all interaction mechanisms should be provided.
3. Allow user interaction to be interruptible and undoable: When a user is performing a sequence of actions, the user must be able to interrupt the sequence to do some other work without losing the work that has already been done. The user should also be able to undo an operation.
4. Streamline interaction as skill level advances and allow the interaction to be customized: Advanced or highly skilled users should be given the chance to customize the interface as they want; allowing different interaction mechanisms keeps the user from getting bored with the same interaction mechanism.
5. Hide technical internals from casual users: The user should not be aware of the internal technical details of the system; he should interact with the interface just to do his work.
6. Design for direct interaction with objects that appear on-screen: The user should be able to use and manipulate the objects that are present on the screen to perform a necessary task. This gives the user a feeling of easy control over the screen.
Reduce the User's Memory Load
1. Reduce demand on short-term memory: When users are involved in complex tasks, the demand on short-term memory is significant. The interface should be designed to reduce the need to remember previously done actions, given inputs, and results.
2. Establish meaningful defaults: An initial set of defaults should always be provided for the average user; if a user needs to add some new features, he should be able to add the required features.
3. Define shortcuts that are intuitive: Mnemonics, i.e., keyboard shortcuts for performing actions on the screen, should be usable by the user.
4. The visual layout of the interface should be based on a real-world metaphor: If what is represented on the screen is a metaphor for a real-world entity, users will understand it easily.
5. Disclose information in a progressive fashion: The interface should be organized hierarchically, i.e., on the main screen the information about a task, an object or some behavior should first be presented at a high level of abstraction. More detail should be presented after the user indicates interest with a mouse pick.
Make the Interface Consistent
1. Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide indicators consistently so that the user knows the context of the work being done. The user should also know from which page they navigated to the current page, and where they can navigate from the current page.
2. Maintain consistency across a family of applications: In the development of a set of applications, all of them should follow and implement the same design rules so that consistency is maintained among the applications.
3. If past interactive models have created user expectations, do not make changes unless there is a compelling reason: Once a particular interactive sequence has become standard (e.g., Ctrl+S to save a file), the user expects it in every application she encounters.
User interface design is a crucial aspect of software engineering, as it is the means by which users interact with software applications. A well-designed user interface can improve the usability and user experience of an application, making it easier to use and more effective.
Key Principles for Designing User Interfaces
1. User-centered design: User interface design should be focused on the needs and preferences of the user. This involves understanding the user's goals, tasks, and context of use, and designing interfaces that meet their needs and expectations.
2. Consistency: Consistency is important in user interface design, as it helps users to understand and learn how to use an application. Consistent design elements such as icons, color schemes, and navigation menus should be used throughout the application.
3. Simplicity: User interfaces should be designed to be simple and easy to use, with clear and concise language and intuitive navigation. Users should be able to accomplish their tasks without being overwhelmed by unnecessary complexity.
4. Feedback: Feedback is significant in user interface design, as it helps users to understand the results of their actions and confirms that they are making progress towards their goals. Feedback can take the form of visual cues, messages, or sounds.
5. Accessibility: User interfaces should be designed to be accessible to all users, regardless of their abilities. This involves considering factors such as color contrast, font size, and assistive technologies such as screen readers.
6. Flexibility: User interfaces should be designed to be flexible and customizable, allowing users to tailor the interface to their own preferences and needs.
Real-time systems:
A real-time system is one that is subject to real-time constraints, i.e., the response should be guaranteed within a specified timing constraint, or the system should meet the specified deadline. Examples: flight control systems, real-time monitors, etc.
Types of real-time systems based on timing constraints:
1. Hard real-time system: This type of system can never miss its deadline. Missing the deadline may have disastrous consequences. The usefulness of results produced by a hard real-time system decreases abruptly and may become negative if tardiness increases. Tardiness means how late a real-time system completes its task with respect to its deadline. Example: flight controller system.
2. Soft real-time system: This type of system can miss its deadline occasionally, with some acceptably low probability. Missing the deadline has no disastrous consequences. The usefulness of results produced by a soft real-time system decreases gradually with an increase in tardiness. Example: telephone switches.
3. Firm real-time system: These are systems that lie between hard and soft real-time systems. In firm real-time systems, missing a deadline is tolerable, but the usefulness of the output decreases with time. Examples of firm real-time systems include online trading systems, online auction systems, and reservation systems.
Reference model of the real-time system:
Our reference model is characterized by three elements:
1. A workload model: It specifies the application supported by the system.
2. A resource model: It specifies the resources available to the application.
3. Algorithms: They specify how the application system will use the resources.
Terms related to real-time systems:
1. Job: A job is a small piece of work that can be assigned to a processor and may or may not require resources.
2. Task: A set of related jobs that jointly provide some system functionality.
3. Release time of a job: The time at which the job becomes ready for execution.
4. Execution time of a job: The time taken by the job to finish its execution.
5. Deadline of a job: The time by which a job should finish its execution. A deadline is of two types: absolute deadline and relative deadline.
6. Response time of a job: The length of time from the release time of a job to the instant when it finishes.
7. The maximum allowable response time of a job is called its relative deadline.
8. The absolute deadline of a job is equal to its relative deadline plus its release time (a short numeric sketch of these timing terms follows after this section).
9. Processors are also known as active resources. They are essential for the execution of a job: a job must have one or more processors in order to execute and proceed towards completion. Examples: computers, transmission links.
10. Resources are also known as passive resources. A job may or may not require a resource during its execution. Examples: memory, mutexes.
11. Two resources are identical if they can be used interchangeably; otherwise, they are heterogeneous.
Advantages:
• Real-time systems provide immediate and accurate responses to external events, making them suitable for critical applications such as air traffic control, medical equipment, and industrial automation.
• They can automate complex tasks that would otherwise be impossible to perform manually, thus improving productivity and efficiency.
• Real-time systems can reduce human error by automating tasks that require precision, accuracy, and consistency.
• They can help to reduce costs by minimizing the need for human intervention and reducing the risk of errors.
• Real-time systems can be customized to meet specific requirements, making them ideal for a wide range of applications.
Disadvantages:
• Real-time systems can be complex and difficult to design, implement, and test, requiring specialized skills and expertise.
• They can be expensive to develop, as they require specialized hardware and software components.
• Real-time systems are typically less flexible than other types of computer systems, as they must adhere to strict timing requirements and cannot be easily modified or adapted to changing circumstances.
• They can be vulnerable to failures and malfunctions, which can have serious consequences in critical applications.
• Real-time systems require careful planning and management, as they must be continually monitored and maintained to ensure they operate correctly.
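The timing terms above can be tied together in a few lines; the following Python sketch uses invented numbers purely for illustration:

    # Job timing: absolute deadline = release time + relative deadline,
    # response time = finish time - release time.
    release_time = 10       # job becomes ready at t = 10
    execution_time = 4      # processor time the job needs
    relative_deadline = 8   # maximum allowable response time

    absolute_deadline = release_time + relative_deadline   # 18
    start_time = 12                                        # when the scheduler dispatches the job
    finish_time = start_time + execution_time              # 16
    response_time = finish_time - release_time             # 6

    # A hard real-time system would treat a missed deadline as a failure.
    print("meets deadline:", finish_time <= absolute_deadline)   # True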
HUMAN FACTORS:
Human factors are imperative for the design and development of any software work; this presents the underlying idea for incorporating these factors into the software life cycle. Many giant companies have come to recognise that the success of a product depends upon a solid human factors design. Human factors discovers and applies information about human behaviour, abilities, limitations and other characteristics to the design of tools, machines, systems, tasks, jobs and environments for productive, safe, comfortable and effective human use.
The study of human factors is essential for every software manager, since he or she must be acquainted with how the staff members interact with each other. Generally, software products are used by a variety of people, and it is necessary to take into account the abilities of such a group to make the software more useful and popular.
Objective of human factors design:
The purpose of human factors design is to create products that meet the operability and learnability goals. The design should meet the user's needs by being effective, efficient and of high quality, while keeping an eye on the major concern of the customer in most cases: affordability.
The engineering discipline for designers and developers must focus on the following:
• Users and their psychology.
• The amount of work that the user must do, including task goals, performance requirements and group communication requirements.
• Quality and performance.
• Information required by users and their job.
Benefits:
• Elevated user satisfaction.
• Decreased training time and costs.
• Reduced operator stress.
• Reduced product liability.
• Decreased operating costs.
• Less operational error.
Based approach to human factors:
People often do not take human factors seriously, because they are often regarded as common sense. Many companies heavily channel their resources and time towards factors of software development like planning, management and control. They often neglect the fact that they must present their product in such a way that it is easy to learn and use, and that it should be aesthetic in nature.
Interface designers and engineering psychologists apply systematic human factors techniques to produce designs for hardware and software.
A systematic approach is required in the design process in human factors design, and thus usability is required.
Usability is a software quality characteristic that surveys software usability costs and benefits; it can simply be defined as an external attribute of software quality. Involving users in the development life cycle ensures that the product is user friendly and is widely accepted.
Usability aims at the following:
• Shortening the time to accomplish tasks.
• Reducing the number of mistakes made.
• Reducing learning time.
• Improving people's satisfaction with a system.
Benefits of usability:
• Elevated sales and consumer satisfaction.
• Increased productivity and efficiency.
• Decreased training costs and time.
• Lower support and maintenance costs.
• Reduced documentation and support costs.
• Increased satisfaction, performance and productivity.
For a software product to be successful with the customer, a software engineer needs to develop the product in such a way that it is easy to understand, learn and use; human factors play a very important role in the software life cycle.
A software engineer must always keep in mind the end user who is going to use the product, should make things as simple as possible and provide the best, while at the same time not being too hard on the user's pocket. Usability testing deals with the effective design of a product.
Human-Computer Interaction:
The Human-Computer Interaction (HCI) program will play a leading role in the creation of tomorrow's exciting new user interface design software and technology, by supporting the broad spectrum of fundamental research that will ultimately transform the human-computer interaction experience, so that the computer is no longer a distracting focus of attention.
Computer:
A computer system comprises various elements, each of which affects the user of the system.
Input devices for interactive use allow text entry, drawing and selection from the screen:
➢ Text entry: traditional keyboard, phone text entry.
➢ Pointing: mouse, but also touch pads.
Output display devices for interactive use:
➢ Different types of screen, mostly using the same form of bitmap display.
➢ Large displays and situated displays for shared and public use.
Memory:
Short-term memory: RAM.
Long-term memory: magnetic and optical disks; capacity limitations related to document and video storage.
Processing:
The effects when systems run too slow or too fast; the myth of the infinitely fast machine. Limitations of processing speed.
Instead of workstations, computers may be in the form of embedded computational machines, such as parts of microwave ovens. Because the techniques for designing these interfaces bear so much relationship to the techniques for designing workstation interfaces, they can be profitably treated together. Human-computer interaction, by contrast, studies both the mechanism side and the human side, but of a narrower class of devices.
Human:
Humans are limited in their capacity to process information. This has important implications for design. Information is received and responses are given via a number of input and output channels:
➢ Visual channel.
➢ Auditory channel.
➢ Movement.
Information is stored in memory:
➢ Sensory memory.
➢ Short-term memory.
➢ Long-term memory.
Information is processed and applied:
➢ Reasoning.
➢ Problem solving.
➢ Error.
Interaction:
The communication between the user and the system: the interaction framework has four parts:
1. User
2. Input
3. System
4. Output
Interaction models help us to understand what is going on in the interaction between user and system. They address the translations between what the user wants and what the system does.
Human-computer interaction is concerned with the joint performance of tasks by humans and machines; the structure of communication between human and machine; and human capabilities to use machines.
The goals of HCI are to produce usable and safe systems, as well as functional systems. In order to produce computer systems with good usability, developers must attempt to:
➢ Understand the factors that determine how people use technology.
➢ Develop tools and techniques to enable the building of suitable systems.
➢ Achieve efficient, effective and safe interaction.
➢ Put people first.
HCI arose as a field from intertwined roots in computer graphics, operating systems, human factors, ergonomics, cognitive psychology and the systems part of computer science.
A key aim of HCI is to understand how humans interact with computers, and to represent how knowledge is passed between the two.
Interaction styles:
Interaction can be seen as a dialogue between the computer and the user. Some applications have very distinct styles of interaction. We can identify some common styles:
• Command line interface
• Menus
• Natural language
• Form-fills and spreadsheets
• WIMP
Command line interface:
A way of expressing instructions to the computer directly; can use function keys, single characters, or short abbreviations.
➢ Suitable for repetitive tasks.
➢ Better for expert users than novices.
➢ Offers direct access to system functionality.
Menus:
A set of options displayed on the screen. Options are visible, so they demand less recall; they rely on recognition, so names should be meaningful. Selection is by mouse, numeric or alphabetic keys.
A menu system can be:
➢ Purely text based, with options presented as numbered choices, or
➢ Graphical, with the menu appearing in a box and choices made either by typing the initial letter or by moving around with arrow keys.
Form-filling interfaces:
➢ Primarily for data entry or data retrieval.
➢ Screen is like a paper form.
➢ Data are put in the relevant places.
WIMP interface:
➢ Windows
➢ Icons
➢ Menus
➢ Pointers
Windows: Areas of the screen that behave as if they were independent terminals.
• Can contain text or graphics.
• Can be moved or resized.
• Scroll bars allow the user to move the contents of the window up and down or from side to side.
• Title bars describe the name of the window.
Icons: Small pictures or images used to represent some object in the interface, often a window. Windows can be closed down to this small representation, allowing many windows to be accessible. Icons can be many and various: highly stylized or realistic representations.
Pointers: An important component, since the WIMP style relies on pointing at and selecting things such as icons and menu items.
➢ Usually achieved with a mouse.
➢ A wide variety of alternatives such as joysticks, track balls, cursor keys or keyboard shortcuts are also used.
Menus: A choice of operations or services that can be performed, offered on the screen; the required option is selected with the pointer.
➢ Problem – menus can take up a lot of screen space.
➢ Solution – use pull-down or pop-up menus.
➢ Pull-down menus are dragged down from a single title at the top of the screen.
➢ Pop-up menus appear when a particular region of the screen is clicked on.
Interaction devices:
Different tasks, different types of data and different types of users all require different user interface devices. In most cases, interface devices are either input or output devices; a touch screen, for example, combines both.
➢ Interface devices correlate to the human senses.
➢ Nowadays, a device is usually designed either for input or for output.
Input devices:
Most commonly, personal computers are equipped with text input and pointing devices. For text input, the QWERTY keyboard is the standard solution, though this depends on the purpose of the system. At the same time, the mouse is not the only imaginable pointing device: alternatives for similar but slightly different purposes include the touchpad, track ball and joystick.
Output devices:
Output from a personal computer in most cases means output of visual data. Devices for 'dynamic visualisation' include the traditional cathode ray tube (CRT) and the liquid crystal display (LCD). Printers are also a very important device for visual output, but differ substantially from screens in that their output is static.
The subject of HCI is very rich, both in terms of the disciplines it draws from and the opportunities for research. The study of user interfaces provides a double-sided approach to understanding how humans and machines interact: from studying human psychology, we can design better interfaces for people to interact with computers.
Human-Computer Interface Design:
The overall process for designing a user interface begins with the creation of different models. The intention of computer interface design is to learn the ways of designing user-friendly interfaces or interactions.
Interface Design Models:
Four different models come into play when a human-computer interface (HCI) is to be designed. The software engineer creates a design model; a human engineer (or the software engineer) establishes a user model; the end user develops a mental image that is often called the user's model or the system perception; and the implementers of the system create a system image.
Task Analysis and Modelling:
Task analysis and modelling can be applied to understand the tasks that people currently perform and map these into a similar set of tasks.
For example, assume that a small software company wants to build a computer-aided design system explicitly for interior designers. By observing a designer at work, the engineer notices that interior design comprises a number of activities: furniture layout, fabric and material selection, wall and window covering selection, presentation, costing and shopping. Each of these major tasks can be elaborated into subtasks. For example, furniture layout can be refined into the following tasks:
(1) Draw floor plan based on room dimensions;
(2) Place windows and doors at appropriate locations;
(3) Use furniture templates to draw scaled furniture outlines on floor plan;
(4) Move furniture outlines to get best placement;
(5) Label all furniture outlines;
(6) Draw dimensions to show location; and
(7) Draw perspective view for customer.
Subtasks 1 to 7 can each be refined further. Subtasks 1 to 6 will be performed by manipulating information and performing actions within the user interface. On the other hand, subtask 7 can be performed automatically in software and will result in little direct user interaction.
Design issues:
As the design of a user interface evolves, four common design issues almost always surface: system response time, user help facilities, error information handling, and command labelling.
System response time is the primary complaint for many interactive systems. In general, system response time is measured from the point at which the user performs some control action until the software responds with the desired output or action.
System response time has two important characteristics: length and variability. If the system response time is too long, user frustration and stress are the inevitable result. Variability refers to the deviation from the average response time, and in many ways it is the more important of the two response time characteristics.
In many cases, modern software provides on-line help facilities that enable a user to get a question answered or resolve a problem without leaving the interface.
Two different types of help facilities are encountered: integrated and add-on. An integrated help facility is designed into the software from the beginning. An add-on help facility is added to the software after the system has been built; in many ways, it is really an on-line user's manual with limited query capability. There is little doubt that the integrated help facility is preferable to the add-on approach.
A poor error message provides no real indication of what is wrong or where to look for additional information, and does nothing to assuage user anxiety or to help correct the problem. A good error message has the following characteristics:
• The message should describe the problem in jargon that the user can understand.
• The message should provide constructive advice for recovering from the error.
• The message should indicate any negative consequences of the error.
Implementation Tools:
The process of user interface design is iterative: a design model is implemented as a prototype and modified based on user comments. To accommodate this iterative design approach, a broad class of interface design and prototyping tools has evolved, called user interface toolkits. These tools provide routines or objects that facilitate the creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment.
Design Evaluation:
After the preliminary design has been completed, an operational user interface prototype is created. The prototype is evaluated by the user, who provides the designer with direct comments about the efficiency of the interface. In addition, if formal evaluation techniques are used (e.g., questionnaires, rating sheets), the designer may extract information from them (e.g., 80 percent of all users did not like the mechanism for saving data files).
Design modifications are made based on user input, and the next-level prototype is created. The evaluation cycle continues until no further modifications to the interface design are necessary.
Interface design:
Interface design is one of the most important parts of software design. It is crucial in the sense that user interaction with the system takes place through the various interfaces provided by the software product.
Think of the days of text-based systems, where the user had to type a command on the command line to execute a simple task.
Example of a command line interface:
• run prog1.exe /i=2 message=on
The above command line executes the program prog1.exe with input i=2 and with messages during execution set to on. Although such a command line interface gives the user the liberty to run a program with a concise command, it is difficult for a novice user and is error prone. It also requires the user to remember the commands, with their various options and details, for executing various tasks as shown above.
An example of a menu, with the option being asked from the user, is given in Figure 3.11 (not reproduced here). This simple menu allows the user to execute the program with the options available as a selection, and further has options for exiting the program and going back to the previous screen. Although it provides greater flexibility than the command line option and does not require the user to remember the commands, the user still cannot navigate to the desired option from this screen; at best, the user can go back to the previous screen to select a different option.
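Since Figure 3.11 is not reproduced here, a menu of this kind can be sketched in a few lines of Python (an illustrative stand-in for the figure, not the original):

    # A simple menu screen: the user picks from visible options instead of
    # remembering a command, but can only act on what this screen offers.
    while True:
        print("1. Run program")
        print("2. Go back to previous screen")
        print("3. Exit")
        choice = input("Select an option: ")
        if choice == "1":
            print("running program with the default options...")
        elif choice == "2":
            print("returning to the previous screen...")
            break
        elif choice == "3":
            break
        else:
            print("Invalid selection, please try again.")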
Modern graphical user interfaces provide tools for easy navigation and interactivity, allowing the user to perform different tasks.
The following are the advantages of a Graphical User Interface (GUI):
• Various pieces of information can be displayed, allowing the user to switch to a different task directly from the present screen.
• Useful graphical icons and pull-down menus reduce typing effort by the user.
• Keyboard shortcuts are provided to perform frequently performed tasks.
• Simultaneous operation of various tasks is possible without losing the present context.
Any interface design is targeted at users of different categories:
• Expert users with adequate knowledge of the system and application.
• Average users with reasonable knowledge.
• Novice users with little or no knowledge.
The following are the elements of good interface design:
• The goal and the intention of the task must be identified.
• The important thing about designing interfaces is maintaining consistency. Use of a consistent color scheme, messages and terminology helps.
• Develop standards for good interface design and stick to them.
• Use icons wherever possible to provide appropriate messages.
• Allow the user to undo the current command. This helps in undoing mistakes committed by the user.
• Provide context-sensitive help to guide the user.
• Use a proper navigational scheme for easy navigation within the application.
• Discuss with the current users to improve the interface.
• Think from the user's perspective.
• The text appearing on the screen is the primary source of information exchange between the user and the system. Avoid using abbreviations. Be very specific in communicating a mistake to the user; if possible, provide the reason for the error.
• Navigation within the screen is important, and is especially useful for data entry screens where the keyboard is used intensively to input data.
• Use of color should be of secondary importance. Keep in mind that a user may access the application on a monochrome screen.
• Expect the user to make mistakes, and provide appropriate measures to handle such errors through proper interface design.
• Grouping of data elements is important. Group related data items accordingly.
• Justify the data items.
• Avoid high-density screen layouts. Keep a significant amount of the screen blank.
• Make sure an accidental double click, instead of a single click, does not do something unexpected.
• Provide a file browser. Do not expect the user to remember the path of the required file.
• Provide keyboard shortcuts for frequently done tasks. This saves time.
• Provide an on-line manual to help the user in operating the software.
• Always allow a way out (i.e., cancellation of an action already initiated).
• Warn the user about critical tasks, like deletion of a file or updating of critical information.
• Programmers are not always good interface designers. Take the help of expert professionals who understand human perception better than programmers.
• Include all possible features in the application, even if the feature is available in the operating system.
• Word messages carefully in a user-understandable manner.
• Develop the navigational procedure prior to developing the user interface.
Interface standards:
A user interface is the system by which people (users) interact with a machine.
Why do we need standards?
➢ Despite the best efforts of HCI, we are still getting it wrong.
➢ We specify the system behaviour.
➢ We validate our specification.
➢ We test the code and prove the correctness of our system.
➢ It is not just a design issue or a usability testing issue.
Example:
1. Modelling a system which has user-controlled display options.
2. The user can select from one of three choices.
3. The choices determine the size of the current window display.
4. So the designers came up with a schema and presented a first prototype:
Select screen display: FULL / HALF / PANEL
Problem:
➢ User testing shows the system breaks when a user selects more than one option.
➢ The designer fixes it and presents a second prototype. But isn't this the original prototype? The designer has 'improved' it: the user can now only select one checkbox.
➢ The designer has broken the guidelines regarding selection controls.
Guidelines for using selection controls:
➢ Use radio buttons to indicate one or more options that must be either on or off, but which are mutually exclusive.
➢ Use checkboxes to indicate one or more options that must be either on or off, but which are not mutually exclusive.
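The selection-control guidelines above can be seen directly in code. The sketch below is a hypothetical example using Python's standard tkinter toolkit: the display-size choice is modelled with mutually exclusive radio buttons, and an unrelated on/off option with a checkbox.

    # Radio buttons share one variable, so FULL/HALF/PANEL are mutually
    # exclusive; the checkbox has its own variable and toggles independently.
    import tkinter as tk

    root = tk.Tk()
    root.title("Select screen display")

    display = tk.StringVar(value="FULL")
    for size in ("FULL", "HALF", "PANEL"):
        tk.Radiobutton(root, text=size, variable=display, value=size).pack(anchor="w")

    show_toolbar = tk.BooleanVar(value=True)
    tk.Checkbutton(root, text="Show toolbar", variable=show_toolbar).pack(anchor="w")

    root.mainloop()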
Extending the specification:
➢ The design must satisfy our specification.
➢ The design must also satisfy the guidelines.
➢ Find a way to specify the selection-widget guidelines.
➢ Ensure the described property holds in our system.
➢ So, the designers extend the specification and present a revised prototype.
➢ Present the user interface document.
• You present the UI document in electronic form or paper form.
History of user interface standards:
• In 1965, human factors specialists worked to make user interfaces accurate and easy to learn.
• In 1985, we realised that usability was not enough; we needed consistency, and standards became important.
• User interface standards are very effective when you are developing, testing or designing any new site or application, or when you are revising over 50 percent of the pages in an existing application or site.
Creating a user interface standard helps you to create user interfaces that are consistent and easy to understand.
Types of standards:
There are three types of standards:
Methodological standards: a checklist to remind developers of the tasks needed to create usable systems, such as user interviews, task analysis and design.
Design standards: the building code: a set of absolute legal requirements that ensure a consistent look and feel.
Design principles: good design principles are specific and research-based, and developers work well within the design standards rules.
Building the design standards:
The major activities when building these standards are:
➢ Project kick-off and planning
• You collaborate with key members of the project team to define the goals and scope of the user interface standards.
• This includes whether the UI document is to be considered a guideline, standard or style guide, which UI technology it will be based on, and who should participate in its development.
• You work closely with your team and other stakeholders to identify your key business needs and business flows.
➢ Gather user interface samples
• Based on the information and direction received from your team, you begin by reviewing your major business applications and extracting examples for the UI standard.
➢ Develop the user interface document
The document itself includes:
• How to change and update the document.
• Common UI elements and when to use them.
• General navigation, graphic look and feel (or style), error handling, and messages.
➢ Review with team
• This is an iterative process that takes feedback from as wide an audience as is appropriate.
• The standard is reviewed and refined with your team and stakeholders in a consensus-building process.
Benefits of standards:
1. The goal of UI design is to make user interaction as simple and efficient as possible.
2. Your users or customers see a consistent UI within and between applications.
3. Reduced costs for support, user training packages and job aids.
4. Most important, customer satisfaction: your users will reduce errors, training requirements, and frustration time per transaction.
5. Reduced cost and effort for system maintenance.
UNIT-V
What is Software Quality?
Software quality shows how good and reliable a product is. To give an example, consider functionally correct software: it performs all the functions laid out in the SRS document, but has an almost unusable user interface. Even though it is functionally correct, we do not consider it to be a high-quality product.
Software Quality Assurance (SQA):
Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the set of activities that ensure that processes, procedures, and standards are suitable for the project and implemented correctly.
Software Quality Assurance is a process that works in parallel with software development. It focuses on improving the process of development of software so that problems can be prevented before they become major issues. Software Quality Assurance is a kind of umbrella activity that is applied throughout the software process.
What is quality?
Quality in a product or service can be defined by several measurable characteristics, each of which plays a crucial role in determining the overall quality.
The SQA process comprises: specific quality assurance and quality control tasks (including technical reviews and a multi-tiered testing strategy); effective software engineering practice (methods and tools); control of all software work products and the changes made to them; a procedure to ensure compliance with software development standards (when applicable); and measurement and reporting mechanisms.
Elements of Software Quality Assurance (SQA)
1. Standards: The IEEE, ISO, and other standards organizations have produced a broad array of software engineering standards and related documents. The job of SQA is to ensure that standards that have been adopted are followed and that all work products conform to them.
2. Reviews and audits: Technical reviews are a quality control activity performed by software engineers for software engineers. Their intent is to uncover errors. Audits are a type of review performed by SQA personnel (people employed in an organization) with the intent of ensuring that quality guidelines are being followed for software engineering work.
3. Testing: Software testing is a quality control function that has one primary goal: to find errors. The job of SQA is to ensure that testing is properly planned and efficiently conducted toward the primary goal of the software.
4. Error/defect collection and analysis: SQA collects and analyzes error and defect data to better understand how errors are introduced and what software engineering activities are best suited to eliminating them.
5. Change management: SQA ensures that adequate change management practices have been instituted.
6. Education: Every software organization wants to improve its software engineering practices. A key contributor to improvement is the education of software engineers, their managers, and other stakeholders. The SQA organization takes the lead in software process improvement and is a key proponent and sponsor of educational programs.
7. Security management: SQA ensures that appropriate processes and technology are used to achieve software security.
8. Safety: SQA may be responsible for assessing the impact of software failure and for initiating the steps required to reduce risk.
Software Quality Assurance (SQA) Activities
Software Quality Assurance is composed of a variety of tasks associated with two different groups:
1. The software engineers who do technical work.
2. An SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
Among the items specified as part of SQA planning are:
• All the documents to be produced by the SQA group.
• The total amount of feedback provided to the software project team.
Software Quality Metrics
There are a number of metrics available based on which software quality can be measured. Among them, the most useful metrics, essential in software quality measurement, are:
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
1. Code Quality – Code quality metrics measure how well the code itself is written, for example in terms of its complexity and defect density.
2. Reliability – Reliability metrics express the reliability of the software in different conditions: whether the software is able to provide the exact service at the right time is checked. Reliability can be checked using Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR).
3. Performance – Performance metrics are used to measure the performance of the software. Each piece of software is developed for some specific purpose; performance metrics determine whether the software fulfils the user requirements by analyzing how much time and resource it utilizes to provide the service.
4. Usability – Usability metrics check whether the program is user-friendly or not. Each piece of software is used by an end user, so it is important to measure whether the end user is happy using the software.
5. Correctness – Correctness is one of the important software quality metrics, as it checks whether the system or software works correctly, without any error, satisfying the user. Correctness gives the degree of service each function provides as developed.
6. Maintainability – Every software product requires maintenance and upgradation. Maintenance is an expensive and time-consuming process, so if the software product provides easy maintainability, we can say that the software quality is up to the mark. Maintainability metrics include the time required to adapt to new features/functionality, Mean Time To Change (MTTC), and performance in changing environments.
7. Integrity – Software integrity concerns how easy it is to integrate the software with other required software, which increases software functionality, and how well integration from unauthorized software is controlled, since such integration increases the chances of cyberattacks.
8. Security – Security metrics measure how secure the software is. In the age of cyber terrorism, security is the most essential part of every piece of software. Security assures that there are no unauthorized changes and no fear of cyber attacks when the software product is in use by the end user.
Factors Influencing Software Reliability
• A user's perception of the reliability of software depends upon two categories of information:
o The number of faults present in the software.
o The way users operate the system. This is known as the operational profile.
• The fault count in a system is influenced by the following:
o Size and complexity of the code.
o Characteristics of the development process used.
o Education, experience, and training of the development personnel.
o The operational environment.
Applications of Software Reliability
The applications of software reliability include:
• Comparison of software engineering technologies.
o What is the cost of adopting a technology?
o What is the return from the technology, in terms of cost and quality?
• Measuring the progress of system testing – The failure intensity measure tells us about the present quality of the system: high intensity means more tests are to be performed.
• Controlling the system in operation – The amount of change made to software for maintenance affects its reliability.
• Better insight into software development processes – Quantification of quality gives us a better insight into the development processes.
SOFTWARE RELIABILITY
Software reliability is defined as the probability of failure-free operation of a software system for a specified time in a specified environment. The key elements of this definition are the probability of failure-free operation, the length of time of failure-free operation, and the given execution environment. Failure intensity is a measure of the reliability of a software system operating in a given environment.
SYSTEM RELIABILITY SPECIFICATION
• Hardware reliability focuses on the probability that a hardware component fails.
• Software reliability focuses on the probability that a software component will produce an incorrect output.
• Software does not wear out, and it can continue to operate after a bad result.
• Operator reliability focuses on the probability that a system user makes an error.
FAILURE PROBABILITIES
If there are two independent components in a system and the operation of the system depends on them both, then P(S) = P(A) + P(B). If the components are replicated, then the probability of failure is P(S) = P(A)^n, which means the system fails only when all n components fail at once.
FUNCTIONAL RELIABILITY REQUIREMENTS
• The system will check all operator inputs to see that they fall within their required ranges.
• The system will check all disks for bad blocks each time it is booted.
• The system must be implemented using a standard implementation of Ada.
NON-FUNCTIONAL RELIABILITY SPECIFICATION
The required level of reliability must be expressed quantitatively. Reliability is a dynamic system attribute. Source code reliability specifications are meaningless (e.g., N faults/1000 LOC). An appropriate metric should be chosen to specify the overall system reliability.
FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS
System functional requirements may specify error checking, recovery features, and system failure protection. System reliability and availability are specified as part of the non-functional requirements for the system.
Example: An air traffic control system fails once in two years.
HARDWARE RELIABILITY METRICS
Hardware metrics are not suitable for software, since they are based on the notion of component failure. Software failures are often design failures. Often the system is available after the failure has occurred. Hardware components can wear out.
TIME UNITS
Time units include:
• Raw execution time, which is employed in non-stop systems.
• Calendar time, which is employed when the system has regular usage patterns.
• Number of transactions, which is employed for demand-type transaction systems.
AVAILABILITY
Availability measures the fraction of time the system is really available for use. It takes repair and restart times into account. It is relevant for non-stop, continuously running systems (e.g., traffic signals).
SOFTWARE RELIABILITY METRICS
Reliability metrics are units of measure for system reliability. System reliability is measured by counting the number of operational failures and relating these to the demands made on the system at the time of failure. A long-term measurement program is required to assess the reliability of critical systems.
PROBABILITY OF FAILURE ON DEMAND
The probability that the system will fail when a service request is made. It is useful when requests are made on an intermittent or infrequent basis. It is appropriate for protection systems where service requests may be rare and consequences can be serious if the service is not delivered. It is relevant for many safety-critical systems with exception handlers.
RELIABILITY METRICS
• Probability of Failure on Demand (PoFoD)
o PoFoD = 0.001 means that for one in every 1000 requests, the service fails.
• Rate of Fault Occurrence (RoCoF)
o RoCoF = 0.02 means two failures for each 100 operational time units.
• Mean Time to Failure (MTTF)
o The average time between observed failures (also known as MTBF).
o It measures the time between observable system failures.
o For stable systems, MTTF = 1/RoCoF.
o It is relevant for systems where individual transactions take a lot of processing time (e.g., CAD or word-processing systems).
• Availability = MTBF / (MTBF + MTTR)
o MTBF = Mean Time Between Failures
o MTTR = Mean Time To Repair
• Reliability = MTBF / (1 + MTBF)
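As a rough numeric illustration of these metrics (a hypothetical Python sketch; the failure-log figures are invented to match the examples above):

    # Simple reliability metrics computed from an invented failure log.
    demands = 10000                 # service requests observed
    failed_demands = 10
    pofod = failed_demands / demands           # 0.001: one failure per 1000 requests

    operational_time = 100.0        # operational time units observed
    failures = 2
    rocof = failures / operational_time        # 0.02: two failures per 100 time units
    mttf = 1 / rocof                           # for stable systems, MTTF = 1/RoCoF = 50

    mtbf = 50.0                     # mean time between failures
    mttr = 2.0                      # mean time to repair
    availability = mtbf / (mtbf + mttr)        # fraction of time the system is usable

    print(pofod, rocof, mttf, round(availability, 3))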
BUILDING RELIABILITY SPECIFICATION
Building a reliability specification involves analysing the consequences of possible system failures for each sub-system. From the system failure analysis, partition the failures into appropriate classes, and for each class set out the appropriate reliability metric.
SPECIFICATION VALIDATION
It is impossible to empirically validate high-reliability specifications. "No database corruption" really means a PoFoD of less than 1 in 200 million. If each transaction takes 1 second to verify, simulation of one day's transactions takes 3.5 days.
FAILURE CONSEQUENCES – STUDY 1
Reliability does not take consequences into account. Transient faults may have no real consequences, but other faults might cause data loss or corruption. Hence, it may be worthwhile to identify different classes of failure and use different metrics for each.
FAILURE CONSEQUENCES – STUDY 2
When specifying reliability, both the number of failures and the consequences of each matter. Failures with serious consequences are more damaging than those where repair and recovery are straightforward. In some cases, different reliability specifications may be defined for different failure types.
FAILURE CLASSIFICATION
Failures can be classified as follows:
• Transient – only occurs with certain inputs.
• Permanent – occurs on all inputs.
• Recoverable – the system can recover without operator help.
• Unrecoverable – the operator has to help.
• Non-corrupting – the failure does not corrupt the system state or data.
• Corrupting – the system state or data are altered.
Software testing:
Software testing is an important process in the software development lifecycle. It involves verifying and validating that a software application is free of bugs, meets the technical requirements set by its design and development, and satisfies user requirements efficiently and effectively.
This process ensures that the application can handle all exceptional and boundary cases, providing a robust and reliable user experience. By systematically identifying and fixing issues, software testing helps deliver high-quality software that performs as expected in various scenarios.
Software testing is a method to assess the functionality of a software program. The process checks whether the actual software matches the expected requirements and ensures the software is bug-free. The purpose of software testing is to identify errors, faults, or missing requirements in contrast to the actual requirements. It mainly aims at measuring the specification, functionality, and performance of a software program or application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software correctly implements a specific function. It asks: "Are we building the product right?"
2. Validation: It refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements. It asks: "Are we building the right product?"
Different Types of Software Testing
Software testing can be broadly classified into 3 types:
1. Functional testing: A type of software testing that validates the software system against the functional requirements. It is performed to check whether the application works as per the software's functional requirements. Various types of functional testing are unit testing, integration testing, system testing, smoke testing, and so on.
2. Non-functional testing: A type of software testing that checks the application for non-functional requirements like performance, scalability, portability, stress, etc. Various types of non-functional testing are performance testing, stress testing, usability testing, and so on.
3. Maintenance testing: The process of changing, modifying, and updating the software to keep up with the customer's needs. It involves regression testing, which verifies that recent changes to the code have not adversely affected other previously working parts of the software.
Apart from the above classification, software testing can be further divided into 2 more ways of testing:
1. Manual testing: It includes testing software manually, i.e., without using any automation tool or script. In this type, the tester takes over the role of an end user and tests the software to identify any unexpected behaviour or bug. There are different stages of manual testing, such as unit testing, integration testing, system testing, and user acceptance testing. Testers use test plans, test cases, or test scenarios to test software and to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.
2. Automation testing: Also known as test automation, this is when the tester writes scripts and uses other software to test the product. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios that were performed manually in manual testing. Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases the test coverage, improves accuracy, and saves time and money when compared to manual testing.
Different Types of Software Testing Techniques
Software testing techniques can be majorly classified into the following categories:
1. Black box testing: Testing in which the tester does not have access to the source code of the software; it is conducted at the software interface, without any concern for the internal logical structure of the software.
2. White box testing: Testing in which the tester is aware of the internal workings of the product and has access to its source code; it is conducted by making sure that all internal operations are performed according to the specifications.
3. Grey box testing: Testing in which the testers have some knowledge of the implementation, but need not be experts.
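A minimal illustration of the black-box view (a hypothetical sketch using Python's built-in unittest; the leap_year function is invented for the example): the tests exercise the function only through its interface. A white-box test of the same function would instead be derived from its internal branches, to make sure each one is executed.

    # Black-box tests: only inputs and expected outputs are used; the internal
    # structure of leap_year() is never inspected.
    import unittest

    def leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearBlackBoxTest(unittest.TestCase):
        def test_typical_years(self):
            self.assertTrue(leap_year(2024))
            self.assertFalse(leap_year(2023))

        def test_century_boundaries(self):
            # Years divisible by 100 are leap years only if divisible by 400.
            self.assertFalse(leap_year(1900))
            self.assertTrue(leap_year(2000))

    if __name__ == "__main__":
        unittest.main()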
Different Levels of Software Testing find a set of linearly independent paths of execution. In this method,
Cyclomatic Complexity is used to determine the number of linearly
Software level testing can be majorly classified into 4 levels: independent paths and then test cases are generated for each path.
It gives complete branch coverage but achieves that without covering all
1. Unit testing : It a level of the software testing process where individual possible paths of the control flow graph. McCabe’s Cyclomatic
units/components of a software/system are tested. The purpose is to Complexity is used in path testing. It is a structural testing method that
validate that each unit of the software performs as designed. uses the source code of a program to find every possible executable
path.
2. Integration testing : It is a level of the software testing process where
individual units are combined and tested as a group. The purpose of this
level of testing is to expose faults in the interaction between integrated
units.
Path Testing:

Path testing is a structural testing method that uses the source code of a
program to find every possible executable path. The control flow graph of a
program is designed to find a set of linearly independent paths of execution.
In this method, Cyclomatic Complexity is used to determine the number of
linearly independent paths, and then test cases are generated for each path.
It gives complete branch coverage but achieves that without covering all
possible paths of the control flow graph. McCabe's Cyclomatic Complexity
is used in path testing.

• Cyclomatic Complexity:
After the generation of the control flow graph, calculate the cyclomatic
complexity of the program using the following formula:

V(G) = E - N + 2

where E is the number of edges and N is the number of nodes in the control
flow graph.

• Independent paths:
An independent path is a path through a decision-to-decision path graph
that cannot be reproduced from other paths by other methods.
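As a sketch of the formula, the following Python snippet computes V(G) for
a small hand-coded control flow graph; the example graph (a single if-else)
is invented for illustration:

# Control flow graph of a simple if-else program, written as an edge list.
# Nodes: 1=entry, 2=decision, 3=then-branch, 4=else-branch, 5=exit.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]
nodes = {n for edge in edges for n in edge}

E = len(edges)   # number of edges
N = len(nodes)   # number of nodes

# McCabe's formula: V(G) = E - N + 2.
v_of_g = E - N + 2
print(v_of_g)    # 2 -> two linearly independent paths, so two test cases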
Advantages of Path Testing

1. The path testing method reduces redundant tests.

2. Path testing focuses on the logic of the programs.

3. Path testing is used in test case design.

Disadvantages of Path Testing

1. A tester needs a good understanding of programming and of the code to
execute the tests.

2. The number of test cases increases as the code complexity increases.

3. It is difficult to create test paths if the application has highly
complex code.

4. Some test paths may skip some of the conditions in the code, so some
conditions or scenarios may not be covered if there is an error in the
specific paths.
Control structure testing:

Control structure testing is used to increase the coverage area by testing
various control structures present in the program. The different types of
testing performed under control structure testing are as follows.

1. Condition Testing – Condition testing exercises the logical conditions
contained in a program. A condition may be:

1. A relational expression, like E1 op E2, where 'E1' and 'E2' are arithmetic
expressions and 'op' is a relational operator.

2. A simple condition, i.e., any relational expression, possibly preceded by a
NOT (~) operator. For example, (~E1), where 'E1' is an arithmetic
expression and '~' denotes the NOT operator.

3. A compound condition, consisting of two or more simple conditions,
Boolean operators, and parentheses. For example, (E1 & E2) | (E2 & E3),
where E1, E2 and E3 denote arithmetic expressions and '&' and '|' denote
the AND and OR operators.

4. A Boolean expression, consisting of operands and Boolean operators like
AND, OR and NOT. For example, 'A | B' is a Boolean expression where
'A' and 'B' denote operands and | denotes the OR operator.
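As a small illustration of condition testing, the sketch below exercises a
compound condition of the form (E1 & E2) | E3, with test inputs chosen so
that each simple condition takes both truth values at least once; the
function and its values are invented for the example:

def grant_discount(age, is_member, coupon):
    # Compound condition under test: (age > 60 AND is_member) OR coupon.
    return (age > 60 and is_member) or coupon

# Each simple condition evaluates to both true and false across the set.
cases = [
    (65, True,  False, True),   # E1 true,  E2 true,  E3 false -> True
    (65, False, False, False),  # E1 true,  E2 false, E3 false -> False
    (30, True,  True,  True),   # E1 false, E2 true,  E3 true  -> True
    (30, False, False, False),  # all false                    -> False
]
for age, member, coupon, expected in cases:
    assert grant_discount(age, member, coupon) == expected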
2. Loop Testing – Loop testing focuses on the validity of loop constructs.
Four classes of loops can be tested, as described below.
1. Simple Loops – The following set of tests can be applied to simple loops,
where n is the maximum allowable number of passes through the loop:

1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n and n+1 times.

2. Nested Loops – Loops within loops are called nested loops. When testing
nested loops, the number of tests increases as the level of nesting
increases. The steps for testing nested loops are as follows:

1. Start with the inner loop. Set all other loops to minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops have been tested.

3. Concatenated Loops – If the loops are not dependent on each other,
concatenated loops can be tested using the approach used for simple loops.
If the loops are interdependent, the steps followed are those for nested
loops.

4. Unstructured Loops – This class of loops should be redesigned, whenever
possible, to reflect the use of the structured programming constructs.
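A minimal sketch of the simple-loop tests above, applied to a hypothetical
function whose loop runs at most n times; the function and data are invented
for illustration:

def sum_first(values, n):
    # Hypothetical loop under test: sums at most the first n values.
    total = 0
    for i, v in enumerate(values):
        if i >= n:
            break
        total += v
    return total

n = 5
data = list(range(1, 10))
# Exercise the loop 0, 1, 2, p (p < n), n-1, n and n+1 times.
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    result = sum_first(data[:passes], n)
    assert result == sum(data[:min(passes, n)])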
as it should. Instead of peeking into the code, testers check how the software
behaves from the outside, just like users would. This helps catch any issues or
bugs that might affect how the software works.
This simple guide gives you an overview of what Black Box Testing is all
about and why it matters in software development.
Black-box testing is a type of software testing in which the tester is not
concerned with the software’s internal knowledge or implementation details
but rather focuses on validating the functionality based on the provided
specifications or requirements.
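To make the DEF/USE notation concrete, consider this hypothetical
three-statement fragment with its sets annotated as comments; the statement
numbers and variables are illustrative only:

# S1: x = a + b        DEF(S1) = {x},  USE(S1) = {a, b}
# S2: if x > 0:        DEF(S2) = {},   USE(S2) = {x}
# S3:     y = x * 2    DEF(S3) = {y},  USE(S3) = {x}
#
# The definition of x at S1 is live at S2 and S3, since no other definition
# of x lies on the paths between them, giving the DU chains [x, S1, S2] and
# [x, S1, S3]. DU testing requires test paths covering each chain once.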
Black Box Testing:

Black Box Testing is an important part of making sure software works as it
should. Instead of peeking into the code, testers check how the software
behaves from the outside, just as users would. This helps catch any issues
or bugs that might affect how the software works.

Black-box testing is a type of software testing in which the tester is not
concerned with the software's internal knowledge or implementation details
but rather focuses on validating the functionality based on the provided
specifications or requirements.
Types of Black Box Testing

The following are the several categories of black box testing:

1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)

Functional Testing

• Functional testing is defined as a type of testing that verifies that each
function of the software application works in conformance with the
requirements and specifications.

• This testing is not concerned with the source code of the application.
Each functionality of the software application is tested by providing
appropriate test input, expecting the output, and comparing the actual
output with the expected output.

• This testing focuses on checking the user interface, APIs, database,
security, client or server application, and functionality of the
Application Under Test. Functional testing can be manual or automated. It
determines whether the system meets its functional requirements.

Regression Testing

• Regression Testing is the process of testing the modified parts of the
code, and the parts that might get affected by the modifications, to
ensure that no new errors have been introduced into the software after the
modifications have been made.

• Regression means the return of something, and in the software field it
refers to the return of a bug. It ensures that newly added code is
compatible with the existing code.

• In other words, it verifies that a new software update has no impact on
the existing functionality of the software. This is carried out after
system maintenance operations and upgrades.

Nonfunctional Testing

• Non-functional testing is a software testing technique that checks the
non-functional attributes of the system.

• Non-functional testing is defined as a type of software testing to check
the non-functional aspects of a software application.

• It is designed to test the readiness of a system as per nonfunctional
parameters which are never addressed by functional testing.

• Non-functional testing is as important as functional testing.

• Non-functional testing is also known as NFT. This testing is not
functional testing of software. It focuses on the software's performance,
usability, and scalability.

Advantages of Black Box Testing

• The tester does not need deep functional knowledge or programming skills
to implement Black Box Testing.

• It is efficient for implementing tests in a larger system.

• Tests are executed from the user's or client's point of view.

• Test cases are easily reproducible.

• It is used to find ambiguities and contradictions in the functional
specifications.

Disadvantages of Black Box Testing

• There is a possibility of repeating the same tests while implementing the
testing process.

• Without clear functional specifications, test cases are difficult to
implement.

• It is difficult to execute the test cases because of complex inputs at
different stages of testing.

• Sometimes, the reason for a test failure cannot be detected.

• Some programs in the application are not tested at all.

• It does not reveal errors in the control structure.

• Working with a large sample space of inputs can be exhaustive and consume
a lot of time.
Ways of Black Box Testing

1. Syntax-Driven Testing – This type of testing is applied to systems that
can be syntactically represented by some language. For example, the
language can be represented by a context-free grammar. In this, the test
cases are generated so that each grammar rule is used at least once.

2. Equivalence Partitioning – It is often seen that many types of inputs
work similarly, so instead of giving all of them separately we can group
them and test only one input from each group. The idea is to partition the
input domain of the system into several equivalence classes, such that each
member of a class works similarly, i.e., if a test case in one class results
in some error, the other members of the class would also result in the same
error. The technique involves two steps:

1. Identification of equivalence classes – Partition each input condition
into valid and invalid equivalence classes.

2. Generating test cases – (i) To each valid and invalid class of input,
assign a unique identification number. (ii) Write test cases covering all
valid and invalid classes, considering that no two invalid inputs mask each
other. For example, to calculate the square root of a number, the
equivalence classes will be (a) valid inputs: positive numbers (e.g.,
positive decimals), and (b) invalid inputs: negative numbers.

3. Boundary Value Analysis – Boundaries are very good places for errors to
occur. Hence, if test cases are designed for the boundary values of the
input domain, the efficiency of testing improves and the probability of
finding errors also increases. For example, if the valid range is 10 to
100, then test for 10 and 100 as well, apart from other valid and invalid
inputs.

4. Cause-Effect Graphing – This technique establishes a relationship between
logical inputs, called causes, and the corresponding actions, called
effects. The causes and effects are represented using Boolean graphs. The
following steps are followed:

1. Identify inputs (causes) and outputs (effects).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.
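A short sketch combining equivalence partitioning and boundary value
analysis for a hypothetical validator that accepts values from 10 to 100;
the function and the chosen representatives are illustrative:

def accepts(value):
    # Hypothetical unit under test: the valid range is 10 to 100 inclusive.
    return 10 <= value <= 100

# Equivalence partitioning: one representative per class
# (below range, in range, above range).
assert accepts(5) is False
assert accepts(50) is True
assert accepts(500) is False

# Boundary value analysis: test at and around both boundaries.
for v, expected in [(9, False), (10, True), (11, True),
                    (99, True), (100, True), (101, False)]:
    assert accepts(v) is expected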
1. Big Bang Integration Testing

• It is the simplest integration testing approach, where all the modules
are combined and the functionality is verified after the completion of
individual module testing.

• In simple words, all the modules of the system are simply put together
and tested.

• High-risk critical modules are not isolated and tested on priority, since
all modules are tested at once.

• This can result in long and complex debugging and troubleshooting
efforts, which can lead to system downtime and increased development costs.

• It may not provide enough visibility into the interactions and data
exchange between components, which can result in a lack of confidence in
the system's stability and reliability.

• This can lead to decreased efficiency and productivity, and may result in
a lack of confidence in the development team.

2. Bottom-Up Integration Testing

In bottom-up integration testing, the modules at the lowest levels are
tested first and then used to facilitate the testing of higher-level
modules. Its drawbacks include the following:

• The complexity that occurs when the system is made up of a large number
of small subsystems.

• As long as the higher-level modules have not been created, no working
model of the system can be represented.

3. Top-Down Integration Testing

The top-down integration testing technique is used to simulate the
behaviour of the lower-level modules that are not yet integrated. In this
integration testing, testing takes place from top to bottom. First,
high-level modules are tested, then low-level modules, and finally the
low-level modules are integrated with the high-level ones to ensure the
system is working as intended.
Advantages of Top-Down Integration Testing

• Separately debugged modules.

• Few or no drivers are needed.

• It is more stable and accurate at the aggregate level.

• Easier isolation of interface errors.

• In this, design defects can be found in the early stages.

Disadvantages of Top-Down Integration Testing

• Needs many stubs.

• Modules at lower levels are tested inadequately.

• It is difficult to observe the test output.

• Stub design is difficult.

4. Mixed (Sandwich) Integration Testing

Mixed integration testing combines the top-down and bottom-up approaches.

Advantages of Mixed Integration Testing

• This sandwich approach overcomes the shortcomings of the top-down and
bottom-up approaches.

• Parallel tests can be performed in the top and bottom layers.

Disadvantages of Mixed Integration Testing

• Mixed integration testing requires a very high cost, because one part
uses a top-down approach while another part uses a bottom-up approach.

• This integration testing cannot be used for smaller systems with huge
interdependence between the different modules.

Applications of Integration Testing

1. Identify the components: Identify the individual components of your
application that need to be integrated. This could include the frontend,
backend, database, and any third-party services.

2. Create a test plan: Develop a test plan that outlines the scenarios and
test cases that need to be executed to validate the integration points
between the different components. This could include testing data flow,
communication protocols, and error handling. A minimal example of such an
integration test is sketched below.
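As an illustration of the plan above, here is a minimal integration test
sketch in Python; the two modules (an in-memory repository and a greeting
function that depends on it) are hypothetical:

import unittest

class UserRepository:
    # Hypothetical lower-level module: stores users in memory.
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

def greeting_for(repo, user_id):
    # Hypothetical higher-level module that depends on the repository.
    name = repo.find(user_id)
    return f"Hello, {name}!" if name else "Hello, guest!"

class TestGreetingIntegration(unittest.TestCase):
    def test_modules_work_together(self):
        # Integration test: exercises the interaction between both units,
        # not each unit in isolation.
        repo = UserRepository()
        repo.save(1, "Ada")
        self.assertEqual(greeting_for(repo, 1), "Hello, Ada!")
        self.assertEqual(greeting_for(repo, 2), "Hello, guest!")

if __name__ == "__main__":
    unittest.main()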
• Data Integration Testing : Validate data integrity and consistency across
different modules. Test data transformation and conversion between formats.
Verify proper handling of edge cases and boundary conditions.

• Dependency Testing : Test interactions between dependent modules. Verify
that changes in one module do not adversely affect others. Ensure proper
synchronization and communication between modules.

• Error Handling Testing : Validate error detection and reporting
mechanisms. Test error recovery and fault tolerance capabilities. Ensure
that error messages are clear and informative.

• Performance Testing : Measure system performance under integrated
conditions. Test response times, throughput, and resource utilization.
Verify scalability and concurrency handling between modules.

• Security Testing : Test access controls and permissions between
integrated modules. Verify encryption and data protection mechanisms.
Ensure compliance with security standards and regulations.

• Compatibility Testing : Test compatibility with external systems, APIs,
and third-party components. Validate interoperability and data exchange
protocols. Ensure seamless integration with different platforms and
environments.

Validation and System Testing:

At the end of integration testing, the software is completely assembled as
a package, interfacing errors have been uncovered and corrected, and now
validation testing is performed. Software validation is achieved through a
series of black-box tests that demonstrate conformity with requirements.

After each validation test case has been conducted, one of two possible
conditions exists: (1) the function or performance characteristics conform
to the specification and are accepted, or (2) a deviation from the
specification is uncovered and a deficiency list is created.

It is virtually impossible for a software developer to foresee how the
customer will really use a program:

• Instructions for use may be misinterpreted.

• Strange combinations of data may be regularly used.

• Output that seemed clear to the tester may be unintelligible to a user in
the field.

When custom software is built for one customer, a series of acceptance
tests are conducted to enable the customer to validate all requirements. If
software is developed as a product to be used by many customers, it is
impractical to perform acceptance tests with each one. Alpha and beta tests
are used to uncover errors that only the end user seems able to find.

The Alpha Test is conducted at the developer's site by a customer. The
software is used in a natural setting with the developer "looking over the
shoulder" of the user and recording errors and usage problems. Alpha tests
are conducted in a controlled environment.

The Beta Test is conducted at one or more customer sites by the end user of
the software. Unlike alpha testing, the developer is generally not present.
Therefore, the beta test is a "live" application of the software in an
environment that cannot be controlled by the developer. The customer
records all problems (real or imagined) that are encountered during beta
testing and reports these to the developer at regular intervals. As a
result of problems reported during beta tests, software engineers make
modifications and then prepare for release of the software product to the
entire customer base.
System Testing:

System testing is actually a series of different tests whose primary
purpose is to fully exercise the computer-based system. Although each test
has a different purpose, all work to verify that system elements have been
properly integrated and perform allocated functions.
System testing is basically performed by a testing team that is independent
of the development team, which helps to test the quality of the system
impartially.

System testing is carried out on the whole system, in the context of either
the system requirement specifications or the functional requirement
specifications, or in the context of both. System testing tests the design
and behavior of the system against the expectations of the customer.

Types of System Testing:

• Performance Testing: Performance testing is a type of software testing
that is carried out to test the speed, scalability, stability and
reliability of the software product or application.

• Load Testing: Load testing is a type of software testing which is carried
out to determine the behavior of a system or software product under
extreme load.

• Stress Testing: Stress testing is a type of software testing performed to
check the robustness of the system under varying loads.

• Scalability Testing: Scalability testing is a type of software testing
which is carried out to check the performance of a software application or
system in terms of its capability to scale up or scale down the number of
user requests it can handle.
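A minimal sketch of a performance check using only the Python standard
library; the function under test and the 0.5-second budget are invented for
illustration:

import time

def task():
    # Hypothetical operation whose speed we want to measure.
    return sum(i * i for i in range(100_000))

start = time.perf_counter()
task()
elapsed = time.perf_counter() - start

# Fail the check if the operation exceeds its response-time budget.
assert elapsed < 0.5, f"too slow: {elapsed:.3f}s"
print(f"completed in {elapsed:.3f}s")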
Reverse Engineering:

Software Reverse Engineering is a process of recovering the design,
requirement specifications, and functions of a product from an analysis of
its code. It builds a program database and generates information from this.

What is Reverse Engineering?

Reverse engineering can extract design information from source code, but
the abstraction level, the completeness of the documentation, the degree to
which tools and a human analyst work together, and the directionality of
the process are highly variable.

Objective of Reverse Engineering:

1. Reducing Costs: Reverse engineering can help cut costs in product
development by finding replacements or cost-effective alternatives for
systems or components.

2. Analysis of Security: Reverse engineering is used in cybersecurity to
examine exploits, vulnerabilities, and malware. This helps security experts
understand threat mechanisms and develop practical defenses.

3. Integration and Customization: Through the process of reverse
engineering, developers can incorporate or modify hardware or software
components into pre-existing systems to improve their operation or tailor
them to meet particular needs.

4. Recovering Lost Source Code: Reverse engineering can be used to recover
the source code of a software application that has been lost or is
inaccessible or, at the very least, to produce a higher-level
representation of it.

5. Fixing Bugs and Maintenance: Reverse engineering can help find and
repair flaws or provide updates for systems for which the original source
code is either unavailable or inadequately documented.

Reverse Engineering Goals:

1. Cope with Complexity: Reverse engineering is a common tool used to
understand and control system complexity. It gives engineers the ability to
analyze complex systems and reveal details about their architecture,
relationships and design patterns.

2. Recover Lost Information: Reverse engineering seeks to retrieve as much
information as possible in situations where source code or documentation is
lost or unavailable. Rebuilding source code, analyzing data structures and
retrieving design details are a few examples of this.

3. Detect Side Effects: Understanding a system or component's behavior
requires analyzing its side effects. Unintended implications, dependencies,
and interactions that might not be obvious from the system's documentation
or original source code can be found with the use of reverse engineering.

4. Synthesize Higher Abstraction: Abstracting low-level features in order
to build higher-level representations is a common practice in reverse
engineering. This abstraction makes communication and analysis easier by
facilitating a greater understanding of the system's functionality.

5. Facilitate Reuse: Reverse engineering can be used to find reusable parts
or modules in systems that already exist. By understanding the
functionality and architecture of a system, developers can extract and
repurpose components for use in other projects, improving efficiency and
decreasing development time.

What is Re-engineering?

Re-engineering, also known as software re-engineering, is the process of
analyzing, designing, and modifying existing software systems to improve
their quality, performance, and maintainability.

1. This can include updating the software to work with new hardware or
software platforms, adding new features, or improving the software's
overall design and architecture.

2. Software re-engineering, also known as software restructuring or
software renovation, refers to the process of improving or upgrading
existing software systems to improve their quality, maintainability, or
functionality.

3. It involves reusing the existing software artifacts, such as code,
design, and documentation, and transforming them to meet new or updated
requirements.
Objective of Re-engineering

The objective of re-engineering is to improve the quality, maintainability
and performance of an existing system while reusing what already works.

Process of Re-engineering

1. Planning: The first step is to plan the re-engineering activity,
defining its scope, objectives and the resources required.

2. Analysis: The next step is to analyze the existing system, including the
code, documentation, and other artifacts. This involves identifying the
system's strengths and weaknesses, as well as any issues that need to be
addressed.

3. Design: Based on the analysis, the next step is to design the new or
updated software system. This involves identifying the changes that need to
be made and developing a plan to implement them.

4. Implementation: The next step is to implement the changes by modifying
the existing code, adding new features, and updating the documentation and
other artifacts.

5. Testing: Once the changes have been implemented, the software system
needs to be tested to ensure that it meets the new requirements and
specifications.

6. Deployment: The final step is to deploy the re-engineered software
system and make it available to end users.

Steps involved in Re-engineering

1. Inventory Analysis
2. Document Reconstruction
3. Reverse Engineering
4. Code Reconstruction
5. Data Reconstruction
6. Forward Engineering

Re-engineering Cost Factors

1. The quality of the software to be re-engineered.

2. The tool support available for re-engineering.

3. The extent of the required data conversion.

4. The availability of expert staff for re-engineering.

Advantages of Re-engineering

1. Reduced Risk: As the software already exists, the risk is less as
compared to new software development. Development problems, staffing
problems and specification problems are among the many problems that may
arise in new software development.

2. Reduced Cost: The cost of re-engineering is less than the cost of
developing new software.
3. Revelation of Business Rules: As a system is re-engineered, business
rules that are embedded in the system are rediscovered.

4. Better Use of Existing Staff: Existing staff expertise can be
maintained, and extended to accommodate new skills, during re-engineering.

5. Improved Efficiency: By analyzing and redesigning processes,
re-engineering can lead to significant improvements in productivity, speed,
and cost-effectiveness.

6. Increased Flexibility: Re-engineering can make systems more adaptable to
changing business needs and market conditions.

7. Better Customer Service: By redesigning processes to focus on customer
needs, re-engineering can lead to improved customer satisfaction and
loyalty.

8. Increased Competitiveness: Re-engineering can help organizations become
more competitive by improving efficiency, flexibility, and customer
service.

9. Improved Quality: Re-engineering can lead to better quality products and
services by identifying and eliminating defects and inefficiencies in
processes.

10. Increased Innovation: Re-engineering can lead to new and innovative
ways of doing things, helping organizations to stay ahead of their
competitors.

11. Improved Compliance: Re-engineering can help organizations comply with
industry standards and regulations by identifying and addressing areas of
non-compliance.
Disadvantages of Re-engineering

Major architectural changes or radical reorganization of the system's data
management have to be done manually. A re-engineered system is not likely
to be as maintainable as a new system developed using modern software
engineering methods.

1. High costs: Re-engineering can be a costly process, requiring
significant investments in time, resources, and technology.

2. Disruption to business operations: Re-engineering can disrupt normal
business operations and cause inconvenience to customers, employees and
other stakeholders.

3. Resistance to change: Re-engineering can encounter resistance from
employees who may be resistant to change and uncomfortable with new
processes and technologies.

4. Risk of failure: Re-engineering projects can fail if they are not
planned and executed properly, resulting in wasted resources and lost
opportunities.

5. Lack of employee involvement: Re-engineering projects that are not
properly communicated, and that do not involve employees, may lead to a
lack of employee engagement and ownership, resulting in failure of the
project.

6. Difficulty in measuring success: The success of re-engineering can be
difficult to measure, making it difficult to justify the cost and effort
involved.

7. Difficulty in maintaining continuity: Re-engineering can lead to
significant changes in processes and systems, making it difficult to
maintain continuity and consistency in the organization.

CASE Tools:

CASE tools are a set of software application programs which are used to
automate SDLC activities. CASE tools are used by software project managers,
analysts and engineers to develop software systems.

There are a number of CASE tools available to simplify various stages of
the Software Development Life Cycle, such as analysis tools, design tools,
project management tools, database management tools and documentation
tools, to name a few.

The use of CASE tools accelerates the development of a project to produce
the desired result and helps to uncover flaws before moving ahead with the
next stage in software development.

Components of CASE Tools

CASE tools can be broadly divided into the following parts based on their
use at a particular SDLC stage:

• Central Repository - CASE tools require a central repository, which can
serve as a source of common, integrated and consistent information. The
central repository is a central place of storage where product
specifications, requirement documents, related reports and diagrams, and
other useful information regarding management are stored. The central
repository also serves as a data dictionary.

Analysis Tools

These tools help to gather requirements and automatically check for any
inconsistency or inaccuracy in the diagrams, data redundancies or erroneous
omissions. For example: Accept 360, Accompa and CaseComplete for
requirement analysis, and Visible Analyst for total analysis.

Design Tools

These tools help software designers to design the block structure of the
software, which may further be broken down into smaller modules using
refinement techniques. These tools provide detailing of each module and the
interconnections among modules. For example: Animated Software Design.
Programming Tools

These tools consist of programming environments such as IDEs, built-in
module libraries and simulation tools.

Stages of SDLC:

Stage1: Communication and Requirement Analysis

In Communication, the user requests software by meeting the service
provider. Requirement Analysis is the most important and necessary stage in
SDLC.

The business analyst and project organizer set up a meeting with the client
to gather all the data, such as what the customer wants to build, who will
be the end user, and what the objective of the product is. Before creating
a product, a core understanding or knowledge of the product is very
necessary.

Once the requirement is understood, the SRS (Software Requirement
Specification) document is created. The developers should thoroughly follow
this document, and it should also be reviewed by the customer for future
reference.

Stage2: Feasibility study and system analysis

A rough plan and road map for the software are prepared using algorithms
and models.

Stage3: Designing the Software

The next phase brings together all the knowledge of requirements, analysis,
and design of the software project. This phase is the product of the last
two, using inputs from the customer, requirement gathering, and the
blueprint of the software.

Stage4: Developing the Software

In this stage, the actual development starts and the software is built
according to the design.

Stage5: Testing

During this stage, unit testing, integration testing, system testing and
acceptance testing are done.

Stage6: Deployment

Once the software is certified, and no bugs or errors are stated, then it
is deployed.

Then, based on the assessment, the software may be released as it is or
with suggested enhancements in the object segment.

Stage7: Maintenance

Once the client starts using the developed system, the real issues come up,
and the requirements need to be addressed from time to time.

This procedure, in which care is taken of the developed product, is known
as maintenance.

Different Software models
Waterfall Model:

The Waterfall Model was the first Process Model to be introduced. It is
also referred to as a linear-sequential life cycle model or classic model.
It is very simple to understand and use. In a waterfall model, each phase
must be completed before the next phase can begin, and there is no
overlapping in the phases.

The Waterfall model is the earliest SDLC approach that was used for
software development.

The Waterfall model illustrates the software development process in a
linear sequential flow. This means that any phase in the development
process begins only if the previous phase is complete. In this waterfall
model, the phases do not overlap.

The Waterfall approach was the first SDLC model to be used widely in
software engineering to ensure the success of the project. In "The
Waterfall" approach, the whole process of software development is divided
into separate phases. In this Waterfall model, typically, the outcome of
one phase acts as the input for the next phase sequentially.

The different phases of the Waterfall Model are as follows:

Requirement Gathering and Analysis − All possible requirements of the
system to be developed are captured in this phase and documented in a
requirement specification document.

System Design − The requirement specifications from the first phase are
studied in this phase and the system design is prepared. This design helps
in specifying hardware and system requirements and in defining the overall
system architecture.

Implementation − With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is
referred to as Unit Testing.

Integration and Testing − All the units developed in the implementation
phase are integrated into a system after testing of each unit. Post
integration, the entire system is tested for any faults and failures.

Deployment of system − Once the functional and non-functional testing is
done, the product is deployed in the customer environment or released into
the market.

Maintenance − There are some issues which come up in the client
environment. To fix those issues, patches are released. Also, to enhance
the product, some better versions are released. Maintenance is done to
deliver these changes in the customer environment.

All these phases are cascaded to each other, in which progress is seen as
flowing steadily downwards (like a waterfall) through the phases. The next
phase is started only after the defined set of goals is achieved for the
previous phase and it is signed off, hence the name "Waterfall Model". In
this model, phases do not overlap.
V-Model:

The V-Model is an SDLC model in which the processes are executed
sequentially in a V-shape. It is also known as the Verification and
Validation model, as each development stage is associated with a
corresponding testing phase.

The following are the various phases of the Verification Phase of the
V-Model:

1. Business requirement analysis: This is the first step, where the product
requirements are understood from the customer's side. This phase contains
detailed communication to understand the customer's expectations and exact
requirements.

2. System Design: In this stage, system engineers analyze and interpret the
business of the proposed system by studying the user requirements document.

3. Architecture Design: The baseline for selecting the architecture is that
it should realize everything the design typically consists of: the list of
modules, brief functionality of each module, their interface relationships,
dependencies, database tables, architecture diagrams, technology details,
etc. The integration testing model is carried out in this particular phase.

4. Module Design: In the module design phase, the system is broken down
into small modules and the detailed design of each module is specified;
this is also known as Low-Level Design.

The following are the various phases of the Validation Phase of the
V-Model:

1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed
during the module design phase. These UTPs are executed to eliminate errors
at the code level or unit level. A unit is the smallest entity which can
exist independently, e.g., a program module. Unit testing verifies that the
smallest entity can function correctly when isolated from the rest of the
code/units.

2. Integration Testing: Integration Test Plans are developed during the
architectural design phase. These tests verify that groups created and
tested independently can coexist and communicate among themselves.

3. System Testing: System Test Plans are developed during the system design
phase. Unlike Unit and Integration Test Plans, System Test Plans are
composed by the client's business team. System testing ensures that the
expectations from the developed application are met.

4. Acceptance Testing: Acceptance testing is related to the business
requirement analysis part. It includes testing the software product in the
user atmosphere. Acceptance tests reveal the compatibility problems with
the other systems available within the user atmosphere. They also discover
non-functional problems, such as load and performance defects, in the real
user atmosphere.

When to use V-Model?
1. When the requirement is well defined and unambiguous.

2. The V-shaped model should be used for small to medium-sized projects
where requirements are clearly defined and fixed.

3. The V-shaped model should be chosen when ample technical resources are
available with essential technical expertise.

Advantages:

• Easy to understand.
• Testing methods like planning and test designing happen well before
coding.
• This saves a lot of time. Hence there is a higher chance of success than
with the waterfall model.
• Avoids the downward flow of the defects.
• Works well for small plans where requirements are easily understood.

Disadvantages:

• Very rigid and least flexible.
• Not good for a complex project.
• Software is developed during the implementation stage, so no early
prototypes of the software are produced.
• If any changes happen midway, then the test documents, along with the
requirement documents, have to be updated.
SDLC - RAD Model

The RAD (Rapid Application Development) model is based on prototyping and
iterative development with no specific planning involved. The process of
writing the software itself involves the planning required for developing
the product.

Rapid Application Development focuses on gathering customer requirements
through workshops or focus groups, early testing of the prototypes by the
customer using an iterative concept, reuse of the existing prototypes
(components), continuous integration and rapid delivery.

RAD Model Design:

The RAD model distributes the analysis, design, build and test phases into
a series of short, iterative development cycles.

Following are the various phases of the RAD Model −

Business Modelling:

The business model for the product under development is designed in terms
of the flow of information and the distribution of information between
various business channels. A complete business analysis is performed to
find the vital information for the business, how it can be obtained, how
and when the information is processed, and what the factors driving a
successful flow of information are.

Data Modelling:

The information gathered in the Business Modelling phase is reviewed and
analyzed to form sets of data objects vital for the business. The
attributes of all the data sets are identified and defined. The relations
between these data objects are established and defined in detail in
relevance to the business model.

Process Modelling:

The data object sets defined in the Data Modelling phase are converted to
establish the business information flow needed to achieve specific business
objectives as per the business model. The process model for any changes or
enhancements to the data object sets is defined in this phase. Process
descriptions for adding, deleting, retrieving or modifying a data object
are given.

Application Generation:

The actual system is built and coding is done by using automation tools to
convert the process and data models into actual prototypes.

Testing and Turnover:

The overall testing time is reduced in the RAD model as the prototypes are
independently tested during every iteration. However, the data flow and the
interfaces between all the components need to be thoroughly tested with
complete test coverage. Since most of the programming components have
already been tested, this reduces the risk of any major issues.
Incremental model:

It is a process of software development where the requirements are divided
into multiple modules, each passing through the phases of the SDLC.

Each module goes through the requirement, design, implementation and
testing phases; this process continues until the complete system is
achieved.

1. Requirement gathering & analysis: In this phase, requirements are
gathered from customers and checked by an analyst to see whether they can
be fulfilled or not. The analyst also checks whether the need can be
achieved within budget. After all of this, the software team moves to the
next phase.

2. Design: In the design phase, the team designs the software using
different diagrams, like the Data Flow diagram, activity diagram, class
diagram, state transition diagram, etc.

3. Implementation: In the implementation phase, the requirements are
written in a coding language and transformed into computer programs, which
are called software.

4. Testing: After completing the coding phase, software testing starts,
using different test methods. There are many test methods, but the most
common are the white box, black box, and grey box test methods.

5. Deployment: After completing all the phases, the software is deployed to
its work environment.

6. Review: In this phase, after the product deployment, the review phase is
performed to check the behavior and validity of the developed product. If
any errors are found, then the process starts again from requirement
gathering.

7. Maintenance: In the maintenance phase, after deployment of the software
in the working environment, there may be some bugs or errors, or new
updates may be required. Maintenance involves debugging and adding new
options.

Disadvantages:

1. It is not suitable for smaller projects.
2. Design can be changed again and again because of imperfect requirements.
3. Requirement changes can cause the project to go over budget.
4. The project completion date is not confirmed because of changing
requirements.