SOFTWARE ENGINEERING
(R20A0511)
LECTURE NOTES
2023-24
Prepared by
Dr. P. HARIKRISHNA,
ASSOCIATE PROFESSOR
Department of Computational Intelligence
CSE (Artificial Intelligence and Machine Learning)
Vision
To provide sophisticated technical infrastructure and to inspire students to reach their full potential.
Mission
To provide students with a solid academic and research environment for a comprehensive learning experience.
QUALITY POLICY
To provide research development, consulting, testing, and customized training to satisfy specific industrial demands, thereby encouraging self-employment and entrepreneurship among students.
III Year B.Tech. CSE (AI&ML) - I Sem          L/T/P/D/C: 3/-/-/-/3
(R20A0511) SOFTWARE ENGINEERING
COURSE OBJECTIVES:
1. To provide the idea of decomposing a given problem into the Analysis, Design, Implementation, Testing and Maintenance phases.
2. To understand software process models such as the waterfall and evolutionary models, software requirements and the SRS document.
3. To understand different software design and architectural styles, and software testing approaches such as unit testing and integration testing.
4. To understand quality control and how to ensure good-quality software through quality assurance.
5. To gain knowledge of how the Analysis, Design, Implementation, Testing and Maintenance processes are conducted in object-oriented software projects.
UNIT -I:
Introduction to Software Engineering: The evolving role of software, Changing Nature of Software,
Software myths.
A Generic view of process: Software engineering- A layered technology, a process framework, The
Capability Maturity Model Integration (CMMI), Process patterns, process assessment, personal and
team process models.
Process models: The waterfall model, Incremental process models, Evolutionary process models, The
Unified process.
UNIT-II:
Software Requirements: Functional and non-functional requirements, User requirements, System
requirements, Interface specification, the software requirements document.
Requirements engineering process: Feasibility studies, Requirements elicitation and analysis,
Requirements validation, Requirements management.
System models: Context Models, Behavioral models, Data models, Object models, structured
methods.
UNIT-III:
Design Engineering: Design process and Design quality, Design concepts, the design model.
Creating an architectural design: Software architecture, Data design, Architectural styles and
patterns, Architectural Design.
Performing User interface design: Golden rules, User interface analysis and design, interface
analysis, interface design steps, Design evaluation.
UNIT-IV:
Testing Strategies: A strategic approach to software testing, test strategies for conventional software, Black-Box and White-Box testing, Validation testing, System testing, the art of Debugging.
Risk Management: Reactive vs. Proactive Risk strategies, software risks, Risk identification, Risk projection, Risk refinement, RMMM, RMMM Plan.
UNIT-V:
Quality Management: Software Quality, Quality concepts, Software quality assurance, Software Reviews, Formal technical reviews, Statistical software quality assurance, Software reliability, The ISO 9000 quality standards.
REFERENCE BOOKS:
1. Software Engineering: A Precise Approach, Pankaj Jalote, Wiley India, 2010.
2. Software Engineering: A Primer, Waman S. Jawadekar, Tata McGraw-Hill, 2008.
3. Fundamentals of Software Engineering, Rajib Mall, PHI, 2005.
4. Software Engineering: Principles and Practices, Deepak Jain, Oxford University Press.
5. Software Engineering: Abstraction and Modelling, Dines Bjorner, Springer International Edition, 2006.
6. Software Engineering: Specification of Systems and Languages, Dines Bjorner, Springer International Edition, 2006.
7. Software Engineering Foundations, Yingxu Wang, Auerbach Publications, 2008.
8. Software Engineering: Principles and Practice, Hans van Vliet, 3rd Edition, John Wiley & Sons Ltd.
9. Software Engineering: Domains, Requirements, and Software Design, D. Bjorner, Springer International Edition.
10. Introduction to Software Engineering, R. J. Leach, CRC Press.
COURSE OUTCOMES:
1. Identify the minimum requirements for the development of an application.
2. Develop and maintain efficient, reliable and cost-effective software solutions.
3. Think critically and evaluate assumptions and arguments.
4. Carry out testing and maintenance processes in an object-oriented software project.
5. Ensure good-quality software through quality assurance.
MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
DEPARTMENT OF COMPUTATIONAL INTELLIGENCE
INDEX
Unit I: Process models
Unit II: Software Requirements
Unit II: System models
Unit IV: Testing Strategies
Unit IV: Risk management
Unit V: Quality Management
Department of CI III Year/I Sem
UNIT - I
INTRODUCTION:
Software Engineering is a framework for building software: an engineering approach to software development. Software can be developed without software-engineering principles and methodologies, but they are indispensable if we want to achieve good-quality software in a cost-effective manner.
Software is defined as:
Instructions + Data Structures + Documents
Engineering is the branch of science and technology concerned with the design, building, and
use of engines, machines, and structures. It is the application of science, tools and methods to find
cost effective solution to simple and complex problems.
Software Engineering is defined as a systematic, disciplined and quantifiable approach for the
development, operation and maintenance of software.
Characteristics of software
• Software is developed or engineered; it is not manufactured in the classical sense.
• Software does not wear out, but it deteriorates due to change.
• Software is custom built rather than being assembled from existing components.
System software: a collection of programs written to service other programs.
Embedded software: resides in read-only memory and is used to control products and systems for the consumer and industrial markets.
Artificial intelligence (AI) software: makes use of non-numeric algorithms to solve complex problems that are not amenable to computation or straightforward analysis.
Engineering and scientific software: characterized by "number crunching" algorithms.
LEGACY SOFTWARE
Legacy software consists of older programs that were developed decades ago. The quality of legacy software is often poor: inextensible design, convoluted code, and poor or nonexistent documentation, test cases and results.
As time passes, legacy systems evolve for the following reasons:
• The software must be adapted to meet the needs of a new computing environment or technology.
• The software must be enhanced to implement new business requirements.
• The software must be extended to make it interoperable with more modern systems or databases.
• The software must be re-architected to make it viable within a network environment.
SOFTWARE MYTHS
Myths are widely held but false beliefs and views which propagate misinformation and confusion.
Three types of myths are associated with software:
- Management myth
- Customer myth
- Practitioner’s myth
MANAGEMENT MYTHS
• Myth (1): The available standards and procedures for software are enough.
• Myth (2): Each organization feels it has state-of-the-art software development tools because it has the latest computers.
• Myth (3): Adding more programmers when work is behind schedule helps catch up.
• Myth (4): If we outsource the software project to a third party, we can relax and let that party build it.
CUSTOMER MYTHS
• Myth (1): A general statement of objectives is enough to begin writing programs; the details can be filled in later.
• Myth (2): Software is easy to change because software is flexible.
PRACTITIONER'S MYTHS
• Myth (1): Once the program is written, the job has been done.
• Myth (2): Until the program is running, there is no way of assessing its quality.
A PROCESS FRAMEWORK
• Establishes the foundation for a complete software process
• Identifies a number of framework activities applicable to all software projects
• Also includes a set of umbrella activities that are applicable across the entire software process
• Used as a basis for the description of process models
Generic process activities:
• Communication
• Planning
• Modeling
• Construction
• Deployment
The generic view of engineering is complemented by a number of umbrella activities:
Software project tracking and control
Formal technical reviews
Software quality assurance
Software configuration management
Document preparation and production
Reusability management
Measurement
Risk management
Continuous model:
-- Lets an organization select the specific improvements that best meet its business objectives and minimize risk
-- Levels are called capability levels
-- Describes a process in two dimensions
-- Each process area is assessed against specific goals and practices and is rated according to the following capability levels
CMMI
• Six levels of CMMI
– Level 0:Incomplete
– Level 1:Performed
– Level 2:Managed
– Level 3:Defined
– Level 4:Quantitatively managed
– Level 5:Optimized
• Incomplete: The process is ad hoc; the objectives and goals of the process area are not known.
• Performed: The goals, objectives, work tasks, work products and other activities of the software process are carried out.
PROCESS PATTERNS
A software process can be defined as a collection of patterns. A process pattern provides a template. It comprises:
• Process Template
-Pattern Name
-Intent
-Types: task pattern, stage pattern, phase pattern
• Initial Context
• Problem
• Solution
• Resulting Context
• Related Patterns
PROCESS ASSESSMENT
Process assessment does not specify the quality of the software, whether the software will be delivered on time, or whether it will stand up to the user requirements. It attempts to keep a check on the current state of the software process with the intention of improving it.
[Figure: the software process is examined by software process assessment, which leads to capability determination and motivates software process improvement.]
APPROACHES TO SOFTWARE ASSESSMENT
• Standard CMMI assessment (SCAMPI)
• CMM based appraisal for internal process improvement
• SPICE(ISO/IEC 15504)
• ISO 9001:2000 for software
PERSONAL AND TEAM SOFTWARE PROCESS
The Personal Software Process (PSP) framework activities are:
PLANNING
HIGH LEVEL DESIGN
HIGH LEVEL DESIGN REVIEW
DEVELOPMENT
POSTMORTEM
THE WATERFALL MODEL
[Figure: the waterfall model as a linear sequence of communication, planning, modeling, construction and deployment.]
This model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding and testing.
PROBLEMS IN THE WATERFALL MODEL
• Real projects rarely follow the sequential flow since they are always iterative
• The model requires requirements to be explicitly spelled out in the beginning, which is often
difficult
• A working model is not available until late in the project time plan
THE INCREMENTAL PROCESS MODEL
[Figure: increments 1 through n, each passing through communication, planning, modeling, construction and deployment, delivering successive software increments.]
EVOLUTIONARY PROCESS MODEL
• Software evolves over a period of time
• Business and product requirements often change as development proceeds making a straight-line
path to an end product unrealistic
• Evolutionary models are iterative and as such are applicable to modern day applications
Types of evolutionary models
– Prototyping
– Spiral model
– Concurrent development model
PROTOTYPING
• Mock up or model (throw away version) of a software product
• Used when customer defines a set of objectives but does not identify input, output, or
processing requirements
• Developer is not sure of:
-- the efficiency of an algorithm
-- the adaptability of an operating system
-- the form that human/machine interaction should take
STEPS IN PROTOTYPING
• Begins with requirement gathering
• Identify whatever requirements are known
• Outline areas where further definition is mandatory
• A quick design occurs
• The quick design leads to the construction of a prototype
• The prototype is evaluated by the customer
• Requirements are refined
• The prototype is tuned to satisfy the needs of the customer
LIMITATIONS OF PROTOTYPING
• In a rush to get it working, overall software quality or long term maintainability are
generally overlooked
• Use of an inappropriate operating system or programming language
• Use of an inefficient algorithm
THE UNIFIED PROCESS: WORK PRODUCTS
2. Elaboration Phase
*Use-Case model
*Analysis model
*Software Architecture description
*Preliminary design model
*Preliminary model
3. Construction Phase
*Design model
*System components
*Test plan and procedure
*Test cases
*Manual
4. Transition Phase
*Delivered software increment
*Beta test results
*General user feedback
UNIT-II
SOFTWARE REQUIREMENTS
• Encompasses both the user's view of the requirements (the external view) and the developer's view (inside characteristics)
• User requirements
--Statements in a natural language plus diagrams, describing the services the system is expected to provide and the constraints under which it must operate
• System requirements
--Describe the system's functions, services and operational conditions
• System Functional Requirements
--Statement of services the system should provide
--Describe the behavior in particular situations
--Defines the system reaction to particular inputs
• Nonfunctional Requirements
- Constraints on the services or functions offered by the system
--Include timing constraints, constraints on the development process and standards
--Apply to system as a whole
• Domain Requirements
--Requirements that derive from the specific application domain of the system
--Reflect characteristics and constraints of that domain
FUNCTIONAL REQUIREMENTS
• Should be both complete and consistent
• Completeness
-- All services required by the user should be defined
• Consistency
-- Requirements should not have contradictory definitions
• It is difficult to achieve completeness and consistency for large systems
NON-FUNCTIONAL REQUIREMENTS
Types of Non-functional Requirements
1.Product Requirements
-Specify product behavior
-Include the following
• Usability
• Efficiency
• Reliability
• Portability
2. Organizational Requirements
--Derived from policies and procedures
--Include the following:
• Delivery
• Implementation
• Standard
3. External Requirements
-- Derived from factors external to the system and its development process
--Includes the following
• Interoperability
• Ethical
• Legislative
A structured requirements specification records, for each function, entries such as:
• Outputs
• Destination
• Action
• Pre-condition
• Post-condition
• Side effects
INTERFACE SPECIFICATION
• The working of the new system must match that of existing systems
• Interfaces provide this capability and must be precisely specified
Three types of interfaces:
1. Procedural interfaces -- used by new programs to call existing programs
2. Data structures -- provide data passing from one sub-system to another
3. Representations of data -- the ordering of bits is made to match the existing system; most common in real-time and embedded systems
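As an illustration of a procedural interface, the sketch below shows new code reaching an existing subsystem only through one precisely specified function that also fixes the data representation. All names (`LegacyLedger`, `post_transaction`) are hypothetical, not drawn from any real system.

```python
# Hypothetical sketch of a procedural interface to a legacy subsystem.

class LegacyLedger:
    """Stands in for an existing program the new system must interoperate with."""
    def __init__(self):
        self.entries = []

    def add(self, amount_cents):
        self.entries.append(amount_cents)
        return len(self.entries)          # entry count after posting


def post_transaction(ledger, amount_cents):
    """Procedural interface: the only sanctioned way for new code to reach
    the legacy subsystem. It also fixes the data representation (integer
    cents) so it matches the existing system, as the notes describe."""
    if not isinstance(amount_cents, int):
        raise TypeError("legacy system expects amounts as integer cents")
    return ledger.add(amount_cents)


ledger = LegacyLedger()
print(post_transaction(ledger, 1250))     # prints 1: first entry posted
```

Keeping the legacy call behind one function means a later change in the legacy system touches only this interface, not every caller.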
Purpose of the SRS
• Communication between the customer, analysts, system developers and maintainers
• A firm foundation for the design phase
• Support for system testing activities
• Support for project management and control
• Controlling the evolution of the system
FEASIBILITY STUDIES
Starting point of the requirements engineering process
• Input: Set of preliminary business requirements, an outline description of the system and
how the system is intended to support business processes
• Output: A feasibility report that recommends whether or not it is worth carrying on with the requirements engineering and system development process
The feasibility report answers questions such as: Does the system contribute to the overall objectives of the organization? Can the system be implemented using current technology, within cost and schedule constraints? Can the system be integrated with other systems already in use?
Process activities
1. Requirement Discovery -- Interaction with stakeholder to collect their requirements including
domain and documentation
2. Requirements classification and organization -- Coherent clustering of requirements from
unstructured collection of requirements
3. Requirements prioritization and negotiation -- Assigning priority to requirements
--Resolves conflicting requirements through negotiation
4. Requirements documentation -- Requirements are documented and fed into the next round of the spiral
[Figure: the spiral representation of requirements engineering.]
2. Interviewing--Puts questions to stakeholders about the system that they use and the
system to be developed. Requirements are derived from the answers.
Two types of interview
– Closed interviews where the stakeholders answer a pre-defined set of questions.
– Open interviews discuss a range of issues with the stakeholders for better understanding their
needs.
Effective interviewers are:
a) Open-minded: they approach the interview without pre-conceived ideas
b) Prompting: they get the interviewee to start discussions with a question or a proposal
3. Scenarios --Easier to relate to real life examples than to abstract description. Starts with
an outline of the interaction and during elicitation, details are added to create a complete
description of that interaction
A scenario includes:
1. A description of the system state at the start of the scenario
2. A description of the normal flow of events
3. A description of what can go wrong and how it is handled
4. Information about other activities that run in parallel with the scenario
5. A description of the system state when the scenario finishes
LIBSYS scenario
• Initial assumption: The user has logged on to the LIBSYS system and has located the
journal containing the copy of the article.
• Normal: The user selects the article to be copied. He or she is then prompted by the
system to either provide subscriber information for the journal or to indicate how they will pay for
the article. Alternative payment methods are by credit card or by quoting an organizational account
number.
• The user is then asked to fill in a copyright form that maintains details of the transaction
and they then submit this to the LIBSYS system.
• The copyright form is checked and, if OK, the PDF version of the article is downloaded
to the LIBSYS working area on the user’s computer and the user is informed that it is available.
The user is asked to select a printer and a copy of the article is printed
• What can go wrong: The user may fail to fill in the copyright form correctly. In this case,
the form should be re-presented to the user for correction. If the resubmitted form is still incorrect
then the user’s request for the article is rejected.
• The payment may be rejected by the system. The user’s request for the article is rejected.
• The article download may fail. Retry until successful or until the user terminates the session.
• Other activities: Simultaneous downloads of other articles.
• System state on completion: User is logged on. The downloaded article has been
deleted from LIBSYS workspace if it has been flagged as print-only.
4. Use cases -- A scenario-based technique for requirements elicitation and a fundamental feature of UML, the notation for describing object-oriented system models. A use case identifies a type of interaction and the actors involved. Sequence diagrams are used to add information to a use case.
[Figure: LIBSYS use cases -- article printing, article search, user administration, supplier and catalogue services -- with Library User and Library Staff as actors.]
REQUIREMENTS VALIDATION
Concerned with showing that the requirements define the system that the customer wants. Important
because errors in requirements can lead to extensive rework cost
Validation checks
1. Validity checks -- Does the system provide the functions the user actually intends?
2. Consistency checks -- Requirements should not conflict
3. Completeness checks -- The requirements should define all functions and constraints intended by the system user
4. Realism checks -- Ensure that the requirements can actually be implemented
5. Verifiability -- Requirements should be testable, to avoid disputes between customer and developer
VALIDATION TECHNIQUES
1. REQUIREMENTS REVIEWS
Reviewers check the following:
(a) Verifiability: Testable
(b) Comprehensibility
(c) Traceability
(d) Adaptability
2. PROTOTYPING
3. TEST-CASE GENERATION
REQUIREMENTS MANAGEMENT
Requirements are likely to change for large software systems, and so a requirements management process is required to handle changes.
Reasons for requirements changes
(a) Diverse Users community where users have different requirements and priorities
(b) System customers and end users are different
(c) Change in the business and technical environment after installation
Two classes of requirements:
(a) Enduring requirements: Relatively stable requirements
(b) Volatile requirements: Likely to change during system development process or during operation
Traceability
Maintains three types of traceability information.
1. Source traceability--Links the requirements to the stakeholders
2. Requirements traceability--Links dependent requirements within the requirements document
3. Design traceability-- Links from the requirements to the design module
Change management
1. Problem analysis and change specification -- The process starts with a specific change proposal, which is analysed to verify that it is valid
2. Change analysis and costing--Impact analysis in terms of cost, time and risks
3. Change implementation--Carrying out the changes in requirements document, system
design and its implementation
SYSTEM MODELS
Used in the analysis process to develop an understanding of the existing system or a new system. A system model excludes details: it is an abstraction of the system.
Types of system models:
1. Context models
2. Behavioural models
3. Data models
4. Object models
5. Structured models
CONTEXT MODELS
A type of architectural model, showing the sub-systems that make up the entire system.
First step: identify the sub-systems.
Represent the high-level architectural model as a simple block diagram:
• Depict each sub-system as a named rectangle
• Lines between rectangles indicate associations between sub-systems
Disadvantage:
--Concerned only with the system environment; does not take into account other systems, which may supply data to or take data from the model.
[Figure: the context of an ATM system -- auto-teller system, security system, maintenance system, account database, usage database, branch accounting system and branch counter system.]
BEHAVIOURAL MODELS
Describe the overall behaviour of a system. Two types of behavioural models:
1. Data-flow models
2. State machine models
Data-flow models -- Concentrate on the flow of data and the functional transformations on that data. They show the processing of data and its flow through a sequence of processing steps, and help the analyst understand what is going on.
Advantages
-- Simple and easily understandable
-- Useful during analysis of requirements
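A state machine model, the second kind of behavioural model named above, can be sketched as a transition table mapping (state, event) pairs to next states. The states and events below (a simple microwave-style door/start controller) are illustrative assumptions, not taken from the notes.

```python
# Minimal state-machine sketch for a hypothetical door/start controller.
TRANSITIONS = {
    ("idle", "door_open"): "door_open",
    ("door_open", "door_close"): "idle",
    ("idle", "start"): "cooking",
    ("cooking", "timer_done"): "idle",
    ("cooking", "door_open"): "door_open",  # opening the door stops cooking
}

def step(state, event):
    """Return the next state; events with no defined transition are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "door_open", "door_close", "start", "timer_done"]:
    state = step(state, event)
print(state)  # "idle": the sequence ends back in the idle state
```

The table makes the model easy to check for completeness: every state/event pair either appears explicitly or is deliberately ignored.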
DATA MODELS
Used to describe the logical structure of data processed by the system. An entity-relation- attribute
model sets out the entities in the system, the relationships between these entities and the entity
attributes. Widely used in database design. Can readily be implemented using relational databases.
No specific notation provided in the UML but objects and associations can be used.
[Figure: library semantic model.]
OBJECT MODELS
An object-oriented approach is commonly used for interactive systems development. It expresses the system requirements using objects, and the system is developed in an object-oriented programming language such as C++.
An object class is an abstraction over a set of objects that identifies common attributes. Objects are instances of an object class; many objects may be created from a single class.
Analysis process -- Identifies objects and object classes.
Object class in UML -- Represented as a vertically oriented rectangle with three sections:
(a) The name of the object class in the top section
(b) The class attributes in the middle section
(c) The operations associated with the object class are in lower section.
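The three UML compartments described above (name, attributes, operations) map directly onto a class in code. A minimal sketch, using a hypothetical `Book` class from the library domain:

```python
# Hypothetical Book class mirroring the three UML compartments:
# name (Book), attributes (title, copies), operations (lend, return_copy).
class Book:
    def __init__(self, title, copies):
        # attributes: the middle compartment of the UML rectangle
        self.title = title
        self.copies = copies

    # operations: the lower compartment of the UML rectangle
    def lend(self):
        if self.copies == 0:
            raise ValueError("no copies available")
        self.copies -= 1

    def return_copy(self):
        self.copies += 1


# Objects are instances of the class; many can be created from one class.
b = Book("Software Engineering", 2)
b.lend()
print(b.copies)  # 1
```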
OBJECT-BEHAVIORAL MODEL
-- Shows the operations provided by the objects
-- Sequence diagram of UML can be used for behavioral modeling
UNIT III
DESIGN ENGINEERING
QUALITY GUIDELINES
• Uses recognizable architectural styles or patterns
• Modular; that is logically partitioned into elements or subsystems
• Distinct representation of data, architecture, interfaces and components
• Appropriate data structures for the classes to be implemented
• Independent functional characteristics for components
• Interfaces that reduce the complexity of connections
• Repeatable method
QUALITY ATTRIBUTES
FURPS quality attributes
• Functionality
* Feature set and capabilities of programs
* Security of the overall system
• Usability
* user-friendliness
* Aesthetics
* Consistency
* Documentation
• Reliability
* Evaluated by measuring the frequency and severity of failure
* MTTF
• Supportability
* Extensibility
* Adaptability
* Serviceability
DESIGN CONCEPTS
1. Abstractions
2. Architecture
3. Patterns
4. Modularity
5. Information Hiding
6. Functional Independence
7. Refinement
8. Re-factoring
9. Design Classes
ABSTRACTION
Software can be described at many levels of abstraction.
Highest level of abstraction: the solution is stated in broad terms using the language of the problem environment.
Lower levels of abstraction: a more detailed description of the solution is provided.
• Procedural abstraction -- Refers to a sequence of instructions that has a specific and limited function
• Data abstraction -- A named collection of data that describes a data object
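Both kinds of abstraction can be sketched in a few lines. The `Door`/`open_door` example below is illustrative (a common textbook pairing), not drawn from the notes.

```python
from dataclasses import dataclass

# Data abstraction: a named collection of data that describes a data object.
@dataclass
class Door:
    width_cm: float
    height_cm: float
    is_open: bool = False

# Procedural abstraction: "open" names a sequence of instructions with a
# specific, limited function; callers need not know the steps inside.
def open_door(door: Door) -> None:
    # ...unlatching, swinging, clearance checks would live here...
    door.is_open = True

d = Door(90, 210)
open_door(d)
print(d.is_open)  # True
```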
ARCHITECTURE
The structure or organization of program components (modules) and their interconnections.
Architecture models:
(a) Structural Models-- An organized collection of program components
(b) Framework Models-- Represents the design in more abstract way
(c) Dynamic Models-- Represents the behavioral aspects indicating changes as a function of
external events
(d). Process Models-- Focus on the design of the business or technical process
PATTERNS
Provides a description that enables a designer to determine the following:
(a) Whether the pattern is applicable to the current work
(b) Whether the pattern can be reused
(c) Whether the pattern can serve as a guide for developing a similar but functionally or structurally different pattern
MODULARITY
Divides software into separately named and addressable components, sometimes called modules, which are integrated to satisfy problem requirements. Consider two problems p1 and p2. If the perceived complexity of p1 is cp1 and that of p2 is cp2, the effort to solve p1 is proportional to cp1 and the effort to solve p2 to cp2. The perceived complexity of the combined problem is greater than the sum of the parts, c(p1 + p2) > cp1 + cp2, and hence the effort e(p1 + p2) > e(p1) + e(p2). This is the "divide and conquer" argument for modularity: software is easier to build in separately addressable pieces.
INFORMATION HIDING
Information contained within a module is inaccessible to other modules that have no need for it. This is achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve the software function. Information hiding provides the greatest benefit when modifications are required during testing and later maintenance: errors introduced during modification are less likely to propagate to other locations within the software.
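A minimal sketch of information hiding (the `AccountStore` name and its dictionary representation are assumptions for illustration): the internal data lives behind a narrow interface, so a later change of representation cannot ripple into client modules.

```python
# Information-hiding sketch: clients see only deposit() and balance();
# the dictionary is a hidden representation that could later be swapped
# for a database without touching any client code.
class AccountStore:
    def __init__(self):
        self._balances = {}   # hidden: not part of the module's interface

    def deposit(self, account, amount):
        self._balances[account] = self._balances.get(account, 0) + amount

    def balance(self, account):
        return self._balances.get(account, 0)


store = AccountStore()
store.deposit("alice", 100)
store.deposit("alice", 50)
print(store.balance("alice"))  # 150
```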
FUNCTIONAL INDEPENDENCE
A direct outgrowth of modularity, abstraction and information hiding. It is achieved by developing modules with a single-minded function and an aversion to excessive interaction with other modules. Independent modules are easier to develop and have simple interfaces. They are also easier to maintain, because secondary effects caused by design or code modification are limited, error propagation is reduced, and reusable modules become possible. Independence is assessed by two qualitative criteria:
(1) Cohesion -- a module performs a single task, requiring little interaction with other components
(2) Coupling -- a measure of interconnection among modules
Coupling should be low and cohesion should be high for a good design.
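The two criteria can be illustrated with a small assumed example: each function below is cohesive (it does one task), and the functions are loosely coupled because they share only plain data values, not each other's internals.

```python
# High cohesion, low coupling (illustrative example, not from the notes).

def parse_record(line):
    """Single task: turn 'name,score' text into a (name, score) pair."""
    name, score = line.split(",")
    return name.strip(), int(score)

def grade(score):
    """Single task: map a numeric score to a pass/fail result."""
    return "pass" if score >= 40 else "fail"

# The modules connect through a plain value (the score), so either one can
# be changed or reused independently.
name, score = parse_record("Ravi, 72")
print(name, grade(score))  # Ravi pass
```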
DESIGN CLASSES
Class represents a different layer of design architecture. Five types of Design Classes
1. User interface class -- Defines all abstractions that are necessary for human computer interaction
2. Business domain class -- Refinements of the analysis classes that identify the attributes and services needed to implement some element of the business domain
3. Process class -- implements lower level business abstractions required to fully manage the
business domain classes
4. Persistent class -- Represent data stores that will persist beyond the execution of the software
5. System class -- Implements management and control functions to operate and communicate within
the computer environment and with the outside world.
Software Architecture is not the operational software. It is a representation that enables a software
engineer to
• Analyze the effectiveness of the design in meeting its stated requirements
• Consider architectural alternatives at a stage when making design changes is still relatively easy
• Reduce the risks associated with the construction of the software
Why is architecture important? Three key reasons:
-- Representations of software architecture enable communication and understanding among all stakeholders
-- The architecture highlights early design decisions that will have a profound impact on the operational entity that is eventually created
-- The architecture constitutes a model of the software components and their interconnections
Data Design
The data design action translates data objects defined as part of the analysis model into data
structures at the component level and database architecture at application level when necessary.
ARCHITECTURAL STYLES
Describes a system category that encompasses:
(1) A set of components
(2) A set of connectors that enable communication and coordination among components
(3) Constraints that define how components can be integrated to form the system
(4) Semantic models that help a designer understand the overall properties of the system
Data-flow architectures
Shows the flow of input data, its computational components and the output data. The structure is also called pipe-and-filter: pipes provide the path for the flow of data, while filters manipulate the data and work independently of their neighbouring filters. If the data flow degenerates into a single line of transforms, it is termed batch sequential.
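A pipe-and-filter structure can be sketched with generators: the generator chain acts as the pipe, and each generator is a filter that works independently of its neighbours. The particular filters chosen here (strip, drop blanks, upper-case) are illustrative assumptions.

```python
# Pipe-and-filter sketch using Python generators (filters are illustrative).

def strip_lines(lines):
    for line in lines:
        yield line.strip()

def drop_blank(lines):
    for line in lines:
        if line:
            yield line

def upper(lines):
    for line in lines:
        yield line.upper()

raw = ["  hello ", "", " world  "]
# A single line of transforms like this is the batch-sequential form.
pipeline = upper(drop_blank(strip_lines(raw)))
print(list(pipeline))  # ['HELLO', 'WORLD']
```

Because each filter knows nothing about its neighbours, filters can be reordered, removed, or reused in other pipelines without modification.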
Call and return architectures
Achieves a structure that is easy to modify and scale.
Two sub styles
(1) Main program/sub program architecture
-- Classic program structure
-- Main program invokes a number of components, which in turn invoke still other components
Layered architectures
A number of different layers are defined Inner Layer (interface with OS)
• Intermediate Layer Utility services and application function) Outer Layer (User interface)
FIG: Layered
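The inner/intermediate/outer split can be sketched as functions in which each layer calls only the layer directly beneath it. The layer contents (a configuration value surfaced through a greeting) are illustrative assumptions.

```python
# Layered-architecture sketch: calls flow strictly downward, outer -> inner.

def os_read_config():          # inner layer: interface with the OS
    return {"greeting": "hello"}

def utility_get_greeting():    # intermediate layer: utility/application services
    return os_read_config()["greeting"]

def ui_show():                 # outer layer: user interface
    return utility_get_greeting().capitalize() + "!"

print(ui_show())  # Hello!
```

Because the user interface depends only on the utility layer, the inner layer can be replaced (say, reading a real config file) without touching the outer layers.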
ARCHITECTURAL PATTERNS
A template that specifies approach for some behavioral characteristics of the system Patterns are
imposed on the architectural styles
Pattern Domains
1.Concurrency
--Handles multiple tasks that simulate parallelism.
--Approaches (Patterns)
(a) Operating system process management pattern
(b) A task scheduler pattern
2. Persistence
--Data survives past the execution of the process
--Approaches (Patterns)
(a) Data base management system pattern
(b) Application Level persistence Pattern ( word processing software)
3. Distribution
-- Addresses the communication of system in a distributed environment
--Approaches (Patterns)
(a) Broker Pattern
-- Acts as middleman between client and server.
OBJECT-ORIENTED DESIGN: Objects and object classes, an object-oriented design process, design evolution.
PERFORMING USER INTERFACE DESIGN: Golden rules, user interface analysis and design, interface analysis, interface design steps, design evaluation.
System context and models of use: these specify the context of the system and the relationships between the software being designed and its external environment.
• If the system context is a static model, it describes the other systems in that environment.
• If the system context is a dynamic model, it describes how the system actually interacts with its environment.
System Architecture
Once the interactions between the software system being designed and its environment have been defined, this information can be used as the basis for designing the system architecture.
Object identification -- This process is concerned with identifying the object classes, which can be done by the following approaches:
1) Grammatical analysis of a natural-language description of the system
2) Identification of tangible entities in the application domain
3) A behavioural approach
4) A scenario-based approach
Design model
Design models are the bridge between the requirements and the implementation. There are two types of design models:
1) Static models describe the relationships between the objects.
2) Dynamic models describe the interactions between the objects.
Golden Rules
1. Place the user in control
2. Reduce the user’s memory load
3. Make the interface consistent
Make the interface consistent: allow the user to put the current task into a meaningful context, and maintain consistency across a family of applications. If past interactive models have created user expectations, do not make changes unless there is a compelling reason to do so.
Interface analysis
- Understanding the users who interact with the system, based on their skill levels (i.e., requirement gathering).
- The tasks the user performs to accomplish the goals of the system are identified, described and elaborated.
- Analysis of the work environment.
Interface design
In interface design, all interface objects and actions that enable a user to perform all desired tasks are defined.
Implementation
A prototype is initially constructed and then later user interface development tools may be used to
complete the construction of the interface.
• Validation
The correctness of the system is validated against the user requirements.
Interface Analysis
Interface analysis means understanding
– (1) the people (end-users) who will interact with the system through the interface;
– (2) the tasks that end-users must perform to do their work;
– (3) the content that is presented as part of the interface; and
– (4) the environment in which these tasks will be conducted.
User Analysis
• Are users trained professionals, technicians, clerical, or manufacturing workers?
• What level of formal education does the average user have?
• Are the users capable of learning from written materials or have they expressed a desire
for classroom training?
• Are users expert typists or keyboard phobic?
• What is the age range of the user community?
• Will the users be represented predominately by one gender?
• How are users compensated for the work they perform?
• Do users work normal office hours or do they work until the job is done?
– Tables
– Direct data manipulation
– Navigation
– Searching
– Page elements
– e-Commerce
Design Issues
• Response time
• Help facilities
• Error handling
• Menu and command labeling
• Application accessibility
• Internationalization
UNIT IV
TESTING STRATEGIES
Software is tested to uncover errors introduced during design and construction. Testing often accounts for more project effort than any other software engineering activity, so it has to be done carefully, using a testing strategy.
The strategy is developed by the project manager, software engineers and testing specialists.
Testing is the process of executing a program with the intention of finding errors. It involves around 40% of the total project cost.
A testing strategy provides a road map that describes the steps to be conducted as part of testing. It should incorporate test planning, test case design, test execution, and resultant data collection and evaluation.
Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to the customer requirements.
V&V encompasses a wide array of Software Quality Assurance activities.
Testing Strategy
Testing can be done by the software developer and by an independent testing group. Testing and debugging are different activities; debugging follows testing.
Low-level tests verify small code segments. High-level tests validate major system functions against customer requirements.
Unit testing begins at the vortex of the spiral and concentrates on each unit of the software in source code.
It uses testing techniques that exercise specific paths in a component and its control structure to ensure complete coverage and maximum error detection. It focuses on the internal processing logic and data structures. Test cases should uncover errors.
Boundary testing should also be done, as software usually fails at its boundaries. Unit tests can be conducted in parallel for multiple components.
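A small unit-test sketch of boundary testing; discount() is a hypothetical unit under test, and the boundaries of interest are the amount at which the discount starts and the zero lower bound:

```python
import unittest

def discount(amount):
    """Hypothetical unit under test: 10% off (integer arithmetic) for amounts of 100 or more."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * 9 // 10 if amount >= 100 else amount

class DiscountBoundaryTest(unittest.TestCase):
    """Exercise values at and immediately around each boundary."""
    def test_just_below_boundary(self):
        self.assertEqual(discount(99), 99)    # no discount yet
    def test_at_boundary(self):
        self.assertEqual(discount(100), 90)   # discount starts exactly here
    def test_just_above_boundary(self):
        self.assertEqual(discount(101), 90)   # 101 * 9 // 10
    def test_invalid_side_of_lower_boundary(self):
        with self.assertRaises(ValueError):
            discount(-1)                      # just past the 0 boundary

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountBoundaryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Note that every test value sits at or next to a boundary (-1, 99, 100, 101), which is where faults most often hide.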
Software Engineering Page 45
Department of CI III Year/I Sem
Integration testing: In this the focus is on design and construction of the software architecture. It
addresses the issues associated with problems of verification and program construction by testing
inputs and outputs. Though modules function independently problems may arise because of
interfacing. This technique uncovers errors associated with interfacing. We can use top-down
integration wherein modules are integrated by moving downward through the control hierarchy,
beginning with the main control module. The other strategy is bottom –up which begins construction
and testing with atomic modules which are combined into clusters as we move up the hierarchy. A
combined approach called Sandwich strategy can be used i.e., top- down for higher level modules
and bottom-up for lower level modules.
Validation Testing: Through validation testing, requirements are validated against the software that has been constructed. These are high-order tests where validation criteria must be evaluated to ensure that the software meets all functional, behavioural and performance requirements. Validation succeeds when the software functions in a manner that can reasonably be expected by the customer.
1) Validation Test Criteria 2) Configuration Review 3) Alpha and Beta Testing
The validation criteria described in the SRS form the basis for this testing. Here, alpha and beta testing are performed. Alpha testing is performed at the developer's site by end users, in a controlled environment. Beta testing is conducted at end-user sites; it is a "live" application and the environment is not controlled by the developer.
The end user records all problems and reports them to the developer, who then makes modifications and releases the product.
System Testing: In system testing, s/w and other system elements are tested as a whole. This is the
last high-order testing step which falls in the context of computer system engineering. Software is
combined with other system elements like H/W, People, Database and the overall functioning is
checked by conducting a series of tests. These tests fully exercise the computer based system. The
types of tests are:
1. Recovery testing: Systems must recover from faults and resume processing within a
prespecified time.
It forces the system to fail in a variety of ways and verifies that recovery is properly performed. Here the Mean Time To Repair (MTTR) is evaluated to see whether it is within acceptable limits.
2. Security Testing: This verifies that protection mechanisms built into a system will protect it from improper penetration. The tester plays the role of a hacker. In reality, given enough resources and time, it is possible to ultimately penetrate any system. The role of the system designer is to make the cost of penetration greater than the value of the information that would be obtained.
3. Stress testing: It executes a system in a manner that demands resources in abnormal quantity,
frequency or volume and tests the robustness of the system.
4. Performance Testing: This is designed to test the run-time performance of s/w within the
context of an integrated system. They require both h/w and s/w instrumentation.
Testing Tactics:
The goal of testing is to find errors, and a good test is one that has a high probability of finding an error.
A good test is not redundant, and it should be neither too simple nor too complex. There are two major categories of software testing:
Black box testing: It examines some fundamental aspect of a system and tests whether each function of the product is fully operational.
White box testing: It examines the internal operations of a system and the procedural detail.
1) Graph-based testing: A graph of objects (nodes) and links (the relationships between objects) is created, and test cases are designed to cover the graph.
2) Equivalence partitioning: This divides the input domain of a program into classes of data from which test cases can be derived. Test cases are defined to uncover classes of errors, so that the number of test cases is reduced. The technique is based on equivalence classes, each of which represents a set of valid or invalid states for input conditions. It reduces the cost of testing.
Example
If the input consists of values from 1 to 10, then the equivalence classes are n < 1, 1 <= n <= 10 and n > 10.
Choose one valid class with a value within the allowed range, and two invalid classes where the values are greater than the maximum value and smaller than the minimum value.
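The 1-to-10 example above can be sketched as a tiny test driver; accept() is a hypothetical unit under test:

```python
# Equivalence partitioning sketch for the notes' example: valid input is 1..10.
# The input domain splits into three classes: n < 1, 1 <= n <= 10, n > 10.
# One representative test value per class covers the whole partition.

def accept(n):
    """Hypothetical unit under test: accepts only values in the valid range."""
    return 1 <= n <= 10

# One representative per equivalence class instead of testing every integer.
cases = {
    "invalid_below": (0, False),   # class n < 1
    "valid":         (5, True),    # class 1 <= n <= 10
    "invalid_above": (11, False),  # class n > 10
}

for name, (value, expected) in cases.items():
    assert accept(value) == expected, name
print("all", len(cases), "equivalence classes covered")
```

Three test cases replace eleven-plus exhaustive ones, which is exactly the cost reduction the technique promises.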
3) Boundary value analysis: This complements equivalence partitioning by selecting test cases at the edges of each class.
Example
If 0.0 <= x <= 1.0, then the test cases are (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input.
4) Orthogonal array testing:
Example
Three inputs A, B and C, each having three values, would require 27 test cases if tested exhaustively. Orthogonal array testing reduces the number of test cases to 9 while still covering every pairwise combination of input values.
b) Data flow testing
This selects test paths according to the locations of definitions and uses of variables in a program. It aims to ensure that the definition of each variable and its subsequent use is tested.
First, construct a definition-use graph from the control flow of the program.
DEF (definition): definition of a variable on the left-hand side of an assignment statement.
USE: computational use of a variable, e.g. in a read or write, or a variable on the right-hand side of an assignment statement.
Every DU chain should be tested at least once.
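A small illustration of DEF and USE occurrences and the DU chains they form; the function itself is hypothetical:

```python
# Data flow testing sketch: DEF and USE occurrences of the variable `total`.
# A DU chain pairs a definition of a variable with a later use of it;
# data flow testing requires every DU chain to be exercised at least once.

def running_total(values):
    total = 0                # DEF of total
    for v in values:
        total = total + v    # USE of total (right-hand side), then DEF (left-hand side)
    return total             # USE of total

# Two test cases that together exercise both DU chains of `total`:
#   DEF at "total = 0"   -> USE at "return total"   (empty list: loop body skipped)
#   DEF inside the loop  -> USE at "return total"   (non-empty list)
assert running_total([]) == 0
assert running_total([1, 2, 3]) == 6
print("both DU chains exercised")
```

A statement-coverage suite with only the non-empty case would miss the first chain; data flow testing forces the empty-list case too.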
c) Loop testing
This focuses on the validity of loop constructs. Four categories of loops can be defined: simple loops, nested loops, concatenated loops and unstructured loops.
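For a simple loop, the usual guideline is to exercise 0, 1, 2, a typical number, and the maximum number of passes through the loop. A sketch, where sum_first() and its pass limit are hypothetical:

```python
# Loop testing sketch for a simple loop: exercise 0, 1, 2, a typical number,
# and the maximum number of passes through the loop.

MAX = 5  # hypothetical maximum number of loop passes

def sum_first(values, limit=MAX):
    """Unit under test: sums at most `limit` leading values."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:
            break
        total += v
    return total

data = [10, 20, 30, 40, 50, 60]
assert sum_first([]) == 0            # 0 passes: loop skipped entirely
assert sum_first(data[:1]) == 10     # exactly 1 pass
assert sum_first(data[:2]) == 30     # exactly 2 passes
assert sum_first(data[:4]) == 100    # a typical number of passes
assert sum_first(data) == 150        # limit reached: the maximum 5 passes
print("simple-loop cases pass")
```

Nested loops apply the same idea to the innermost loop first while holding the outer loops at their minimum iteration counts.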
Debugging occurs as a consequence of successful testing. It is an action that results in the removal
of errors.
It is very much an art.
Brute Force: The most common and least efficient method for isolating the cause of a software error; it is applied when all else fails. Memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements. The developer tries to find the cause in this mass of information, which often leads to wasted time and effort.
Cause Elimination: Based on the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. A "cause hypothesis" is devised, and the data are used to prove or disprove it. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each one.
Automated Debugging: This supplements the above approaches with debugging tools that provide semi-automated support, such as debugging compilers, dynamic debugging aids, test case generators and mapping tools.
Regression Testing: When a new module is added as part of integration testing, the software changes. This may cause problems with functions that worked properly before. Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. It ensures that changes do not introduce unintended behaviour or additional errors. It can be done manually or with automated tools.
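A minimal sketch of an automated regression suite: earlier test cases are recorded as input/expected-output pairs and re-executed after every change (area() and the recorded cases are hypothetical):

```python
# Regression testing sketch: re-run a recorded subset of earlier test cases
# after a change, to confirm the change has not broken existing behaviour.

def area(width, height):
    """Function after a (hypothetical) change; it must still satisfy the old tests."""
    return width * height

# Regression suite: (inputs, expected output) pairs captured from earlier test runs.
regression_suite = [
    ((2, 3), 6),
    ((5, 5), 25),
    ((0, 7), 0),
]

failures = [(args, exp) for args, exp in regression_suite if area(*args) != exp]
assert not failures, f"regression detected: {failures}"
print(f"{len(regression_suite)} regression cases re-executed, 0 failures")
```

Because the suite is just data plus a loop, it is cheap to re-run automatically on every change, which is the point of regression testing.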
Risk Management
A risk is an undesired event or circumstance that may occur while a project is underway. It is necessary for the project manager to anticipate and identify the different risks that a project may be susceptible to. Risk management aims at reducing the impact of all kinds of risk that may affect a project.
Software Risk
It involves two characteristics:
Uncertainty: The risk may or may not happen.
Loss: If the risk becomes a reality, unwanted loss or consequences will occur.
Software risks include:
1) Project Risk 2) Technical Risk 3) Business Risk 4) Known Risk 5) Unpredictable Risk 6) Predictable Risk
Project risk: Threatens the project plan and affects the schedule and resultant cost.
Technical risk: Threatens the quality and timeliness of the software to be produced.
Business risk: Threatens the viability of the software to be built.
Known risk: Risks that can be uncovered by careful evaluation.
Predictable risk: Risks that are identified from past project experience.
Unpredictable risk: Risks that occur but may be extremely difficult to identify in advance.
Risk Identification
This is concerned with the identification of risks.
Step 1: Identify all possible risks.
Step 2: Create an item check list.
Step 3: Categorize into risk components: performance risk, cost risk, support risk and schedule risk.
Step 4: Rate each risk in one of four categories: Negligible (0), Marginal (1), Critical (2), Catastrophic (3).
Risk identification considers: product size, business impact, development environment, process definition, customer characteristics, technology to be built, and staff size and experience.
Risk Projection
Also called risk estimation, this estimates the impact of each risk on the project and the product. Estimation is done using a risk table. Risk projection addresses risk in two ways: the likelihood (probability) that the risk is real, and the consequences of the problems associated with the risk, should it occur.
Risk                                      Category   Probability   Impact   RMMM
Size estimate may be significantly low    PS         60%           2
Larger no. of users than planned          PS         30%           3
Risk Projection
Steps in Risk projection
1. Estimate the probability of occurrence Li for each risk.
2. Estimate the consequence Xi.
3. Estimate the impact.
4. Draw the risk table.
Ignore the risks where management concern is low, i.e., risks having high or low impact with a low probability of occurrence.
Consider all risks where management concern is high, i.e., high impact with a high or moderate probability of occurrence, or low impact with a high probability of occurrence.
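Risk exposure is commonly computed as RE = P x C, where P is the probability of occurrence and C is the cost should the risk occur. A sketch using the two risks from the risk table above, with hypothetical cost figures chosen only for illustration:

```python
# Risk projection sketch: risk exposure RE = P x C
# (P = probability of occurrence, C = cost/consequence if the risk occurs).
# The cost figures below are hypothetical.

risks = [
    # (risk, probability, cost if it occurs)
    ("Size estimate may be significantly low", 0.60, 20_000),
    ("Larger no. of users than planned",       0.30, 10_000),
]

for name, p, cost in risks:
    exposure = p * cost
    print(f"{name}: RE = {p:.0%} x {cost:,} = {exposure:,.0f}")

# Risks with the highest exposure deserve the most management attention.
total = sum(p * cost for _, p, cost in risks)
print(f"total risk exposure: {total:,.0f}")
```

Summing the exposures over all risks gives a rough budget figure for the contingency reserve a project may need.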
Risk Projection
The impact of each risk is assessed by impact values: Catastrophic (1), Critical (2), Marginal (3), Negligible (4).
Risk Refinement
Also called risk assessment, this refines the risk table by reviewing the risk impact based on the following three factors:
a. Nature: the likely problems if the risk occurs.
b. Scope: just how serious is it?
c. Timing: when and for how long will the impact be felt?
UNIT – V
QUALITY CONCEPTS
Variation control is the heart of quality control.
From one project to another, we want to minimize the difference between the predicted resources needed to complete a project and the actual resources used, including staff, equipment, and calendar time.
Quality of design
Refers to the characteristics that designers specify for the end product.
Quality of conformance
The degree to which the design specifications are followed in manufacturing the product.
Quality control
Series of inspections, reviews, and tests used to ensure conformance of a work
product to itsspecifications
Quality assurance
Consists of a set of auditing and reporting functions that assess the effectiveness and
completeness of quality control activities
COST OF QUALITY
Prevention costs
Quality planning, formal technical reviews, test equipment, training.
Appraisal costs
In-process and inter-process inspection, equipment calibration and maintenance, testing.
Failure costs
Internal failure costs: rework, repair, failure mode analysis.
External failure costs: complaint resolution, product return and replacement, help line support, warranty work.
Software Quality Assurance
Software quality assurance (SQA) is the concern of every software engineer to
reduce cost and improve product time-to-market.
A Software Quality Assurance Plan is not merely another name for a test plan,
though test plans are
included in an SQA plan.
SQA activities are performed on every software project.
Use of metrics is an important part of developing a strategy to improve the quality of
both software processes and work products.
SQA Activities
Prepare SQA plan for the project.
Participate in the development of the project's software process description.
Review software engineering activities to verify compliance with the defined
software process.
Audit designated software work products to verify compliance with those defined
as part of the software process.
Ensure that any deviations in software or work products are documented and
handled according to a documented procedure.
Record any evidence of noncompliance and report it to management.
SOFTWARE REVIEWS
Purpose is to find errors before they are passed on to another software engineering
activity or released to the customer.
Software engineers (and others) conduct formal technical reviews (FTRs) for
software quality assurance.
Using formal technical reviews (walkthroughs or inspections) is an effective means for
improving software quality.
The project leader contacts a review leader, who evaluates the product for readiness, generates copies of the product material and distributes them to two or three reviewers for advance preparation.
Each reviewer is expected to spend between one and two hours reviewing the product and making notes.
The review leader also reviews the product and establishes an agenda for the review meeting.
The review meeting is attended by the review leader, all reviewers and the producer.
One of the reviewers acts as a recorder, who notes down all important points discussed in the meeting.
The meeting (FTR) is started by introducing the agenda, and then the producer introduces his product. As the producer "walks through" the product, the reviewers raise issues which they have prepared in advance.
If errors are found, the recorder notes them down.
The review issues list serves two purposes:
1. To identify problem areas in the product.
2. To serve as an action item checklist that guides the producer as corrections are made.
Review Guidelines
Review the product, not the producer.
Set an agenda and maintain it.
Limit debate and rebuttal.
Enunciate problem areas, but don't attempt to solve every problem noted.
Take written notes.
Limit the number of participants and insist upon advance preparation.
Develop a checklist for each product that is likely to be reviewed.
Allocate resources and schedule time for FTRs.
Conduct meaningful training for all reviewers.
Review your early reviews.
Software Defects
Industry studies suggest that design activities introduce 50-65% of all defects or errors during the software process.
Review techniques have been shown to be up to 75% effective in uncovering design flaws, which ultimately reduces the cost of subsequent activities in the software process.
Using the Pareto principle (80% of the defects can be traced to 20% of the causes), isolate the "vital few" defect causes.
Then move to correct the problems that caused the defects in the "vital few".
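A sketch of a Pareto analysis over a hypothetical set of defect records, isolating the causes that account for roughly 80% of the defects:

```python
# Pareto analysis sketch: group defects by cause and isolate the "vital few"
# causes responsible for most defects. The defect counts are hypothetical.

from collections import Counter

defect_causes = (["incomplete spec"] * 40 + ["design flaw"] * 35 +
                 ["coding error"] * 15 + ["bad test data"] * 10)

counts = Counter(defect_causes)
total = sum(counts.values())

# Walk causes from most to least common until ~80% of defects are covered.
covered, vital_few = 0, []
for cause, n in counts.most_common():
    if covered >= 0.8 * total:
        break
    vital_few.append(cause)
    covered += n

print("vital few causes:", vital_few)  # ['incomplete spec', 'design flaw', 'coding error']
```

Correcting just these few causes addresses 90 of the 100 recorded defects in this example, which is why the analysis pays off.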
SOFTWARE RELIABILITY
Defined as the probability of failure free operation of a computer program in a
specified environment for a specified time period
Can be measured directly and estimated using historical and developmental data
Software reliability problems can usually be traced back to errors in design or
implementation.
Measures of Reliability
Mean time between failures: MTBF = MTTF + MTTR
where MTTF = mean time to failure and MTTR = mean time to repair.
Availability = [MTTF / (MTTF + MTTR)] x 100%
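The two measures can be computed directly; the 950-hour and 50-hour figures below are hypothetical:

```python
# Reliability measures sketch: MTBF = MTTF + MTTR and percentage availability.

def mtbf(mttf, mttr):
    """Mean time between failures."""
    return mttf + mttr

def availability(mttf, mttr):
    """Percentage of time the system is operational."""
    return mttf / (mttf + mttr) * 100

# Example: fails on average after 950 hours of operation, takes 50 hours to repair.
print(mtbf(950, 50))          # 1000
print(availability(950, 50))  # 95.0
```

Note that availability rewards a short repair time: halving MTTR raises availability even if the failure rate is unchanged.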
3. Document Review and Adequacy Audit: During this stage, the registrar reviews the documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has complied with the suggestions made during the review.
5. Registration: The Registrar awards the ISO certification after the successful
completion of all the phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.