Software Engineering Coursework 4
JINJA CAMPUS
NAME : KUER MATUOR MICHEAL
REG NO : 20FE/KUJ/BCS/918R
YEAR : TWO
SEMESTER : TWO
Questions
1. Discuss the software verification and validation model.
2. Write short notes on the following terms as used in software verification and validation:
I. Inspection
II. Walkthrough
III. Buddy checks
IV. Code validation
V. Integration testing
VI. Functional testing
VII. User acceptance testing
VIII. User requirements
IX. System requirements
X. Non-functional requirements
3. A) Define compatibility testing.
B) Explain the usefulness of compatibility testing.
C) What are the elements in a computing environment?
D) Write a short note on the capability maturity model (CMM).
4. A) What is software quality assurance (SQA)?
B) Explain the concept of standards and procedures in SQA.
C) Discuss software quality assurance activities.
Software Engineering Overview
Types of software
Software can be categorized into three major types namely system software,
programming software and application software.
System software
System software helps to run the computer hardware and the entire computer
system. It includes the following:
Device drivers
Operating systems
Utilities
Windowing systems
The function of systems software is to insulate the applications programmer from the details of the particular computer complex being used, including such peripheral devices as communications, printers, readers, displays and keyboards, and also to partition the computer's resources, such as memory and processor time, in a safe and stable manner.
Programming software
Programming software offers tools that assist a programmer in writing programs and other software in different programming languages in a more convenient way. The tools include:
Compilers
Debuggers
Interpreters
Linkers
Text editors
Application software
Application software is the class of software that the user of a computer needs in order to accomplish one or more definite tasks. Common application software includes the following:
Industrial automation
Business software
Computer games
Quantum chemistry and solid-state physics software
Telecommunications (i.e., the internet and everything that flows on it)
Databases, etc.
Software Verification
Software verification is the process of evaluating the intermediary (mediator) products of a development phase to check whether they meet the requirements set at the start of that phase.
Now the question here is: what are the intermediary or mediator products?
Mediator products are documents which are produced during the development phases, like requirements specifications, design documents, database table designs, ER diagrams, test cases, the traceability matrix, etc.
Verification ensures that the system (software, hardware, documentation, and personnel) complies with an organization's standards and processes, relying on reviews or other non-executable methods.
Software Validation
Software validation is the process of evaluating the final product to check whether the software meets the business needs. In simple words, the test execution which we do in our daily life is actually the validation activity, which includes smoke testing, functional testing, regression testing, system testing, etc.
The Software Verification and Validation Model is the software development life cycle (SDLC) model where the execution of processes happens in a sequential manner, in a V-shape. It is also known as the V-Model.
The V-shape is an extension of the waterfall model and is based on the association of a testing phase with each corresponding development stage. This means that for every single phase in the development cycle, there is a directly associated testing phase.
Under the V-model design, the corresponding testing phase of the development
phase is planned in parallel.
There are Verification phases on one side of the 'V' and Validation phases on the other side.
The coding phase joins the two sides of the V-model.
V-Model Verification Phases
There are several Verification phases in the V-Model, each of which is explained in detail below.
Business Requirement Analysis
This is the first phase in the development cycle, where the product requirements are understood from the customer's perspective. This phase involves detailed communication with the customer to understand their expectations and exact requirements. This is a very important activity and needs to be managed well, as most customers are not sure about what exactly they need. Acceptance test design planning is done at this stage, as the business requirements can be used as an input for acceptance testing.
System Design
Once you have clear and detailed product requirements, it is time to design the complete system. The system design involves understanding and detailing the complete hardware and communication setup for the product under development. The system test plan is developed based on the system design.
Architectural Design
Architectural specifications are understood and designed in this phase. Usually more
than one technical approach is proposed and based on the technical and financial
feasibility, the final decision is taken. The system design is broken further into modules
taking up different functionality. This is also known as High Level Design (HLD).
The data transfer and communication between the internal modules and with the outside
world (other systems) is clearly understood and defined in this stage. With this
information, integration tests can be designed and documented during this stage.
Module design
In this phase, the detailed internal design for all the system modules is specified, referred
to as Low Level Design (LLD). It is important that the design is compatible with other
modules in the system architecture and the other external systems. Unit tests can be
designed at this stage based on the internal module designs.
Coding Phase
The actual coding of the system modules designed in the design phase is taken up in the
coding phase. The most suitable programming language is decided based on the system
and architectural requirements. The coding is performed based on the coding guidelines
and standards. The code goes through numerous code reviews and is optimized for best
performance before the final build is checked into the repository.
V-Model Validation Phases
The different Validation phases in the V-Model are explained in detail below.
Unit Testing
Unit tests designed in the module design phase are executed on the code during this validation phase. Unit testing is testing at the code level and helps eliminate bugs at an early stage, though not all defects can be uncovered by unit testing.
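As a minimal illustrative sketch (the apply_discount function and its discount rules are invented for this example, not taken from the coursework material), a unit test at the code level might look like this in Python:

import unittest


def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Unit tests exercise a single unit of code in isolation."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()

Each test exercises one unit in isolation, which is why defects found here are cheap to fix, while faults in the interaction between modules only surface later, during integration testing.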
Integration Testing
Integration testing is associated with the architectural design phase. Integration tests are
performed to test the coexistence and communication of the internal modules within the
system.
System testing
System testing is directly associated with the system design phase. System tests check
the entire system functionality and the communication of the system under development
with external systems. Most of the software and hardware compatibility issues can be
uncovered during this system test execution.
Acceptance testing
Acceptance testing is associated with the business requirement analysis phase and
involves testing the product in the user environment. Acceptance tests uncover the
compatibility issues with the other systems available in the user environment.
(2) Below are short notes on the terms as used in software verification and validation:
(i) Inspection
Inspection is defined as a formal, rigorous, in-depth group review designed to identify problems as close to their point of origin as possible. It is a peer review of any work product by trained individuals, who look for defects in the software product using a well-defined process.
Characteristics of inspection
Inspection is usually led by a trained moderator, who is not the author. The moderator's role is to facilitate a peer examination of the document.
Inspection is most formal and driven by checklists and rules.
This review process makes use of entry and exit criteria.
It is essential to have a pre-meeting preparation.
Inspection report is prepared and shared with the author for appropriate actions.
The aim of inspection is NOT only to identify defects but also to bring about process improvement.
(ii) Walkthroughs: A walkthrough is a software peer review in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems.
Participants of walkthrough
Author- The author of the document under review.
Presenter- The presenter usually develops the agenda for the walkthrough and
presents the output being reviewed.
Moderator- The moderator facilitates the walkthrough session, ensures the walkthrough agenda is followed, and encourages all the reviewers to participate.
Reviewers-The reviewers evaluate the document under test to determine if it is
technically accurate.
Scribe- The scribe is the recorder of the structured walkthrough outcomes, who records the issues identified and any other technical comments, suggestions, and unresolved questions.
Benefits of walkthrough:
Below are the benefits of a walkthrough:
Walkthroughs save time and money, as defects are found and rectified very early in the lifecycle.
Walkthroughs provide value-added comments from reviewers with different technical backgrounds and experiences.
It notifies the project management team about the progress of the development process.
It creates awareness about different development or maintenance methodologies, which can provide professional growth to participants.
(iii) Buddy Checks- Buddy checks, or buddy testing, is a type of software testing where two people test the same features of an application at the same place, at the same time, on the same code, and on the same machine, exchanging ideas. With this technique, more ideas are generated, which gives better test results. This type of test involves two members: one from the development team and one from the testing team. Both individuals work together on the same modules, sharing ideas and uncovering defects and bugs in the application.
(iv) Code validation is the process of checking whether the code is up to the mark. It checks that what we are developing is the right product.
(v) Integration Testing- Integration testing is a type of testing meant to check the combinations of different units, their interactions, the way subsystems unite into one common system, and code compliance with the requirements. For example, when we check the login and sign-up features of an e-commerce app in isolation, we view them as separate units; an integration test then checks that a user who signs up can subsequently log in.
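Continuing the e-commerce example, the sketch below is purely hypothetical (the UserService class is a tiny in-memory stand-in, not a real application); it shows how an integration test exercises sign-up and login together rather than as isolated units:

import unittest


class UserService:
    """A tiny in-memory stand-in for the sign-up and login features of an e-commerce app."""

    def __init__(self):
        self._users = {}

    def sign_up(self, email, password):
        if email in self._users:
            raise ValueError("account already exists")
        self._users[email] = password

    def login(self, email, password):
        return self._users.get(email) == password


class SignUpLoginIntegrationTest(unittest.TestCase):
    """Integration tests: the two features are exercised together, not as isolated units."""

    def test_user_can_login_after_signing_up(self):
        service = UserService()
        service.sign_up("buyer@example.com", "s3cret")
        self.assertTrue(service.login("buyer@example.com", "s3cret"))

    def test_login_fails_with_wrong_password(self):
        service = UserService()
        service.sign_up("buyer@example.com", "s3cret")
        self.assertFalse(service.login("buyer@example.com", "wrong"))


if __name__ == "__main__":
    unittest.main()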
(vi) Functional testing- Functional testing is a kind of black-box testing that is performed to confirm that the functionality of an application or system behaves as expected. It is done to verify all the functionalities of the application.
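As a hedged sketch of black-box functional testing (the small calculator program and its requirement are invented for illustration), the test below drives the application only through its command-line interface and checks the observable output against the stated requirement, with no knowledge of the internals:

import os
import subprocess
import sys
import tempfile
import textwrap
import unittest

# The program under test is written to a temporary file so the example is
# self-contained; in practice it would be the real, separately built application.
CALC_SOURCE = textwrap.dedent("""
    import sys
    if sys.argv[1] == "add":
        print(int(sys.argv[2]) + int(sys.argv[3]))
""")


class CalculatorFunctionalTest(unittest.TestCase):
    """Black-box checks: only inputs and observable outputs are used."""

    def setUp(self):
        self.tmpdir = tempfile.TemporaryDirectory()
        self.calc_path = os.path.join(self.tmpdir.name, "calc.py")
        with open(self.calc_path, "w") as handle:
            handle.write(CALC_SOURCE)

    def tearDown(self):
        self.tmpdir.cleanup()

    def test_add_prints_the_sum(self):
        # Requirement (invented for this sketch): "calc add X Y shall print the sum of X and Y".
        result = subprocess.run(
            [sys.executable, self.calc_path, "add", "2", "3"],
            capture_output=True, text=True, check=True,
        )
        self.assertEqual(result.stdout.strip(), "5")


if __name__ == "__main__":
    unittest.main()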
(ix) System requirements: These are more detailed descriptions of the software system's functions, services, and operational constraints. The system requirements document (sometimes called a functional specification) should define exactly what is to be implemented. It may be part of the contract between the system buyer and the software developers. System requirements are a more detailed description of the functionality to be provided.
These two types of compatibility testing will also include several, more specific
categories of testing.
(c) The elements in a computing environment include the following:
Computer machinery- This is any machinery (most of which uses digital circuits) that assists in the software development, input, processing, storage, and output activities of an information system. These include the monitor, CPU, keyboard, mouse, hard disk drive, etc.
Data storage devices- These include the hard disk drive, floppy disk, and compact disk, to mention a few.
Workstations- A workstation is a high-performance computer system that is basically designed for a single user and has advanced graphics capabilities, large storage capacity, and a powerful central processing unit.
Software applications- Application software consists of programs that help users solve particular computing problems.
Networks- These are the communication media, devices, and software needed to connect two or more computer systems and/or devices.
(d) A short note on the capability maturity model (CMM).
The capability maturity model (CMM) is a methodology used to develop and refine an organization's software development process. Organizational experience with the systems development process is a key factor in systems development success. The capability maturity model (CMM) is one way to measure this experience.
(Figure: the five maturity levels of the CMM, rising from Initial (1), through Repeatable (2) with its disciplined process, Defined (3), and Managed (4), to Optimized (5).)
1. Initial. This level is typical of organizations inexperienced with software and systems development. This level often has an ad hoc or even chaotic development process.
2. Repeatable. The second level tracks development costs, schedules, and functionality. The discipline to repeat previous systems development successes is in place.
3. Defined. At the third level, organizations use documented and defined procedures. All projects done by the organization use these standardized approaches to develop software and systems. Programming standards are often used at this level.
4. Managed. At this level, organizations use detailed measures of the systems development process to help manage it and improve software and systems quality.
5. Optimized. This is the highest level of experience and maturity. Continuous improvement is used to strengthen all aspects of the systems development process. Organizations at this level often initiate innovative projects to optimize all aspects of the systems development effort.
The CMM model has been popular around the world, and SEI certifies organizations as
being at one of the five levels. Any organization can seek certification, and many
computer-consulting companies attempt to be certified at the highest level
(optimization).
Wipro GE Medical, for example, received level 5 certification. The company develops advanced medical software for computerized tomography (CT) scanners, magnetic resonance imaging (MRI) devices, and other medical equipment.
(4) (a) Software quality assurance (SQA) is a means and practice of monitoring the software engineering processes and methods used in a project to ensure proper quality of the software. It may include ensuring conformance to standards or models, such as ISO/IEC 9126 (now superseded by ISO 25010), SPICE, or CMM.
(b) The concepts of standards and procedures in software quality assurance (SQA) are explained as follows:
Design Standards specify the form and content of the design product. They provide
rules and methods for translating the software requirements into the software design
and for representing it in the design documentation.
Code Standards specify the language in which the code is to be written and define any restrictions on the use of language features. They define legal language structures, style conventions, rules for data structures and interfaces, and internal code documentation; a brief illustrative sketch is given after this list.
Procedures are explicit steps to be followed in carrying out a process. All processes
should have documented procedures. Examples of processes for which procedures are
needed are configuration management, non-conformance reporting and corrective action,
testing, and formal inspections.
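To make the idea of a code standard concrete, the sketch below shows a small Python module written against a hypothetical project standard; the standard itself and its rule numbers (C-1 to C-4) are invented for illustration and are not taken from any real project or NASA document:

"""inventory.py -- written against a hypothetical project code standard.

The (invented) standard requires: module and function docstrings (rule C-1),
snake_case names (rule C-2), type hints on public interfaces (rule C-3),
and no bare `except` clauses (rule C-4).
"""


def reserve_stock(item_code: str, quantity: int) -> bool:
    """Reserve `quantity` units of `item_code`; return True on success (rules C-1 to C-3)."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    try:
        return _update_ledger(item_code, -quantity)
    except KeyError:  # rule C-4: catch specific exceptions only
        return False


def _update_ledger(item_code: str, delta: int) -> bool:
    """Internal helper; private names take a leading underscore (rule C-2)."""
    ledger = {"SKU-001": 10}  # stand-in data for the sketch
    if item_code not in ledger:
        raise KeyError(item_code)
    ledger[item_code] += delta
    return ledger[item_code] >= 0

A documented standard of this kind is what SQA product evaluation later checks the delivered code against.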
If developed according to the NASA DID, the Management Plan describes the software
development control processes, such as configuration management, for which there have to
be procedures, and contains a list of the product standards.
Standards are to be documented according to the Standards and Guidelines DID in the
Product Specification. The planning activities required to assure that both products and
processes comply with designated standards and procedures are described in the QA
portion of the Management Plan.
(c) Software quality assurance activities include product evaluation, process monitoring, and audits.
Product evaluation is an SQA activity that assures standards are being followed. Ideally, the
first products monitored by SQA should be the project's standards and procedures. SQA
assures that clear and achievable standards exist and then evaluates compliance of the
software product to the established standards. Product evaluation assures that the software
product reflects the requirements of the applicable standard(s) as identified in the
Management Plan.
Process monitoring is an SQA activity that ensures that appropriate steps to carry out the
process are being followed. SQA monitors processes by comparing the actual steps carried out
with those in the documented procedures. The Assurance section of the Management Plan
specifies the methods to be used by the SQA process monitoring activity.
A fundamental SQA technique is the audit, which looks at a process and/or a product in depth,
comparing them to established procedures and standards. Audits are used to review
management, technical, and assurance processes to provide an indication of the quality
and status of the software product.
The purpose of an SQA audit is to assure that proper control procedures are being followed,
that required documentation is maintained, and that the developer's status reports accurately
reflect the status of the activity. The SQA product is an audit report to management consisting
of findings and recommendations to bring the development into conformance with standards
and/or procedures.