Software Engineering
Unit 1
Q1. Explain software process.
Ans: A software development process, also known as a software development lifecycle, is a structure imposed on the development of a software product. Similar terms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. Some people consider a lifecycle model the more general term and a software development process the more specific term. For example, many specific software development processes 'fit' the spiral lifecycle model.
Q2. Define the term software engg.
Ans: Software engineering (SE) is a profession dedicated to designing, implementing, and modifying software so that it is of higher quality, more affordable, maintainable, and faster to build. It is a "systematic approach to the analysis, design, assessment, implementation, testing, maintenance and reengineering of software"; that is, the application of engineering to software.
Q3. Explain spiral model with merits and demerits.
Ans: The spiral model, also known as the spiral lifecycle model, is a systems development lifecycle (SDLC)
model used in information technology (IT). This model of development combines the features of the
prototyping model and the waterfall model. The spiral model is favored for large, expensive, and
complicated projects.
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
Advantages
1. Estimates become more realistic as work progresses, because important issues are discovered earlier.
2. It is more able to cope with the changes that software development generally entails.
3. Software engineers can get their hands in and start working on a project earlier.
Disadvantages
1. Highly customized, limiting re-usability
2. Applied differently for each application
3. Risk of not meeting budget or schedule
Q4. Explain system engg hierarchy.
Ans: Good system engineering begins with a clear understanding of context - the world view - and then
progressively narrows focus until technical details are understood. System engineering encompasses a
collection of top-down and bottom-up methods to navigate the hierarchy.
The system engineering process begins with a world view, which is refined to focus more fully on a specific domain of interest. Within a specific domain, the need for targeted system elements is analyzed. Finally, the analysis, design, and construction of the targeted system element is initiated. Broad context is established at the top of the hierarchy, and detailed technical activities are conducted at the bottom. It is important for the system engineer to narrow the focus of work as one moves downward in the hierarchy.
System modeling is an important element of the system engineering process. The system engineering model accomplishes the following:
- define processes.
- represent behavior of the process.
- define both exogenous and endogenous inputs to the model.
- represent all linkages.
Some restraining factors that are considered to construct a system model are:
- Assumptions that reduce number of possible permutations and variations thus enabling a model to
reflect the problem in a reasonable manner.
- Simplifications that enable the model to be created in a timely manner.
- Limitations that help to bound the system.
- Constraints that will guide the manner in which the model is created and the approach taken when the
model is implemented.
- Preferences that indicate the preferred architecture for all data, functions, and technology.
The resultant system model may call for a completely automated, semi-automated, or non-automated solution.
Q5. Difference between verification and validation.
Ans: From a general perspective, verification and validation are often used to describe the
similar process. However, from the quality assurance or testing perspective, these two are
completely different terms. For developing any kind of product, for example, software,
there will be some requirements. These requirements are set before the development of
the product is started.
It is the job of verification to check that these intermediate requirements are met as the development of the product continues. The job of validation is to check that the end requirements are met after the development of the product finishes. Verification is generally a continuous process. The development of a product is divided into several cycles, and each cycle will have different requirements and objectives. There is separate verification for each of these cycles: the verification process checks the requirements of the cycle and traces them back to the original objectives.
Validation, on the other hand, is not continuous. After the end product is finished, the process of validation begins. It takes the original end requirements and end objectives, and validation confirms that those requirements are met and the objectives are achieved. The difference can be explained further with an example. Suppose
a piece of financial software needs to be developed. One of the requirements of the software
will be to calculate daily saving. The programmer will write a function that can take the
daily earnings and spending to calculate the saving.
Then he will test whether the function is working. This is the verification part of the job. When the software is completed, the programmer will test whether it runs and performs everything correctly, not only the calculation of daily saving but also the other functions, for example, saving the data. This is the validation process. Therefore, verification is testing a part of the product and validation is testing the whole product. Another difference between verification and validation is the timing: in the development process, verification takes place before validation.
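For illustration, a minimal Python sketch of the savings example above (the function name, the test values, and the in-memory "records" list are hypothetical, not part of any real system); verification checks one part while it is being built, validation exercises the finished flow end to end:

# Hypothetical function from the financial software example above.
def daily_saving(earnings, spending):
    return earnings - spending

# Verification: test one part of the product while it is being built.
def test_daily_saving():
    assert daily_saving(100, 60) == 40   # normal case
    assert daily_saving(50, 50) == 0     # break-even case
    print("verification of daily_saving passed")

# Validation: exercise the finished product against the end requirements,
# e.g. calculating a saving and then saving the data (illustrative only).
def validate_end_to_end():
    saving = daily_saving(100, 60)
    records = []                         # stands in for real data storage
    records.append(("2024-01-01", saving))
    assert records == [("2024-01-01", 40)]
    print("validation of the complete flow passed")

if __name__ == "__main__":
    test_daily_saving()
    validate_end_to_end()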
Q6. Explain capability maturity model integration(CMMI).
Ans: The Capability Maturity Model Integration for Software (CMMI) is a model for judging the maturity
of the software processes of an organization and for identifying the key practices that are required to
increase process maturity.
The Software CMMI has become a de facto standard for assessing and improving software
processes. Through the SW-CMMI, the SEI and community have put in place an effective means for
documenting, defining, and measuring the maturity of the processes used by software professionals.
Whether you are aiming for CMMI 'certification' or purely improving your software development processes, Select Business Solutions has process maturity tools to help you move up through each level of maturity. Until now, no project management tools were available to truly support the evolutionary progress of IT groups as they mature through the different CMMI levels. Select Process Director provides features that support and deliver practical CMMI benefits such as:
CMMI Level 1 to 2
Pre-packaged methodology documentation and active mentoring
CMMI Level 2 to 3
Process documentation, configuration and accelerated project management
CMMI Level 3 to 4
Managed process through Metrics Capture in an XML process repository
CMMI Level 4 to 5
Process feedback loop to enable process improvement
Q7. Explain WINWIN model with application.
Ans: The spiral model suggests a framework activity that addresses customer communication.
The objective of this activity is to elicit project requirements from the customer. In an ideal context, the
developer simply asks the customer what is required and the customer provides sufficient detail to
proceed. Unfortunately, this rarely happens. In reality, the customer and the developer enter into a process of negotiation, where the customer may be asked to balance functionality, performance, and other product or system characteristics against cost and time-to-market. The best negotiations strive for a win-win result: the customer wins by getting the system or product that satisfies the majority of the customer's needs, and the developer wins by working to realistic and achievable budgets and deadlines.
Boehm's WINWIN spiral model defines a set of negotiation activities at the beginning of each pass around the spiral. Rather than a single customer communication activity, the following activities are defined:
1. Identification of the system's or subsystem's key stakeholders.
2. Determination of the stakeholders' win conditions.
3. Negotiation of the stakeholders' win conditions to reconcile them into a set of win-win conditions for all concerned (including the software project team).
Successful completion of these initial steps achieves a win-win result, which becomes the key criterion for
proceeding to software and system definition.
In addition to the emphasis placed on early negotiation, the WINWIN spiral model introduces three process milestones, called anchor points, that help establish the completion of one cycle around the spiral and provide decision milestones before the software project proceeds.
In essence, the anchor points represent three different views of progress as the project traverses the
spiral. The first anchor point, life cycle objectives (LCO), defines a set of objectives for each major
software engineering activity. For example, as part of LCO, a set of objectives establishes the definition of
top-level system/product requirements. The second anchor point, life cycle architecture (LCA),
establishes objectives that must be met as the system and software architecture is defined. For example, as part of LCA, the software project team must demonstrate that it has evaluated the applicability of off-the-shelf and reusable software components and considered their impact on architectural decisions. Initial operational capability (IOC) is the third anchor point and represents a set of objectives associated with the preparation of the software for installation/distribution, site preparation prior to installation, and assistance required by all parties that will use or support the software.
Q8. Explain the development process of software.
Ans: A software development process or life cycle is a structure imposed on the development of a software
product. There are several models for such processes, each describing approaches to a variety of tasks or
activities that take place during the process.
Processes
More and more software development organizations implement process methodologies. The Capability
Maturity Model (CMM) is one of the leading models. Independent assessments can be used to grade
organizations on how well they create software according to how they define and execute their processes.
Process Activities/Steps
Software Engineering processes are composed of many activities, notably the following:
Requirements Analysis
Extracting the requirements of a desired software product is the first task in creating it. While customers
probably believe they know what the software is to do, it may require skill and experience in software
engineering to recognize incomplete, ambiguous or contradictory requirements.
Specification
Specification is the task of precisely describing the software to be written, in a mathematically rigorous
way. In practice, most successful specifications are written to understand and fine-tune applications that
were already well-developed, although safety-critical software systems are often carefully specified prior
to application development. Specifications are most important for external interfaces that must remain
stable.
Software architecture
The architecture of a software system refers to an abstract representation of that system. Architecture is
concerned with making sure the software system will meet the requirements of the product, as well as
ensuring that future requirements can be addressed.
Implementation
Reducing a design to code may be the most obvious part of the software engineering job, but it is not
necessarily the largest portion.
Testing
Testing of parts of software, especially where code by two different engineers must work together, falls to
the software engineer.
Documentation
An important task is documenting the internal design of software for the purpose of future maintenance
and enhancement.
Training and Support
A large percentage of software projects fail because the developers fail to realize that it doesn't matter
how much time and planning a development team puts into creating software if nobody in an organization
ends up using it. People are occasionally resistant to change and avoid venturing into an unfamiliar area,
so, as a part of the deployment phase, it is very important to have training classes for the most enthusiastic software users (to build excitement and confidence), shifting the training towards the neutral users intermixed with the avid supporters, and finally incorporating the rest of the organization into adopting the new software. Users will have lots of questions and software problems, which leads to the next phase of the software lifecycle.
Maintenance
Maintaining and enhancing software to cope with newly discovered problems or new requirements can
take far more time than the initial development of the software. Not only may it be necessary to add code
that does not fit the original design but just determining how software works at some point after it is
completed may require significant effort by a software engineer. About 60% of all software engineering
work is maintenance, but this statistic can be misleading. A small part of that is fixing bugs. Most
maintenance is extending systems to do new things, which in many ways can be considered new work.
Q9. What are the concept of 4P in software engg.
Ans: In the SE-class platform, the 4 Ps map to an SE course as follows:
People - the instructor and all students in the class
Project - a single project for all people in one semester
Product - one complete release of a software product
Process - the Unified Software Development Process
PEOPLE
People: The architects, developers, testers, and their supporting management, plus users, customers, and other stakeholders, are the prime movers in a software project. In the SE-class platform, the instructor and the students play all the above roles.
1. Multi-role
To understand the different roles of people, each student plays different parts and works cooperatively
with others. In each stage of the project, the roles played should be explicitly indicated. Each student will
understand the title and the responsibility of the role and know whom to contact in order to solve a
specific problem in each workflow (or phase) of the software development process.
2. Team Members
No matter which specific roles each student takes, the most important one he/she must learn is the role
of team member. The purpose of assigning students the roles mentioned in 1 is to help them to
understand who should do what when in a complete software development process, while training
students to be good team members helps foster the right working attitude to prepare them for work in
the real world after graduation.
3. Communication
Communication is a team member's most basic and important skill. It is also one that most students are weak in. The typical problems that may seriously affect the progress of the project are:
Students do not check or reply to the e-mails related to the project.
Students are not used to discussing the project outside of class. They discuss it only in class or during group meeting time.
Students have little chance to meet after class
Students do not ask for help when they have problems that potentially jeopardize the project.
PROJECT
In the SE-class platform, students have a chance to get seriously involved in project management at
different levels besides learning the concepts from lecture.
1. What Students Can Do
The following are some aspects of project management that students may participate in within different
stages of the project:
1) Meetings - There are three levels of meetings in the SE class platform: 1) department, 2) team and 3)
team leaders with the supervisor. Students can learn meeting management through the different styles of
meetings and understand the importance of meetings for a successful project.
2) Estimation - Offer more than one possible product to the students at the beginning of the semester.
Under the guidance of the instructor, the students estimate the time limitation, the possible artifacts that
will have to be packed in the product, and the resources needed including people, existing systems, tools
and lab environment. Based on the estimates, the students pick the product they want to do. Through
participating in the product estimates, the students tend to be more responsible for the project they
decide to do.
3) Task scheduling - Since students have no idea which exact activities they will do at the beginning of the
project, the instructor should make a rough schedule for the whole project. Under the project schedule,
the students can participate in the planning of the subschedule for each workflow or mini project.
4) Risk management - Some things that cause project failure in the real world, like cost overruns, never occur in an SE class project. However, the following risks are possible: wrong self-cognition in the survey, misunderstanding between team members or teams, the work of one person or one team falling behind schedule, and a key person dropping the class during the semester.
2. Workload Distribution Models
There are two ways to assign the tasks of the project to students:
Vertical style - One student plays a limited role in the project by focusing on the tasks in some specific area.
Horizontal style - Every student plays as many key roles as possible in every workflow. This allows students to gain a wider range of knowledge and skills.
3. Project Control
1 Project Size Control
So far, all projects selected for experimental classes based on the SE-class platform have been in the
business software category: a Student Information Management system, a Hotel Reservation system, and
an Online Book Store. The size of a chosen project depends on the size of the class and the human
resources. The functions of each product can be prioritized and divided into components. After
identifying the key functions that must be implemented, the rest of system functions can be organized
into optional service packs to balance the workload among teams. For example, in the Online Book Store
system, the required functions are browsing, searching and purchasing, and the optional extended
functions include login, stocking, billing, shipping, etc. A class with a larger size works from a larger core
function (use case) set, while a smaller class from a smaller set. Normally, the number of use cases
identified in requirement elicitation tends to be larger than the number finally implemented.
2 Risk Control
Some possible risks in large SE class projects have been listed. One of the most difficult problems is when
one person or one team is behind schedule. When one person is behind, the problem generally is that the
student is weak or does not take responsibility. The following strategies are used in the platform to solve
the problem:
1) The report scheme allows the supervisor to watch each student's progress closely and to identify problems as early as possible.
2) For a slow task, we first try to help the student with a team meeting or lab in which the student can get
detailed advice on his/her work. This normally works for the
serious students.
3) Otherwise, the task is assigned to some strong student(s) in the team as a backup, and extra credit is given to the helper(s). Since the tasks are distributed in each workflow, the workload for one person in one development activity is not very heavy, and a strong student can normally handle the extra workload.
PRODUCT
The correctness of the artifacts created in each development activity is a key criterion in evaluating the results of our teaching in the SE-class platform.
1. Domain Knowledge
In order for students to learn requirements collection and elicitation, each project starts from a vague text
description of the product. To obtain the correct requirements, the students have to learn the basic
business domain knowledge.
2. Entity Classes
In general, it is not difficult for students to identify entity classes, especially after the domain model has
been created. The hard parts are identifying relationships between entity classes and implementing persistent data storage without using a database in a complicated system.
3. API or IPI?
During the design workflow, the students create the APIs of assigned classes and submit them at the end
of the workflow. However, many students do not follow the design decisions but change the interface by themselves. I call this the "I-preferred interface" (IPI).
PROCESS
The Unified Process not only consists of the core software development workflows like other typical
processes, but uniquely features three software engineering methodologies: use-case driven, architecture
centric, and iterative and incremental. This section discusses how students learn those methodologies
through the project.
1. Use-case Driven
The use-case realization is an activity that occurs in the workflows to transform the requirements into the product. As a use-case owner, each student is responsible for guaranteeing that the use cases he/she owns will eventually be implemented and function properly in the system. This may include:
Writing detailed descriptions of the use cases
Identifying analysis classes and design classes from the use cases
Creating the collaboration diagrams and sequence diagrams of the use cases in analysis and design
Cooperatively working with teammates and other teams to decide a unique list of all classes
Implementing assigned classes
Testing the use cases
2. Architecture centric
System architecture is usually very vague to the students when first introduced in lecture. In the Unified
Process, the architecture is described by the views of the models in each workflow. The activities related
to the architecture permeate almost all the workflows.
3. Iterative and Incremental
Due to the limitations of the project size, the available time, and human resources (the students cannot be
treated as real experienced developers), one project cannot include many iterations. Another issue is that
the initial iteration may not include all the core use cases.
Unit 2
Q1. Define behavioral model.
Ans: A behavioral model is used to describe the overall behavior of the system.
There are two types of behavioral models:
1. Data processing model
2. State machine model
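As a hedged sketch of the state machine model (the states and events below are invented purely for illustration), system behaviour can be described as a set of states plus event-driven transitions:

# Illustrative state machine: states and event-driven transitions.
transitions = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = transitions[(state, event)]
    print(f"event '{event}' -> state '{state}'")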
Q2. What is data dictionary.
Ans: A data dictionary is a collection of descriptions of the data objects or items in a data model for the
benefit of programmers and others who need to refer to them. A first step in analyzing a system of objects with which users interact is to identify each object and its relationship to other objects.
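A minimal sketch of what one data dictionary entry might record (the field names and the 'customer_id' item are purely illustrative, not from any particular system):

# One illustrative data dictionary entry for a hypothetical data item.
customer_id_entry = {
    "name": "customer_id",
    "type": "integer",
    "description": "Unique identifier assigned to each customer",
    "where_used": ["Order", "Invoice", "Account"],   # related objects
    "constraints": "positive, system-generated, never reused",
}

for key, value in customer_id_entry.items():
    print(f"{key}: {value}")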
Q3. Define legacy system.
Ans: A legacy system is an old method, technology, computer system, or application program that
continues to be used, typically because it still functions for the users' needs, even though newer
technology or more efficient methods of performing a task are now available. A legacy system may
include procedures or terminology which are no longer relevant in the current context, and may hinder or
confuse understanding of the methods or technologies used.
Q4. What is the use of s/w document.
Ans: Software documentation or source code documentation is written text that accompanies computer software. It either explains how it operates or how to use it, and may mean different things to people in different roles.
Q5. Explain requirement engg. Process.
Ans: A requirements engineering process is a structured set of activities that is done to develop, validate
and maintain a system requirements document. Activities involved in requirements engineering process
are requirements elicitation, requirements analysis and negotiation, requirements documentation and
requirements validation.
Requirements elicitation.
The system requirements are discovered by consulting end users and project managers, and from domain knowledge, existing system documentation, and other relevant sources. Other names for requirements elicitation are requirements acquisition or requirements discovery.
Requirements Analysis
The requirements are analyzed for necessity, consistency, and completeness, and it is checked whether the requirements are feasible in terms of budget and time.
Requirements Negotiation
Problematic requirements are identified and discussed to resolve them. Stakeholders and software developers finally agree to make the necessary changes and finish with a set of agreed requirements.
Requirements Documentation.
The agreed requirements are documented using natural language which is understandable by all stakeholders. Supporting diagrams such as use case diagrams or sequence diagrams are incorporated in the document to give a higher level of detail.
Requirements Validation
The agreed requirements are checked for consistency and completeness to detect any problems in the requirements before they are used to develop the system. This activity helps to minimize the cost of fixing errors at an early stage of system development.
A typical requirements document is structured as follows:
1. Introduction - an overview of what this document is about, including the objective and scope.
   1. Business Overview - the business objective should state why this project is being undertaken from a business point of view.
   2. Project Scope - a re-iteration of the project scope: what is to be done with this project and what is to be left for future projects. This should contain as much detail as possible and should include a matrix of key pages to be worked on during design.
2. User Analysis - provides a high-level overview of who the users of the digital property are and what they might be doing once they arrive.
   1. Audience - looks at who the various audiences are and why they would be coming to the site. Included should be the various high-level takeaways that the audiences will walk away from the project with.
   2. Personas - defines some sample users of the site. Typically includes some demographic/psychographic information about the persona.
3. Use cases - define how the various personas would interact with the site. This will show how the users would actually navigate within the site, what functions they would expect to find, and what the expected result would be.
4. User Experience Requirements - shows how the users will experience and interact with the site.
   1. Site map - shows the organization or bucketing of information into primary, secondary, and tertiary navigation items.
   2. User flows - show certain decision paths as outlined by the use cases. The user flows are meant to show how a user (admin or end user) will interact with the site.
   3. Key page wireframes - for those pages considered key to the design, wireframes will be produced. The wireframes are used to build final consensus as to what functions and information each page will contain.
5. Future enhancements - items that should be documented and taken into account when designing the site, architecture, technology, etc.
6. Marketing requirements - define how the users will be brought to the site and messaged. The marketing requirements cover the types of campaigns to be conducted and what the desired spend is for each.
   1. In Scope requirements
   2. Requested enhancements
   3. Future enhancements
7. End User functional requirements - describe what the site visitor is able to do when they interact with the site. This will be a full inventory of the activities and functions for all the various types of users.
   1. In Scope requirements
   2. Requested enhancements
   3. Future enhancements
8. Administrative functional requirements - like the end-user functions, all administrative functions need to be described. Administrative functions are functions performed by members of the host organization (client) or their affiliates who log in with a password to perform certain actions to maintain the website.
   1. In Scope requirements
   2. Requested enhancements
   3. Future enhancements
9. Technical requirements - should spell out what the requirements are for the system, the core software, front end, and tracking and measurement.
10. System and hosting requirements - will spell out how much traffic (number of users, amount of data in the system, concurrent page views, reads vs. writes, etc.) the site will be able to handle. The requirements will state where the site needs to be hosted and on what base architecture (OS, DB, Web Server).
11. Updated project plan - the updated project plan should be provided as the final part of the requirements document. Any changes from the original plan should be noted and more details should be provided.
12. Appendix A - Assumptions - this will capture all the technical, creative, brand, user, admin, and content assumptions made while creating the proposal and requirements document.
13. Appendix B - Scope Changes - in a tabular format, this section captures all of the current and future scope changes. It summarizes each one's approximate level of effort (sometimes in person-days, sometimes as low, medium, and high).
Q8. Explain functional and non-functional requirement in detail
Ans: Functional Requirement (Function)
A Functional Requirement is a requirement that, when satisfied, will allow the user to perform some kind
of function. For example:
The customer must place an order within two minutes of registering
For the most part, when people are talking about Business Requirements, they are referring to Functional
Requirements which are generally referred to as requirements.
Functional Requirements have the following characteristics:
uses simple language
not ambiguous
contains only one point
specific to one type of user
is qualified
describes what and not how
Typical functional requirements are:
Business Rules
Transaction corrections, adjustments, cancellations
Administrative functions
Authentication
Authorization functions user is delegated to perform
Audit Tracking
External Interfaces
Certification Requirements
Reporting Requirements
Historical Data
Legal or Regulatory Requirements
Non-Functional Requirement
A Non-Functional Requirement is usually some form of constraint or restriction that must be considered
when designing the solution. For example:
The customer must be able to access their account 24 hours a day, seven days a week.
For the most part when people are talking about Constraints, they are referring to Non-Functional
Requirements.
Non-Functional Requirements have the same characteristics:
uses simple language
not ambiguous
very time-intensive, it's an important step and shouldn't be rushed. Well-documented models allow stakeholders to identify errors and make changes before any programming code has been written.
Data modelers often use multiple models to view the same data and ensure that all processes, entities,
relationships and data flows have been identified. There are several different approaches to data
modeling, including:
Conceptual Data Modeling - identifies the highest-level relationships between different entities.
Enterprise Data Modeling - similar to conceptual data modeling, but addresses the unique requirements
of a specific business.
Logical Data Modeling - illustrates the specific entities, attributes and relationships involved in a
business function. Serves as the basis for the creation of the physical data model.
Physical Data Modeling - represents an application and database-specific implementation of a logical
data model.
3) Functional model
In contrast, the functional model that this paper advocates is strictly based upon the correspondence of
software configuration tasks to the tasks found in the basic software development model. The basic
software development model, according to Sommerville (1996), who is the author of one of the most
commonly used textbooks on software engineering and consequently one that most people might be
familiar with, consists of four fundamental activities which are common to all software development
processes: software specification, software development, software validation, and software evolution.
The functional model of configuration management maps the functions of version control, documentation
control, change management, build management, and release control to the development model.
The model is called a "functional" model because the typology is based on tasks that are commonly called
out on a WBS (Work Breakdown Structure) and there is no need for interpretation of task versus
configuration management area type. A functional emphasis is important for the E-World, i.e. web
applications, because all actions must be as time-efficient as possible to meet deadlines and yet control of
baselines and changes must still occur. There is not the luxury of pondering over what configuration
management means according to some abstract model, but rather a driving concern with getting tangible
tasks performed quickly. A "functional" model is focused on getting those tasks done, not with generating
paper.
Mapping the functional software configuration model to the development model, version control takes
place at the conclusion of development with formal software baselines prior to validation. Documentation
control takes place at the conclusion of specification and then continues throughout with traceability of
documents to software baselines. Change management is initiated immediately following the first
instance of the use of version control or document control. Build management occurs with the initial
repeatable documentation of how to construct the first formal baseline with updates then being a
constant necessity. And release control is performed at the conclusion of validation such that all versions
of the system that are released to outside parties are approved, recorded, and tracked against requests
for defect resolution and enhancements. A description of typical tasks and activities for each functional
area follows and is organized on the basis of "What", "Why", "When", "Where", "Who", and "How" for the purposes of clarity and definition. In addition, the special needs and considerations of web applications will be discussed.
Q10. Explain different approaches used in user interface prototyping.
Ans: Prototyping is a development approach used to improve the planning and execution of software projects by building executable software systems (prototypes) for experimental purposes. It is very suitable for gaining experience in new application areas and for supporting incremental or evolutionary software development.
Unit 3
Q1. What is SCM ? what its need?
Ans: In software engineering, software configuration management (SCM) is the task of tracking and
controlling changes in the software. Configuration management practices include revision control and the
establishment of baselines.
Q2. Define abstraction in design process of software.
Ans: Abstraction
Abstraction is a tool that permits a designer to consider a component at an abstract level without
worrying about the details of the implementation of the component.
An abstraction of a component describes the external behavior of that component without bothering
with the internal details that produce the behavior.
Abstraction is an indispensable part of the design process and is essential for problem partitioning.
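A minimal Python sketch, assuming a hypothetical storage component: callers depend only on the abstract external behaviour, not on the implementation details:

from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstraction: describes external behaviour only."""
    @abstractmethod
    def save(self, key, value): ...
    @abstractmethod
    def load(self, key): ...

class InMemoryStorage(Storage):
    """One possible implementation; its internals are hidden from callers."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def record_result(store: Storage):
    # The caller works against the abstraction, not the implementation.
    store.save("answer", 42)
    return store.load("answer")

print(record_result(InMemoryStorage()))   # prints 42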
Q3. What do you mean by modular design.
Ans: In systems engineering, modular design or "modularity in design" is an approach that
subdivides a system into smaller parts (modules) that can be independently created and then used in
different systems to drive multiple functionalities. A modular system can be characterized by the
following:
(1) Functional partitioning into discrete scalable, reusable modules consisting of isolated, self-contained
functional elements;
(2) Rigorous use of well-defined modular interfaces, including object-oriented descriptions of module
functionality;
(3) Ease of change to achieve technology transparency and, to the extent possible, make use of industry
standards for key interfaces.
Q4. Explain SCM in detail.
Ans: The traditional software configuration management (SCM) process is looked upon by practitioners
as the best solution to handling changes in software projects. It identifies the functional and physical
attributes of software at various points in time, and performs systematic control of changes to the
identified attributes for the purpose of maintaining software integrity and traceability throughout the
software development life cycle.
The SCM process further defines the need to trace changes, and the ability to verify that the final
delivered software has all of the planned enhancements that are supposed to be included in the release. It
identifies four procedures that must be defined for each software project to ensure that a sound SCM
process is implemented. They are:
1. Configuration identification
2. Configuration control
3. Configuration status accounting
4. Configuration audits
These terms and definitions change from standard to standard, but are essentially the same.
Configuration identification is the process of identifying the attributes that define every aspect of a
configuration item. A configuration item is a product (hardware and/or software) that has an end-user purpose. These attributes are recorded in configuration documentation and baselined. Baselining an attribute forces formal configuration change control processes to be effected in the event that these attributes are changed.
Configuration change control is a set of processes and approval stages required to change a
configuration item's attributes and to re-baseline them.
Configuration status accounting is the ability to record and report on the configuration baselines
associated with each configuration item at any moment of time.
Configuration audits are broken into functional and physical configuration audits. They occur either
at delivery or at the moment of effecting the change. A functional configuration audit ensures that
functional and performance attributes of a configuration item are achieved, while a physical
configuration audit ensures that a configuration item is installed in accordance with the requirements
of its detailed design documentation.
Configuration management is widely used by many military organizations to manage the technical
aspects of any complex systems, such as weapon systems, vehicles, and information systems. The
discipline combines the capability aspects that these systems provide an organization with the issues of
management of change to these systems over time.
Q5. Explain various types of cohesion and coupling.
Ans: Coupling is a measure of the interdependence between modules or components. Loose coupling means that different system components have little reliance upon each other. Hence, changes in one component have only a limited effect on other components.
Strong cohesion implies that all parts of a component should have a close logical relationship with each
other. That means, in the case some kind of change is required in the software, all the related pieces are
found at one place. Hence, once again, the scope is limited to that component itself.
A component should implement a single concept or a single logical entity. All the parts of a component
should be related to each other and should be necessary for implementing that component. If a
component includes parts that are not related to its functionality, then the component is said to
have low cohesion.
Coupling and cohesion are contrasting concepts but are indirectly related to each other. Cohesion is an internal property of a module whereas coupling is its relationship with other modules. Cohesion describes the intra-component linkages while coupling describes the inter-component linkages. Coupling measures the interdependence of two modules while cohesion measures the independence of a module. The more independent the modules are, the less they depend upon others. Therefore, a highly cohesive system also implies less coupling.
Example of Coupling
The modules that interact with each other through message passing have low coupling, while those that interact with each other through variables that maintain information about the state have high coupling. The two cases are contrasted below.
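As a hedged Python sketch (the module and variable names are hypothetical), low coupling passes data explicitly while high coupling relies on shared state:

# Low coupling: modules interact by passing messages (parameters/results).
def compute_total(prices):
    return sum(prices)

def print_receipt(total):
    print(f"Total: {total}")

print_receipt(compute_total([10, 20, 5]))

# High coupling: modules interact through shared state that both touch.
shared_state = {"total": 0}

def compute_total_shared(prices):
    shared_state["total"] = sum(prices)        # writes global state

def print_receipt_shared():
    print(f"Total: {shared_state['total']}")   # depends on hidden state

compute_total_shared([10, 20, 5])
print_receipt_shared()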
Example of Cohesion
As mentioned earlier, strong cohesion implies that all parts of a component should have a close logical
relationship with each other. That means, in case some kind of change is required in the software, all the
related pieces are found at one place.
A class will be cohesive if most of the methods defined in a class use most of the data members most of
the time. If we find different subsets of data within the same class being manipulated by separate groups of functions, then the class is not cohesive and should be broken down, as illustrated below.
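The original class diagram is not reproduced here; as a hedged Python sketch with hypothetical classes, the breakdown of a low-cohesion class into two cohesive ones might look like this:

# Low cohesion: two unrelated groups of data and methods in one class.
class CustomerAndReport:
    def __init__(self):
        self.name = ""
        self.address = ""
        self.report_rows = []
    def update_address(self, address):
        self.address = address
    def add_report_row(self, row):
        self.report_rows.append(row)

# Higher cohesion: each class manipulates only its own data.
class Customer:
    def __init__(self, name, address):
        self.name = name
        self.address = address
    def update_address(self, address):
        self.address = address

class Report:
    def __init__(self):
        self.rows = []
    def add_row(self, row):
        self.rows.append(row)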
Loose coupling - Loose coupling means designing so that you hold connections among different parts of a program to a minimum. This means encapsulation, information hiding, and good abstractions in class interfaces. This also makes the code easier to test, which is a good thing.
Extensibility - You can change a piece of the system without affecting other pieces.
Reusability - Designing the system so that you can use pieces of it in other systems.
High fan-in - This refers to having a high number of classes that use a given class. This is good; the opposite, on the other hand, is not.
Low-to-medium fan-out - Refers to how many classes a given class uses. If a class has a high fan-out (Code Complete suggests more than about 7 classes), this is often an indication that the class may be overly complex. And complexity is bad, remember?
Portability - How easy would it be to move the system to another environment?
Leanness - This could be compared to KISS. Voltaire said that a book is finished not when nothing more can be added but when nothing more can be taken away. Extra code will have to be developed, reviewed, tested, and considered when the other code is modified.
Stratification - Designing in layers. Can you view one layer of the code without thinking about the underlying layer? An example given in Code Complete is that if you are writing a modern system that has to use a lot of older, poorly designed code, you would want to write a layer of the new system that is responsible for interfacing with the old code.
Standard techniques - This means using design patterns whenever it is appropriate to do so. This way, if you say to another coder "Here I use the Factory pattern", he will instantly know what you are talking about if he knows the pattern. You do not want to be one of those valued employees who only write code that the valued employee can understand.
Q8. Explain architectural design process.
Ans: The design process for identifying the subsystems making up a system and the framework for subsystem control and communication is architectural design. The output of this design process is a description of the software architecture.
Architectural design:
- An early stage of the system design process.
- Represents the link between specification and design processes.
- Often carried out in parallel with some specification activities.
- It involves identifying major system components and their communications.
Advantages of explicit architecture
Stakeholder communication
Architecture may be used as a focus of discussion by system stakeholders.
System analysis
Means that analysis of whether the system can meet its non-functional requirements is
possible.
Large-scale reuse
The architecture may be reusable across a range of systems.
Architecture and system characteristics
Performance
Localise critical operations and minimise communications. Use large rather than fine-grain
components.
Security
Use a layered architecture with critical assets in the inner layers.
Safety
Localise safety-critical features in a small number of subsystems.
Availability
Include redundant components and mechanisms for fault tolerance.
Maintainability
Use fine-grain, replaceable components.
Q9. Explain interface design.
Ans: User interface design or user interface engineering is the design of computers, appliances,
machines, mobile communication devices,software applications, and websites with the focus on the user's
experience and interaction. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (what is often called user-centered design). Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design may be utilized to support its usability. The design process must balance technical functionality and visual elements (e.g., the mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
UI Design Principles
Let's start with the fundamentals of user interface design. Constantine and Lockwood describe a collection of principles for improving the quality of your user interface design. These principles are:
1. The structure principle. Your design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with your overall user interface architecture.
2. The simplicity principle. Your design should make simple, common tasks simple to do, communicating clearly and simply in the user's own language, and providing good shortcuts that are meaningfully related to longer procedures.
3. The visibility principle. Your design should keep all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs don't overwhelm users with too many alternatives or confuse them with unneeded information.
4. The feedback principle. Your design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user through clear, concise, and unambiguous language familiar to users.
5. The tolerance principle. Your design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions reasonably.
6. The reuse principle. Your design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember.
- Identify the transaction center and the flow characteristics along each of the action paths.
- Map the data flow diagram in a program structure amenable to transaction processing.
- Factor and refine the transaction structure and the structure of each action path.
- Refine the first iteration architecture using design heuristics for improved software quality.
Q11. Explain DFD.
Ans: Data Flow Diagrams - Introduction
Data flow diagrams can be used to provide a clear representation of any business function. The
technique starts with an overall picture of the business and continues by analyzing each of the
functional areas of interest. This analysis can be carried out to precisely the level of detail required.
The technique exploits a method called top-down expansion to conduct the analysis in a targeted
way.
The result is a series of diagrams that represent the business activities in a way that is clear and
easy to communicate. A business model comprises one or more data flow diagrams (also known as
business process diagrams). Initially a context diagram is drawn, which is a simple representation
of the entire system under investigation. This is followed by a level 1 diagram; which provides an
overview of the major functional areas of the business. Don't worry about the symbols at this stage,
these are explained shortly. Using the context diagram together with additional information from
the area of interest, the level 1 diagram can then be drawn.
Data Flow Diagrams Diagram Notation
There are only five symbols that are used in the drawing of business process diagrams (data flow
diagrams). These are now explained, together with the rules that apply to them.
This diagram represents a banking process, which maintains customer accounts. In this example,
customers can withdraw or deposit cash, request information about their account or update their
account details. The five different symbols used in this example represent the full set of symbols
required to draw any business process diagram.
External Entity
An external entity is a source or destination of a data flow which is outside the area of study. Only
those entities which originate or receive data are represented on a business process diagram. The
symbol used is an oval containing a meaningful and unique identifier.
Process
A process shows a transformation or manipulation of data flows within the system. The symbol
used is a rectangular box which contains 3 descriptive elements:
Firstly an identification number appears in the upper left hand corner. This is allocated arbitrarily
at the top level and serves as a unique reference.
Data Flow
A data flow shows the flow of information from its source to its destination. A data flow is
represented by a line, with arrowheads showing the direction of flow. Information always flows to
or from a process and may be written, verbal or electronic.
Resource Flow
A resource flow shows the flow of any physical material from its source to its destination. For this
reason they are sometimes referred to as physical flows.
The physical material in question should be given a meaningful name.
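As a hedged sketch (the element names are taken from the banking description above, but the exact structure is illustrative only), the elements of such a diagram can be written down as plain data:

# Illustrative inventory of the banking process diagram described above.
dfd = {
    "external_entities": ["Customer"],
    "processes": ["1. Maintain customer accounts"],
    "data_stores": ["Customer accounts"],
    "data_flows": [
        ("Customer", "1. Maintain customer accounts", "withdrawal/deposit"),
        ("Customer", "1. Maintain customer accounts", "account enquiry"),
        ("1. Maintain customer accounts", "Customer accounts", "updated details"),
    ],
}

for source, target, label in dfd["data_flows"]:
    print(f"{source} --[{label}]--> {target}")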
Q12. Explain version control in context with SCM.
Ans: Version Control
Version control is simply the automated act of tracking the changes of a particular file over time. This is
typically accomplished by maintaining one copy of the file in a repository, then tracking the changes to
that file (rather than maintaining multiple copies of the file itself). The concepts of check-in and check-out
make this possible. Each time someone needs to edit a file, they check it out for exclusive editing and then
check it back in to the repository when finished, thus creating a new version. This paradigm of check-in
and check-out can be as simple as I describe above or much more complex depending on the amount of
parallel development and branching you have in your process.
Version control buys you a number of benefits including the ability to:
Roll back to a previous version of a given file
Compare two versions of a file, highlighting differences
Provide a mechanism of locking, forcing serialized change to any given file
Create branches that allow for parallel concurrent development
Maintain an instant audit trail on each and every file: versions, modified date, modifier, and any
additional amount of metadata your system provides for and you choose to implement.
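A minimal Python sketch of the check-out/check-in idea described above (this is not any real SCM tool; the class, file, and user names are illustrative):

# Toy repository: one copy of each file plus its version history.
class Repository:
    def __init__(self):
        self.versions = {}     # file name -> list of successive contents
        self.locked_by = {}    # file name -> user holding the lock

    def check_out(self, name, user):
        if self.locked_by.get(name):
            raise RuntimeError(f"{name} is locked by {self.locked_by[name]}")
        self.locked_by[name] = user            # exclusive editing
        history = self.versions.get(name, [""])
        return history[-1]                     # latest version

    def check_in(self, name, user, contents):
        if self.locked_by.get(name) != user:
            raise RuntimeError(f"{user} does not hold the lock on {name}")
        self.versions.setdefault(name, []).append(contents)  # new version
        del self.locked_by[name]               # release the lock

repo = Repository()
text = repo.check_out("readme.txt", "alice")
repo.check_in("readme.txt", "alice", text + "first change\n")
print(repo.versions["readme.txt"])             # audit trail of versions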
Unit 4
Q1. What is difference between white box testing and Black box testing ?
Ans: White box testing or unit testing: To do white box testing, knowledge of the internal logic code is required. It is mostly done by developers.
Black box testing: To do black box testing, we should know the functionality of the application; knowledge of the logic code is not required. This testing is done by testers.
Black box or system testing: testing the application without knowledge of the underlying code of the application. It is done by the testers.
Q2. Define software testing? Why it is important ?
Ans: Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
Q3. State Pareto principle applies to software testing.
Ans: Pareto Principle - It states that 80 percent of the errors uncovered in testing come from 20 percent of the software components.
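A small, hypothetical illustration (the component names and defect counts are invented for this sketch) of how the worst 20 percent of components can account for roughly 80 percent of the defects:

# Hypothetical defect counts per software component.
defects = {"parser": 120, "ui": 15, "db": 10, "auth": 8, "report": 5}

total = sum(defects.values())
# Take the top 20% of components (1 of 5 here), ordered by defect count.
top = sorted(defects.values(), reverse=True)[:max(1, len(defects) // 5)]
share = sum(top) / total

print(f"Top 20% of components account for {share:.0%} of the defects")
# With these invented numbers the single worst component holds about 76%.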
Q4. Explain different types of testing.
Ans: Software Testing Types:
Black box testing - Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
White box testing - This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
Unit testing - Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses. (A minimal example appears after this list.)
Incremental integration testing - Bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
Integration testing - Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per requirements or not. Black-box type testing geared to the functional requirements of an application.
System testing Entire system is tested as per the requirements. Black-box type testing that is based on
overall requirements specifications, covers all combined parts of a system.
End-to-end testing Similar to system testing, involves testing of a complete application environment in
a situation that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for
a major testing effort. If the application crashes during initial use, the system is not stable enough for further
testing, and the build or application is sent back to be fixed.
Regression testing Testing the application as a whole after a modification to any module or
functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used
for this type of testing.
Acceptance testing - Normally this type of testing is done to verify that the system meets the customer-specified
requirements. The user or customer does this testing to determine whether to accept the application.
Load testing It is performance testing to check system behavior under load. Testing an application
under heavy loads, such as testing a web site under a range of loads, determines at what point the
system's response time degrades or fails.
Stress testing The system is stressed beyond its specifications to check how and when it fails. It is performed
under heavy load, for example by providing input beyond storage capacity, running complex database queries, or
feeding continuous input to the system or database.
Performance testing A term often used interchangeably with stress and load testing; it checks
whether the system meets its performance requirements. Different performance and load tools are used for this.
Usability testing A user-friendliness check. The application flow is tested: can a new user understand the
application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation
is checked in this testing.
Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall processes on different
operating systems under different hardware, software environment.
Recovery testing Testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
Security testing Can the system be penetrated by any hacking technique? Testing how well the system protects
against unauthorized internal or external access, and checking whether the system and database are safe from
external attacks.
Compatibility testing Testing how well the software performs in a particular
hardware/software/operating system/network environment and in different combinations of the above.
Comparison testing Comparison of product strengths and weaknesses with previous versions or other
similar products.
Alpha testing An in-house virtual user environment can be created for this type of testing. Testing is done
towards the end of development; minor design changes may still be made as a result of such testing.
Beta testing Testing typically done by end-users or others. It is the final testing before releasing the application for
commercial purposes.
Q5. Difference between top-down integration and bottom-up integration.
Ans: Comparison of top-down and bottom-up integration on four criteria:
Architectural validation - Top-down testing is better suited than bottom-up testing for early detection of system architecture errors and high-level design errors. Early detection reduces the cost of fixing the errors.
System demonstration - A top-down approach to testing allows the organization to quickly gain confidence in a skeletal system that can then be used for demonstration purposes. A bottom-up approach uses drivers at the highest system levels, which would likely be more cumbersome to demonstrate.
Test implementation - Top-down testing will generally place more of a burden on the development team, since meaningful stub behavior is required for the system to be tested. Stubs can become quite complex in order to provide the necessary behavioral characteristics. Reusable components, on the other hand, provide stable behavior, and therefore developers do not need to be quite as creative when creating the drivers that drive those low-level components.
Test observation - Top-down and bottom-up testing are about equal on this criterion. High-level components aren't necessarily meant to generate observable output and must be made to do so using an artificial environment. Likewise, low-level components may need an artificial environment in order to probe their internal behavior.
4. Stress testing
Which relies on stressing the system by going beyond its specified limits and hence testing how well the
system can cope with overload situations.
5. Back-to-back testing
Which is used when more than one version of a system is available. The systems are tested together and their outputs
are compared.
6. Performance testing
This is used to test the run-time performance of software.
7. Security testing
This attempts to verify that protection mechanisms built into the system will protect it from improper
penetration.
8. Recovery testing
This forces software to fail in a variety of ways and verifies that recovery is properly performed.
A software bug exists when one or more of the following five issues arise:
1. The software under test doesn't do something that the product specification says it should do.
2. The software under test does something that the product specification says it shouldn't do.
3. The software under test does something that the product specification does not mention.
4. The software doesn't do something that the product specification doesn't mention but should do.
5. The software is difficult to understand, hard to use, slow, or, in the user's eyes, simply not right.
Q8. Explain
1) Regression testing
Regression testing means rerunning test cases from existing test suites to build confidence that software
changes have no unintended side-effects. The ideal process would be to create an extensive test suite
and run it after each and every change. Unfortunately, for many projects this is just impossible because
test suites are too large, because changes come in too fast, because humans are in the testing loop,
because scarce, highly in-demand simulation laboratories are needed, or because testing must be done on
many
different
hardware
and
OS
platforms.
Researchers have tried to make regression testing more effective and efficient by developing regression
test selection (RTS) techniques, but many problems remain, such as:
Unpredictable performance. RTS techniques sometimes save time and money, but they
sometimes select most or all of the original test cases. Thus, developers using RTS techniques can
find themselves worse off for having done so.
Incompatible process assumptions. Testing time is often limited (e.g., must be done overnight).
RTS techniques do not consider such constraints and, therefore, can and do select more test cases
than can be run.
Inappropriate evaluation models. RTS techniques try to maximize average regression testing
performance rather than optimize aggregate performance over many testing sessions. However,
companies that test frequently might accept less effective, but cheaper individual testing sessions
if the system would, nonetheless, be well-tested over some short period of time.
These and other issues have not been adequately considered in current research, yet they
strongly affect the applicability of proposed regression testing processes. Moreover, we believe
that solutions to these problems can be exploited, singly and in combination, to dramatically
improve the costs and benefits of the regression testing process.
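As a rough illustration of the selection idea (not any particular published RTS technique), the sketch below maps source modules to the tests known to exercise them and reruns only the tests affected by a change; the module and test names are hypothetical, and a real tool would derive the mapping from coverage or dependency data.

# Minimal, illustrative regression-test-selection sketch.
# The module-to-test mapping below is assumed for illustration only;
# real RTS tools build it from coverage or dependency analysis.
TESTS_BY_MODULE = {
    "billing.py": ["test_invoice_total", "test_tax_rounding"],
    "auth.py":    ["test_login", "test_password_reset"],
    "reports.py": ["test_monthly_summary"],
}

def select_regression_tests(changed_modules):
    """Return the subset of tests that touch any changed module."""
    selected = set()
    for module in changed_modules:
        selected.update(TESTS_BY_MODULE.get(module, []))
    return sorted(selected)

if __name__ == "__main__":
    # Suppose only billing.py changed in this build.
    print(select_regression_tests(["billing.py"]))
    # -> ['test_invoice_total', 'test_tax_rounding']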
2) Black box testing
Also known as functional testing, it is a software testing technique whereby the internal workings of the item
being tested are not known by the tester. For example, in a black box test on a software design the tester
only knows the inputs and what the expected outcomes should be, and not how the program arrives at
those outputs. The tester does not ever examine the programming code and does not need any further
knowledge of the program other than its specifications.
Advantages of Black Box Testing
1. Test cases are created with the help of business analysts, business customers (end users), developers, test specialists, etc.
2. Test case suites are run against the input data provided by the user and for the number of iterations that the customer sets as the base/minimum required test runs.
3. The outputs of the test-case runs are evaluated against the criteria/requirements specified by the user.
4. Depending upon whether the outcome is as desired by the user, consistent over the number of test suites run, or inconclusive, the user may call it successful/unsuccessful or suggest some more test-case runs.
5. Based on the outcome of the test runs, the system may get rejected or accepted by the user, with or without any specific conditions.
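As an illustration of the black-box idea, the short test below exercises a hypothetical pricing function purely through its inputs and expected outputs; the function name and the discount rule are assumptions invented for the example, not part of any real specification.

import unittest

# Hypothetical function under test: per an assumed spec, orders of
# 100 units or more receive a 10% discount. The black-box tester sees
# only the inputs and expected outputs, never this implementation.
def order_total(unit_price, quantity):
    total = unit_price * quantity
    if quantity >= 100:
        total *= 0.9
    return round(total, 2)

class OrderTotalBlackBoxTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(order_total(2.0, 99), 198.0)

    def test_discount_at_threshold(self):
        self.assertEqual(order_total(2.0, 100), 180.0)

if __name__ == "__main__":
    unittest.main()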
Reduced integration risk: Since smoke testing is carried out, integration problems are uncovered at a
much earlier stage rather than late in the cycle.
Finds major problems: A well-designed smoke test increases the probability of finding a major
problem while the software is being built early in the cycle; thus you catch bugs earlier in the cycle.
Can save time and cost: If a major problem is detected as soon as the software is built, it
saves considerable time and cost compared with discovering the same error late in the cycle.
Q9. Define test cases. Explain basic path testing.
Ans: A test case in software engineering is a set of conditions or variables under which a tester will
determine whether an application or software system is working correctly or not. The mechanism for
determining whether a software program or system has passed or failed such a test is known as a test
oracle. In some settings, an oracle could be a requirement or use case, while in others it could be
a heuristic. It may take many test cases to determine that a software program or system is considered
sufficiently scrutinized to be released. Test cases are often referred to as test scripts, particularly when
written. Written test cases are usually collected into test suites.
Basis Path Testing
Basis path testing is a white-box technique. It allows the design and definition of a basis set of execution
paths. The test cases created from the basis set allow the program to be executed in such a way as to
examine each possible path through the program by executing each statement at least once.
To be able to determine the different program paths, the engineer needs a representation of the logical
flow of control. The control structure can be illustrated by a flow graph. A flow graph can be used to
represent any procedural design.
Next, a metric can be used to determine the number of independent paths. It is called cyclomatic
complexity, and it provides the number of test cases that have to be designed. This ensures coverage of all
program statements.
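For illustration, consider the small made-up function below: it has two decision points, so its cyclomatic complexity is V(G) = 2 + 1 = 3, and a basis set of three independent paths (hence three test cases) covers every statement. The function and the chosen inputs are assumptions used only to demonstrate the idea.

# Illustrative function with two decision points (if / elif),
# so cyclomatic complexity V(G) = 2 + 1 = 3.
def classify_grade(score):
    if score >= 75:          # decision 1
        return "distinction"
    elif score >= 40:        # decision 2
        return "pass"
    else:
        return "fail"

# One test case per independent path in the basis set:
assert classify_grade(80) == "distinction"   # decision 1 true
assert classify_grade(50) == "pass"          # decision 1 false, decision 2 true
assert classify_grade(20) == "fail"          # both decisions false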
Q10. Explain data flow mechanism.
Ans: A data flow diagram (DFD) is a significant modeling technique for analyzing and constructing
information processes. DFD literally means an illustration that explains the course or movement of
information in a process. DFD illustrates this flow of information in a process based on the inputs and
outputs. A DFD can be referred to as a Process Model.
Additionally, a DFD can be utilized to visualize data processing or a structured design. A DFD illustrates
technical or business processes with the help of the external data stores, the data flowing from one
process to another, and the results.
A designer usually draws a context-level DFD showing the relationship between the entities inside and
outside of a system as one single step. This basic DFD can then be decomposed into a lower-level
diagram demonstrating smaller steps and exhibiting details of the system that is being modelled. Numerous
levels may be required to explain a complicated system.
Q11. Explain integration testing.
Ans: Integration testing is a logical extension of unit testing. In its simplest form, two units that have
already been tested are combined into a component and the interface between them is tested. A
component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario,
many units are combined into components, which are in turn aggregated into even larger parts of the
program. The idea is to test combinations of pieces and eventually expand the process to test your
modules with those of other groups. Eventually all the modules making up a process are tested together.
Beyond that, if the program is composed of more than one process, they should be tested in pairs rather
than all at once.
Integration testing identifies problems that occur when units are combined. By using a test plan that
requires you to test each unit and ensure the viability of each before combining units, you know that any
errors discovered when combining units are likely related to the interface between units. This method
reduces the number of possibilities to a far simpler level of analysis.
You can do integration testing in a variety of ways but the following are three common strategies:
The top-down approach to integration testing requires that the highest-level modules be tested and
integrated first. This allows high-level logic and data flow to be tested early in the process, and it
tends to minimize the need for drivers. However, the need for stubs complicates test
management, and low-level utilities are tested relatively late in the development cycle. Another
disadvantage of top-down integration testing is its poor support for early release of limited
functionality.
The bottom-up approach requires the lowest-level units be tested and integrated first. These
units are frequently referred to as utility modules. By using this approach, utility modules are
tested early in the development process and the need for stubs is minimized. The downside,
however, is that the need for drivers complicates test management and high-level logic and data
flow are tested late. Like the top-down approach, the bottom-up approach also provides poor
support for early release of limited functionality.
The third approach, sometimes referred to as the umbrella approach, requires testing along
functional data and control-flow paths. First, the inputs for functions are integrated in the
bottom-up pattern discussed above. The outputs for each function are then integrated in the
top-down manner. The primary advantage of this approach is the degree of support for early release
of limited functionality. It also helps minimize the need for stubs and drivers. The potential
weaknesses of this approach are significant, however, in that it can be less systematic than the
other two approaches, leading to the need for more regression testing.
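To make the stub/driver distinction concrete, here is a small hypothetical sketch: top-down testing of a high-level report module uses a stub in place of an unfinished low-level data-access unit, while bottom-up testing of that low-level unit uses a driver that calls it directly. All names and data are invented for illustration.

# --- Top-down: test a high-level unit with a STUB for its dependency ---
def fetch_sales_stub(month):
    """Stand-in for the real (unfinished) data-access unit."""
    return [100, 250, 75]          # canned data with known totals

def generate_report(month, fetch_sales=fetch_sales_stub):
    sales = fetch_sales(month)
    return {"month": month, "total": sum(sales)}

assert generate_report("Jan")["total"] == 425   # high-level logic verified early

# --- Bottom-up: test a low-level unit directly with a DRIVER ---
def fetch_sales_real(month):
    # Imagine this reads from a database; simplified here.
    return {"Jan": [100, 250, 75]}.get(month, [])

def driver():
    """Driver exercising the low-level unit before higher levels exist."""
    assert fetch_sales_real("Jan") == [100, 250, 75]
    assert fetch_sales_real("Dec") == []

driver()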
Types
Top- down Integration testing
A top-down approach (also known as step-wise design) is essentially the breaking down of a system to
gain insight into its compositional sub-systems. In a top-down approach an overview of the system is
formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in yet
greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to
base elements. A top-down model is often specified with the assistance of "black boxes", which make it
easier to manipulate. However, black boxes may fail to elucidate elementary mechanisms or be detailed
enough to realistically validate the model.
Bottom up Integration testing
A bottom-up approach is the piecing together of systems to give rise to grander systems, thus making the
original systems sub-systems of the emergent system. In a bottom-up approach the individual base
elements of the system are first specified in great detail. These elements are then linked together to form
larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level
system is formed. This strategy often resembles a "seed" model, whereby the beginnings are small but
eventually grow in complexity and completeness.
Unit 5
Q1. Define risk management.
Ans: Risk management is a systematic approach to minimizing an organization's exposure to risk. A risk
management system includes various policies, procedures and practices that work in unison to identify,
analyze, evaluate, address and monitor risk. Risk management information is used along with other
corporate information, such as feasibility, to arrive at a risk management decision. Transferring risk to
another party, lessening the negative effect of risk and avoiding risk altogether are considered risk
management strategies.
Q2. What is measure and measurement?
Ans: Software measures are used to quantify software products, software development resources, and
the software development process. There are many characteristics of software and software projects that
can be measured, such as size, complexity, reliability, quality, adherence to process, and profitability.
Measurement of both the product and development processes has long been recognized as a critical
activity for successful software development. Good measurement practices and data enable realistic
project planning, timely monitoring of project progress and status, identification of project risks, and
effective process improvement. Appropriate measures and indicators of software artifacts such as
requirements, designs, and source code can be analyzed to diagnose problems and identify solutions
during project execution and reduce defects, rework (effort, resources, etc.), and cycle time.
Q4. What do you mean by software complexity?
Ans: Complexity is everywhere in the software life cycle: requirements, analysis, design, and of course,
implementation. Complexity is usually an undesired property of software because complexity makes
software harder to read and understand, and therefore harder to change; also, it is believed to be one
cause of the presence of defects. Of all the artifacts produced in a software project, source code is the
easiest option to measure complexity.
Q5. Explain COCOMO model.
Ans: Introduction:
The structure of empirical estimation models is a formula, derived from data collected from past software
projects, that uses software size to estimate effort. Size, itself, is an estimate, described as either lines of
code (LOC) or function points (FP). No estimation model is appropriate for all development
environments, development processes, or application types. Models must be customised (values in the
formula must be altered) so that results from the model agree with the data from the particular
environment.
The typical formula of estimation models is:
E = a + b(S)^c
where E is the estimated effort, S is the software size (in LOC or FP), and a, b and c are constants derived from data on past projects.
The relationship seen between development effort and software size is generally:
[Graph: effort E plotted against software size S, curving upward.]
This graph demonstrates that the amount of effort accelerates as size increases, i.e., the value c in the
typical formula above is greater than 1.
COCOMO:
When Barry Boehm wrote 'Software Engineering Economics', published in 1981, he introduced an
empirical effort estimation model (COCOMO - COnstructive COst MOdel) that is still referenced by the
software engineering community.
The original COCOMO model was a set of models; 3 development modes (organic, semi-detached, and
embedded) and 3 levels (basic, intermediate, and advanced).
COCOMO model levels:
Basic - predicted software size (lines of code) was used to estimate development effort.
Intermediate - predicted software size (lines of code), plus a set of 15 subjectively assessed 'cost drivers'
was used to estimate development effort.
Advanced - on top of the intermediate model, the advanced model allows phase-based cost driver
adjustments and some adjustments at the module, component, and system level.
COCOMO development modes:
Organic - relatively small, simple software projects in which small teams with good application
experience work to a set of flexible requirements.
Embedded - the software project has tight software, hardware and operational constraints.
Semi-detached - an intermediate (in size and complexity) software project in which teams with mixed
experience levels must meet a mix of rigid and less-than-rigid requirements.
COCOMO model:
The general formula of the basic COCOMO model is:
E = a(S)^b
where the values of a and b depend on the development mode:
development mode    a      b
organic             2.4    1.05
semi-detached       3.0    1.12
embedded            3.6    1.20
The intermediate and advanced COCOMO models incorporate 15 'cost drivers'. These 'drivers' multiply
the effort derived from the basic COCOMO model. The importance of each driver is assessed and the
corresponding value multiplied into the COCOMO equation, which becomes:
E = a(S)^b x product(cost drivers)
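A small sketch of this calculation, using the basic-model coefficients tabulated above; the example size of 33 KLOC and the single illustrative cost-driver value of 1.15 are assumptions, not data from any real project.

# Basic COCOMO effort estimate: E = a * (S ** b), with S in KLOC.
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(size_kloc, mode="organic"):
    a, b = MODES[mode]
    return a * (size_kloc ** b)          # effort in person-months

def intermediate_cocomo_effort(size_kloc, mode, cost_drivers):
    """Intermediate model as described above: basic effort times the product of cost drivers."""
    effort = basic_cocomo_effort(size_kloc, mode)
    for multiplier in cost_drivers:
        effort *= multiplier
    return effort

if __name__ == "__main__":
    # Hypothetical 33 KLOC organic-mode project.
    print(round(basic_cocomo_effort(33, "organic"), 1))            # ~94 person-months
    # Same project with one illustrative cost driver of 1.15.
    print(round(intermediate_cocomo_effort(33, "organic", [1.15]), 1))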
Q6. Explain Delphi model.
Ans:
Q7. Discuss the desirable characteristics of an SRS.
Ans: To properly satisfy the basic goals, an SRS should have certain properties and should contain
different types of requirements. Some of the desirable characteristics of an SRS are :
1. Correct
2. Complete
3. Unambiguous
4. Verifiable
5. Consistent
6. Ranked for importance and/or stability
An SRS is correct if every requirement included in the SRS represents something required in the final
system. It is complete if everything the software is supposed to do and the responses of the software to all
classes of input data are specified in the SRS. It is unambiguous if and only if every requirement stated has
one and only one interpretation. Requirements are often written in natural language, which is inherently
ambiguous. If the requirements are specified in a natural language, the SRS writer has to be especially
careful to ensure that there are no ambiguities.
An SRS is verifiable if and only if every stated requirement is verifiable. A requirement is verifiable if
there exists some cost-effective process that can check whether the final software meets that
requirement. It is consistent if there is no requirement that conflicts with another. Terminology can cause
inconsistencies; for example, different requirements may use different terms to refer to the same object.
There may be a logical or temporal conflict between requirements that causes inconsistencies. This occurs
if the SRS contains two or more requirements that cannot be satisfied together by any software system. For
example, suppose a requirement states that an event e is to occur before another event f. But then another
set of requirements states (directly or indirectly by transitivity) that event f should occur before event e.
Inconsistencies in an SRS can reflect some major problems.
Generally, all the requirements for software are not of equal importance. Some are critical, others are
important but not critical, and there are some which are desirable but not very important. Similarly, some
requirements are core requirements which are not likely to change as time passes, while others are
more dependent on time. Some provide more value to the users than others. An SRS is ranked for
importance and/or stability if for each requirement the importance and the stability of the requirement
are indicated. Stability of a requirement reflects the chances of it changing in the future. It can be reflected
in terms of the expected change volume. This understanding of the value each requirement provides is
essential for iterative development: selection of requirements for an iteration is based on this
evaluation.
Of all these characteristics, completeness is perhaps the most important and also the most difficult
property to establish. One of the most common defects in requirements specification is incompleteness.
Missing requirements necessitate additions and modifications to the requirements later in the
development cycle, which are often expensive to incorporate. Incompleteness is also a major source of
disagreement between the client and the supplier.
Some, however, believe that completeness in all details may not be desirable. The pursuit of completeness
can lead to specifying details and assumptions that may be commonly understood. (For example,
specifying in detail what a common operation like adding a record means.) Specifying these details can
result in a large requirements document, which has its own problems, including making validation harder.
On the other hand, if too few details are given, the chances of the developers' understanding being different
from others' increase, which can lead to defects in the software.
For completeness, a reasonable goal is to have sufficient detail for the project at hand. For example, if
the waterfall model is to be followed in the project, it is better to have detailed specifications so the need
for changes is minimized. On the other hand, for iterative development, as feedback is possible and
opportunity for change is also there, the specification can be less detailed. And if an agile approach is
being followed, then completeness should be sought only for top-level requirements, as details may not
be required in written form, and are elicited when the requirement is being implemented. Together the
performance and interface requirements and design constraints can be called nonfunctional
requirements.
Q8. Difference between milestones and deliverable.
Ans: MILESTONES AND DELIVERABLES:
Managers need information. As software is intangible, this information can only be provided
as documents that describe the state of the software being developed. Without this information, it is
impossible to judge progress, and cost estimates and schedules cannot be updated.
When planning a project, a series of milestones should be established, where a milestone is an end-point
of a software process activity. At each milestone, there should be a formal output, such as a report, that
can be presented to management. Milestone reports need not be large documents. They may simply be a
short report of achievements in a project activity. Milestones should represent the end of a distinct,
logical stage in the project. Indefinite milestones such as "Coding 80% complete", which are impossible to
validate, are useless for project management.
A "deliverable" is a project result that is delivered to the customer. It is usually delivered at the end of
some major project phase such as specification, design, etc.
Deliverables are usually milestones, but milestones need not be deliverables. Milestones may be
internal project results that are used by the project manager to check project progress but which
are not delivered to the customer.
To establish milestones, the software process must be broken down into basic activities with associated
outputs.