
Gate Degree & PG College

SOFTWARE ENGINEERING Syllabus

UNIT-1

Introduction to Software Engineering:

The term software engineering is the product of two words: software and engineering.

Software is a collection of integrated programs. It consists of carefully organized instructions and code, written by developers in any of various computer languages, together with related documentation such as requirements, design models, and user manuals.

Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.

Software Engineering is required:
Software engineering is required for the following reasons:
o To manage large software
o For more scalability
o For cost management
o To manage the dynamic nature of software
o For better quality management

Need of Software Engineering:
The need for software engineering arises because of the high rate of change in user requirements and in the environment in which the software operates.
o Huge Programming: It is simpler to build a wall than a house or a building; likewise, as the size of software becomes large, engineering must step in to give it a scientific process.
o Adaptability: If the software process were not based on scientific and engineering ideas, it would be easier to re-create new software than to scale an existing one.
o Cost: The hardware industry has demonstrated its skill, and mass manufacturing has driven down the cost of computers and electronic hardware. But the cost of software remains high if the proper process is not followed.
o Dynamic Nature: The continually growing and adapting nature of software depends heavily on the environment in which the user works. If the environment keeps changing, new upgrades must be made to the existing software.
o Quality Management: A better software development process yields a better-quality software product.

Characteristics of a good software engineer:
The features that good software engineers should possess are as follows:
o Exposure to systematic methods, i.e., familiarity with software engineering principles.
o Good technical knowledge of the project range (domain knowledge).
o Good programming abilities.
o Good communication skills, comprising oral, written, and interpersonal skills.
o High motivation.
o Sound knowledge of the fundamentals of computer science.
o Intelligence.
o Ability to work in a team.
Importance of Software Engineering:

The importance of software engineering is as follows:

1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering offers a good solution for reducing the complexity of a project: it divides a big problem into several small issues and then solves each small issue one by one, with the small problems solved independently of each other.

2. Minimizes software cost: Software needs a great deal of hard work, and software engineers are highly paid experts. A lot of manpower is required to develop software with a large amount of code. With software engineering, programmers plan everything and cut out whatever is not needed, so the cost of software production becomes lower than for software developed without a software engineering method.

3. Decreases time: Anything not made according to a plan wastes time. If you are building large software, you may need to write and run a lot of code to arrive at the definitive running version; this is a very time-consuming procedure, and if it is not well handled it can take a great deal of time. Building software according to a software engineering method therefore saves a lot of time.

4. Handles big projects: Big projects are not done in a couple of days; they need patience, planning, and management. Investing six or seven months of a company's time requires a great deal of planning, direction, testing, and maintenance. No one can accept that a company was given four months for a task and the project is still in its first stage, because the company has committed many resources to the plan and it must be completed. So, to handle a big project without problems, the company has to adopt a software engineering method.

5. Reliable software: Software should be reliable, meaning that once it is delivered it should work for at least its given time or subscription, and if any bugs appear, the company is responsible for fixing them. Because software engineering provides for testing and maintenance, there is no worry about reliability.

6. Effectiveness: Effectiveness comes when something is made according to standards. Software standards are a major target for companies aiming to make their products more effective, and software becomes more effective with the help of software engineering.

Size factors of Software Engineering:

In software engineering, size factors play a crucial role in project estimation, planning, and management. These factors help determine the scope, complexity, and effort required to complete a software project. Here are some key size factors:

1. Lines of Code (LOC):
o Measures the total number of lines in the source code.
o Often used for productivity and quality metrics.

2. Function Points (FP):
o Measures the functionality provided to the user based on inputs, outputs, user interactions, files, and external interfaces.
o Helps in comparing different software projects.

3. Use Case Points (UCP):
o Based on the number and complexity of use cases in the system.
o Considers actors and use case scenarios to estimate effort.

4. Object Points (OP):
o Measures the number of objects or classes in an object-oriented design.
o Factors in object complexity and their interactions.

5. Feature Points:
o An extension of function points that takes into account additional factors such as algorithmic complexity.

6. Story Points:
o Used in Agile methodologies.
o Measures the effort required to implement a user story based on complexity, risks, and uncertainties.

7. Effort (Person-Months):
o Measures the amount of work required in terms of person-months or person-hours.
o Derived from other size factors to plan resources.

8. Software Size Metrics:
o Kilo Lines of Code (KLOC): thousands of lines of code.
o Effective Lines of Code (eLOC): lines of code excluding comments and blank lines.

9. Complexity Metrics:
o Cyclomatic Complexity: measures the number of linearly independent paths through a program's source code.
o Halstead Complexity Measures: based on the number of operators and operands in the code.

10. Work Breakdown Structure (WBS):
o Divides the project into smaller, manageable sections or tasks.
o Each section's size can be estimated and summed to give the total project size.

These size factors are often used in conjunction with estimation models such as COCOMO (Constructive Cost Model), which uses size factors to predict the effort, cost, and duration of a software project.
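As a concrete illustration of how an estimation model turns a size factor into effort, here is a small sketch combining a naive effective-LOC counter with the Basic COCOMO equations (Effort = a·KLOC^b, Duration = c·Effort^d). The '#'-comment convention and the sample size are assumptions for illustration, not part of the notes.

```python
# Sketch: counting effective lines of code (eLOC), then feeding the size
# into Basic COCOMO. Assumes '#'-style line comments for simplicity.

def effective_loc(source: str) -> int:
    """Count lines that are neither blank nor pure comments."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

# Published Basic COCOMO constants: Effort = a * KLOC**b (person-months),
# Duration = c * Effort**d (months), for the three project classes.
COCOMO = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, duration in months)."""
    a, b, c, d = COCOMO[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

effort, months = basic_cocomo(32.0, "organic")
print(f"32 KLOC organic project: {effort:.1f} person-months, {months:.1f} months")
```

For a 32 KLOC organic project this predicts roughly 91 person-months over about 14 months, which is the kind of first-cut figure a planner would then refine.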
Quality and productivity Factors:

Quality and productivity are two critical aspects of software engineering that significantly influence the success of a project. Here are some important factors that affect quality and productivity in software engineering:

Quality Factors

1. Reliability:
o The ability of the software to perform its required functions under stated conditions for a specified period.
o Metrics: Mean Time Between Failures (MTBF), Mean Time To Repair (MTTR).

2. Maintainability:
o The ease with which the software can be modified to correct faults, improve performance, or adapt to a changed environment.
o Metrics: change request frequency, defect density, code complexity.

3. Usability:
o The degree to which the software can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction.
o Metrics: user satisfaction surveys, task completion time, error rates.

4. Efficiency:
o The capability of the software to provide appropriate performance relative to the amount of resources used.
o Metrics: response time, throughput, resource utilization.

5. Portability:
o The ease with which the software can be transferred from one environment to another.
o Metrics: number of environments supported, effort required for porting.

6. Security:
o The software's ability to protect information and data so that unauthorized persons or systems cannot read or modify them, while authorized persons or systems are not denied access.
o Metrics: number of security incidents, time to detect and respond to security threats.

7. Functionality:
o The degree to which the software performs its intended functions.
o Metrics: compliance with requirements, number of features implemented.

8. Interoperability:
o The ability of the software to interact with other systems or software.
o Metrics: number of supported integrations, ease of integration.

Productivity Factors

1. Development Process:
o The methodologies and practices used during software development.
o Metrics: development time, defect rates, adherence to schedule.

2. Team Skills and Experience:
o The knowledge, experience, and skills of the development team.
o Metrics: team experience levels, training hours, developer productivity.

3. Tools and Technologies:
o The effectiveness of the development tools, programming languages, and frameworks used.
o Metrics: tool usage frequency, defect rates, development speed.

4. Project Management:
o The practices related to planning, tracking, and managing software projects.
o Metrics: schedule adherence, budget adherence, project success rates.

5. Communication:
o The effectiveness of communication within the development team and with stakeholders.
o Metrics: frequency of meetings, communication clarity, feedback loop efficiency.

6. Requirements Management:
o The process of eliciting, documenting, and managing software requirements.
o Metrics: requirements stability, requirements clarity, changes in requirements.

7. Code Quality:
o The overall quality of the source code, including readability, maintainability, and complexity.
o Metrics: code reviews, static code analysis results, refactoring frequency.

8. Testing and Quality Assurance:
o The processes and practices used to ensure software quality through testing and validation.
o Metrics: test coverage, defect detection rate, defect resolution time.

9. Automation:
o The extent to which development and testing processes are automated.
o Metrics: build frequency, deployment frequency, automated test coverage.

10. Work Environment:
o The physical and psychological conditions under which the development team works.
o Metrics: team morale, turnover rates, work-life balance.

Balancing these quality and productivity factors is crucial for delivering high-quality software within time and budget constraints. Effective management practices, continuous improvement, and adopting the right tools and methodologies can significantly enhance both quality and productivity in software engineering.
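The MTBF and MTTR metrics listed under reliability combine into a single steady-state availability figure. A minimal sketch (the sample uptimes and repair times are invented for illustration):

```python
# Sketch: combining the reliability metrics above. MTBF is the average
# observed uptime between failures, MTTR the average repair time, and
# availability = MTBF / (MTBF + MTTR) is the fraction of time the
# system is operational.

def mtbf(uptimes_hours):
    """Mean Time Between Failures: average uptime between failures."""
    return sum(uptimes_hours) / len(uptimes_hours)

def mttr(repair_times_hours):
    """Mean Time To Repair: average time spent repairing a failure."""
    return sum(repair_times_hours) / len(repair_times_hours)

def availability(mtbf_h, mttr_h):
    """Steady-state fraction of time the system is operational."""
    return mtbf_h / (mtbf_h + mttr_h)

up = [200.0, 240.0, 160.0]   # hours of operation between failures
rep = [2.0, 4.0, 3.0]        # hours spent repairing each failure
a = availability(mtbf(up), mttr(rep))
print(f"availability = {a:.4f}")   # 200 / 203, about 0.9852
```

The same three numbers (MTBF, MTTR, availability) are what a reliability dashboard would typically report per release.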
Managerial issues in software engineering:

Managerial issues in software engineering encompass a wide range of challenges that managers face while planning, executing, and controlling software projects. Addressing these issues effectively is crucial for the successful delivery of software products. Here are some of the key managerial issues:

1. Project Planning and Scheduling:
• Estimating Project Size and Effort: Accurately estimating the size and effort required for a project can be challenging, leading to over- or under-estimation.
• Resource Allocation: Ensuring that the right resources (e.g., developers, testers) are available and efficiently utilized throughout the project lifecycle.
• Scheduling: Creating realistic timelines and milestones, and adapting to changes in scope or unexpected delays.

2. Risk Management:
• Identifying Risks: Recognizing potential risks early in the project, such as technical challenges, resource shortages, or changes in requirements.
• Mitigating Risks: Developing strategies to minimize the impact of identified risks and preparing contingency plans.

3. Scope Management:
• Scope Creep: Managing changes to the project scope to prevent uncontrolled growth and to ensure that new requirements are properly evaluated and integrated.
• Requirements Management: Ensuring that requirements are well-defined, documented, and agreed upon by all stakeholders.

4. Quality Management:
• Quality Assurance: Implementing processes to ensure that the software meets defined quality standards and is free of defects.
• Testing: Planning and executing comprehensive testing strategies to identify and fix issues before deployment.

5. Team Management:
• Team Dynamics: Managing diverse teams, addressing conflicts, and fostering a collaborative and productive work environment.
• Skill Development: Ensuring that team members have the necessary skills and providing training or mentoring as needed.
• Motivation and Retention: Keeping the team motivated and engaged, and addressing factors that contribute to employee turnover.

6. Communication and Collaboration:
• Stakeholder Communication: Ensuring clear and consistent communication with all stakeholders, including clients, team members, and management.
• Collaboration Tools: Utilizing effective tools and practices to facilitate collaboration and information sharing among team members.

7. Budget Management:
• Cost Estimation: Accurately estimating project costs and creating realistic budgets.
• Budget Control: Monitoring expenditures and ensuring that the project stays within budget.

8. Process Management:
• Adopting Methodologies: Choosing and implementing the appropriate software development methodologies (e.g., Agile, Waterfall) that fit the project needs.
• Process Improvement: Continuously evaluating and improving development processes to enhance efficiency and quality.

9. Technology Management:
• Tool Selection: Choosing the right tools and technologies that align with project requirements and team capabilities.
• Keeping Up with Trends: Staying updated with the latest industry trends and advancements to ensure that the project benefits from modern practices and technologies.

10. Change Management:
• Handling Changes: Managing changes in project scope, technology, team composition, and other factors effectively.
• Adaptability: Ensuring that the team and processes are flexible enough to adapt to changes without significant disruption.

11. Performance Monitoring:
• Tracking Progress: Regularly monitoring project progress against planned milestones and performance metrics.
• Performance Metrics: Using key performance indicators (KPIs) to assess productivity, quality, and other critical aspects.

12. Client and User Involvement:
• Requirements Gathering: Engaging clients and users in the requirements-gathering process to ensure that the final product meets their needs.
• Feedback: Soliciting and incorporating feedback from clients and users throughout the project lifecycle.

13. Compliance and Legal Issues:
• Regulatory Compliance: Ensuring that the software complies with relevant regulations and standards.
• Intellectual Property: Managing intellectual property rights and ensuring that the software does not infringe on third-party rights.

Effectively managing these issues requires a combination of strong leadership, effective communication, and the ability to adapt to changing circumstances. By addressing these managerial challenges, software engineering managers can increase the likelihood of delivering successful projects on time and within budget.
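One common quantitative aid for the risk-management issue above is risk exposure, defined as probability of occurrence times cost impact, used to rank risks for mitigation. A hypothetical sketch (the risk list and all numbers are illustrative, not from the notes):

```python
# Sketch: ranking identified risks by exposure = probability x impact,
# so that mitigation and contingency effort goes to the biggest exposures.
risks = [
    # (description, probability of occurring, impact in person-days)
    ("Key developer leaves mid-project", 0.10, 60),
    ("Requirements change after design", 0.40, 25),
    ("Third-party API is delayed",       0.25, 30),
]

def risk_exposure(probability, impact):
    """Expected loss from a risk: probability times impact."""
    return probability * impact

# Highest exposure first; this is the order to prepare mitigation plans in.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
for name, p, impact in ranked:
    print(f"{risk_exposure(p, impact):5.1f} person-days expected: {name}")
```

Note how a low-probability, high-impact risk (the departing developer, exposure 6.0) can rank below a routine but likely one (changing requirements, exposure 10.0).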
Planning a Software Project:

The objective of software project planning is to provide a framework that enables the manager to make reasonable estimates of resources, cost, and schedule. These estimates are made within a limited time frame at the beginning of a software project and should be updated regularly as the project progresses. The planning objective is achieved through a process of information discovery that leads to reasonable estimates.

Software Scope:
The first activity in software project planning is the determination of software scope. Software scope describes function, performance, constraints, interfaces, and reliability. Functions are evaluated and in some cases refined to provide more detail prior to the beginning of estimation. Both cost and schedule estimates are functionally oriented, and hence some degree of decomposition is often useful. Performance considerations encompass processing and response time requirements. Constraints identify limits placed on the software by external hardware, available memory, or other existing systems. To define scope, it is necessary to obtain the relevant information and hence get the communication process started between the customer and the developer. To accomplish this, a preliminary meeting or interview is conducted. The analyst may start by asking context-free questions: a set of questions that leads to a basic understanding of the problem, the people who want a solution, the nature of the solution desired, and the effectiveness of the first encounter itself. The next set of questions enables the analyst to gain a better understanding of the problem and the customer to voice his or her perceptions about a solution. The final set of questions, known as meta questions, focuses on the effectiveness of the meeting. A team-oriented approach, such as Facilitated Application Specification Techniques (FAST), helps to establish the scope of a project.

Resources:
The development resources needed are:
1. Development Environment (Hardware/Software Tools)
2. Reusable Software Components
3. Human Resources (People)
Each resource is specified with four characteristics: a description of the resource, a statement of availability, the chronological time at which the resource will be required, and the duration for which it will be applied. The last two characteristics can be viewed as a time window. Availability of the resource for a specified window must be established at the earliest practical time.

Human Resources:
Both organizational positions (e.g., manager, senior software engineer) and specialties (e.g., telecommunications, database) are specified. The number of people required varies for every software project and can be determined only after an estimate of development effort is made.

Reusable Software Resources:
These are the software building blocks that can reduce development costs and speed up product delivery. The four software resource categories that should be considered as planning proceeds are:

1. Off-the-shelf components:
Existing software that can be acquired from a third party or that has been developed internally for a past project. These components are ready for use and have been fully validated. Generally, the cost of acquiring and integrating such components is less than the cost of developing equivalent software.

2. Full-experience components:
Existing specifications, designs, code, or test data developed for past projects that are similar to the current project. Members of the current software team have had full experience in the application area represented by these components; therefore, modifications will be relatively low risk.

3. Partial-experience components:
Existing specifications, designs, code, or test data developed for past projects that are related to the current project but will require substantial modification. Members of the current software team have only limited experience in the application area represented by these components; therefore, modifications carry a fair degree of risk, and their use in the current project must be analyzed in detail.

4. New components:
Software components that must be built by the software team specifically for the needs of the current project.
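The four resource characteristics and the "time window" idea above can be sketched as a small data structure. The field names and the sample resource are my own illustration, not from the notes:

```python
# Sketch: a resource record with the four characteristics described above
# (description, availability, start time, duration). The last two form
# the resource's "time window".
from dataclasses import dataclass

@dataclass
class Resource:
    description: str
    available: bool       # statement of availability
    start_week: int       # chronological time the resource is required
    duration_weeks: int   # how long the resource will be applied

    def window(self):
        """The time window as (first week, last week inclusive)."""
        return (self.start_week, self.start_week + self.duration_weeks - 1)

db_specialist = Resource("database specialist", True,
                         start_week=6, duration_weeks=4)
print(db_specialist.window())   # (6, 9)
```

A planner would check each resource's `available` flag against its window as early as practical, per the paragraph above.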
Environmental Resources:
The environment that supports the software project, often called the Software Engineering Environment (SEE), incorporates hardware and software. Hardware provides a platform that supports the tools required to produce the work products. A project planner must prescribe the time window required for hardware and software and verify that these resources will be available.

The phases of a software project:
Software projects are divided into individual phases. These phases collectively, and in their chronological sequence, are termed the software life cycle (see Fig. 2.2).

Software life cycle: the time span in which a software product is developed and used, extending to its retirement. The cyclical nature of the model expresses the fact that the phases can be carried out repeatedly in the development of a software product.
Requirements analysis and planning phase
Goal:
➢ Determining and documenting:
❖ Which steps need to be carried out,
❖ The nature of their mutual effects,
❖ Which parts are to be automated, and
❖ Which resources are available for the realization of the project.
Important activities:
➢ Completing the requirements analysis,
➢ Delimiting the problem domain,
➢ Roughly sketching the components of the target system,
➢ Making an initial estimate of the scope and the economic feasibility of the planned project, and
➢ Creating a rough project schedule.
Products:
➢ User requirements,
➢ Project contract, and
➢ Rough project schedule.

Specification phase
Goal:
➢ A contract between the client and the software producer (it precisely specifies what the target software system must do and the premises for its realization).
Important activities:
➢ Specifying the system,
➢ Compiling the requirements definition,
➢ Establishing an exact project schedule,
➢ Validating the system specification, and
➢ Justifying the economic feasibility of the project.
Products:
➢ Requirements definition, and
➢ Exact project schedule.

System and components design
Goal:
➢ Determining which system components will cover which requirements in the system specification, and
➢ How these system components will work together.
Important activities:
➢ Designing the system architecture,
➢ Designing the underlying logical data model,
➢ Designing the algorithmic structure of the system components, and
➢ Validating the system architecture and the algorithms to realize the individual system components.
Products:
➢ Description of the logical data model,
➢ Description of the system architecture,
➢ Description of the algorithmic structure of the system components, and
➢ Documentation of the design decisions.

System implementation and component test
Goal:
➢ Transforming the products of the design phase into a form that is executable on a computer.
Important activities:
➢ Refining the algorithms for the individual components,
➢ Transferring the algorithms into a programming language (coding),
➢ Translating the logical data model into a physical one,
➢ Compiling and checking the syntactical correctness of the algorithms, and
➢ Testing, and syntactically and semantically correcting, erroneous system components.
Products:
➢ Program code of the system components,
➢ Logs of the component tests, and
➢ Physical data model.

System test
Goal:
➢ Testing the mutual effects of system components under conditions close to reality,
➢ Detecting as many errors as possible in the software system, and
➢ Assuring that the system implementation fulfills the system specification.

Operation and maintenance
Task of software maintenance:
➢ Correcting errors that are detected during actual operation, and
➢ Carrying out system modifications and extensions.
This is normally the longest phase of the software life cycle. Two important additional aspects:
➢ Documentation, and
➢ Quality assurance.
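As a compact summary of the phases above, here is a sketch mapping each development phase to the products it delivers (phase names and products are taken from the lists above; the dictionary form itself is just an illustration):

```python
# Summary of the life-cycle phases above and the products each delivers.
PHASE_PRODUCTS = {
    "requirements analysis and planning":
        ["user requirements", "project contract", "rough project schedule"],
    "specification":
        ["requirements definition", "exact project schedule"],
    "system and components design":
        ["logical data model description", "system architecture description",
         "algorithmic structure description", "design decisions documentation"],
    "implementation and component test":
        ["program code", "component test logs", "physical data model"],
}

for phase, products in PHASE_PRODUCTS.items():
    print(f"{phase}: {', '.join(products)}")
```

The system test and operation phases are omitted here because the notes list goals and tasks for them rather than products.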
During the development phases, the documentation should enable communication among the persons involved in the development; upon completion of the development phases, it supports the utilization and maintenance of the software product.

Quality assurance encompasses analytical, design, and organizational measures for quality planning and for fulfilling quality criteria such as correctness, reliability, user-friendliness, maintainability, efficiency, and portability.

DEFINING THE PROBLEM:
Most software projects are undertaken to provide a solution to business needs. At the beginning of a software project, the business needs are often expressed informally as part of a meeting or a casual conversation. In a more formal approach, a customer could send a Request For Information (RFI) to organizations to learn their areas of expertise and domain specifications. The customer then puts up a Request For Proposal (RFP) stating the business needs. Organizations willing to provide their services send proposals, and one of the proposals is accepted by the customer.

DEVELOPING A SOLUTION STRATEGY:
The business needs have to be understood, and the role of software in providing the solution has to be identified. Software development requires a model to drive it and track it to completion. The model provides an effective roadmap for the software team.

PLANNING THE DEVELOPMENT PROCESS:
Planning the software development process involves several important considerations. The first consideration is to define a product life-cycle model. A software project goes through various phases before it is ready to be used for practical purposes. For every project, a framework must be used to define the flow of activities such as define, develop, test, deliver, operate, and maintain a software product. There are many well-defined models that can be used, and there can also be variations to these models depending on the deliverables and milestones of the project. A model has to be selected and finalized to start a project.

The following sections discuss the various models:
a. Waterfall Model
b. Prototype Model
c. Spiral Model
d. Object-oriented life-cycle model
Waterfall Model:

The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model or classic model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping between the phases.

The Waterfall Model is the earliest SDLC approach used for software development. It illustrates the software development process in a linear sequential flow: any phase in the development process begins only when the previous phase is complete, and the phases do not overlap.

The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of projects. In the Waterfall approach, the whole process of software development is divided into separate phases, and typically the outcome of one phase acts as the input for the next phase.

The following illustration is a representation of the different phases of the Waterfall Model.

The sequential phases in the Waterfall Model are −

• Requirement Gathering and Analysis − All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.

• System Design − The requirement specifications from the first phase are studied in this phase, and the system design is prepared. The system design helps in specifying hardware and system requirements and in defining the overall system architecture.

• Implementation − With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as unit testing.

• Integration and Testing − All the units developed in the implementation phase are integrated into a system after testing of each unit. After integration, the entire system is tested for any faults and failures.

• Deployment of System − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.

• Maintenance − Some issues come up in the client environment; to fix those issues, patches are released. Better versions are also released to enhance the product. Maintenance is done to deliver these changes into the customer environment.

All these phases are cascaded, with progress seen as flowing steadily downwards (like a waterfall) through the phases; hence the name "Waterfall Model". The next phase is started only after the defined set of goals for the previous phase is achieved and signed off. In this model, phases do not overlap.

Advantages:
• Simple and easy to understand and use.
• Phases are processed and completed one at a time.
• Works well for smaller projects where the requirements are very well understood.
• Disciplined in approach.

Disadvantages:
• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• A poor model for long and ongoing projects.
• Not suitable for projects where requirements are at moderate to high risk of changing; risk and uncertainty are high with this process model.
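The no-overlap, sign-off rule described above can be sketched as a simple gate check. The phase names follow the notes; the sign-off mechanism itself is invented for illustration:

```python
# Toy sketch of the waterfall gating rule: a phase may begin only after
# every earlier phase has been completed and signed off, in order.
PHASES = [
    "Requirement Gathering and Analysis",
    "System Design",
    "Implementation",
    "Integration and Testing",
    "Deployment",
    "Maintenance",
]

def next_phase(signed_off):
    """Given the ordered list of signed-off phases, return the phase that
    may start next, or None when the project is complete."""
    for done, expected in zip(signed_off, PHASES):
        if done != expected:
            raise ValueError(
                "phases must be signed off strictly in order: "
                f"expected {expected!r}, got {done!r}"
            )
    if len(signed_off) == len(PHASES):
        return None  # all phases complete
    return PHASES[len(signed_off)]

print(next_phase(["Requirement Gathering and Analysis", "System Design"]))
```

The `ValueError` branch is exactly the waterfall constraint: trying to sign off a phase out of order is rejected.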
Spiral Model:

The spiral model, initially proposed by Boehm, is a combination of the waterfall and iterative models. Using the spiral model, the software is developed in a series of incremental releases. Each cycle of the spiral begins with a planning phase and ends with an evaluation phase. The spiral model has four phases, and a software project repeatedly passes through these phases in iterations called spirals.

Planning phase:
This phase starts with gathering the business requirements in the baseline spiral. In the subsequent spirals, as the product matures, identification of system requirements, subsystem requirements, and unit requirements is also done in this phase. This phase also includes understanding the system requirements through continuous communication between the customer and the system analyst. At the end of the spiral, the product is deployed in the identified market.

Risk Analysis:
Risk analysis includes identifying, estimating, and monitoring technical feasibility and management risks, such as schedule slippage and cost overrun. After testing the build at the end of the first iteration, the customer evaluates the software and provides feedback.

Engineering or construct phase:
The construct phase refers to the production of the actual software product in every spiral. In the baseline spiral, when the product is just being thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.

Evaluation Phase:
This phase allows the customer to evaluate the output of the project and request updates before the project continues to the next spiral. The software project repeatedly passes through all four of these phases.

Advantages:
• Flexible model.
• Project monitoring is very easy and effective.
• Risk management.
• Easy and frequent feedback from users.

Disadvantages:
• It does not work well for smaller projects.
• Risk analysis requires specific expertise.
• It is a costly and complex model.
• Project success is highly dependent on the risk analysis.

Prototype Model:

To overcome the disadvantages of the waterfall model, this model is implemented with a special factor called a prototype. It is also known as the evaluation model.

Step 1: Requirements gathering and analysis
A prototyping model starts with requirements analysis. In this phase, the requirements of the system are defined in detail. During the process, the users of the system are interviewed to learn what they expect from the system.

Step 2: Quick design
The second phase is a preliminary design or quick design. In this stage, a simple design of the system is created. It is not a complete design; it gives the user a brief idea of the system. The quick design helps in developing the prototype.

Step 3: Build a prototype
In this phase, an actual prototype is designed based on the information gathered from the quick design.

Step 4: Initial user evaluation
In this stage, the proposed system is presented to the client for an initial evaluation. It helps to find out the strengths and weaknesses of the working model. Comments and suggestions are collected from the customer and provided to the developer.

Step 5: Refining the prototype
If the user is not happy with the current prototype, the prototype is refined according to the user's feedback and suggestions. This phase does not end until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the approved final prototype.

Step 6: Implement product and maintain
Once the final system is developed based on the final prototype, it is thoroughly tested and deployed to production. The system undergoes routine maintenance to minimize downtime and prevent large-scale failures.

Advantages:
• Users are actively involved in development; therefore, errors can be detected in the initial stage of the software development process.
• Missing functionality can be identified, which helps to reduce the risk of failure; prototyping is also considered a risk-reduction activity.
• Helps team member to communicate effectively
design.It is a small working model of the required system. • Customer satisfaction exists because the customer can feel the product at a very early
stage.
Disadvantages: • During the test phase, the function of not only the new product but also of the
reused components is tested. Any deficiencies in the latter must be documented
• Prototyping is a slow and time taking process. exactly. The resulting modifications must be handled centrally in the class library
to ensure that they impact on other projects, both current and future.
• The cost of developing a prototype is a total waste as the prototype is
• Newly created classes must be tested for their general usability. If there is a
ultimately thrown away.
chance that a component could be used in other projects as well, it must be
• Prototyping may encourage excessive change requests. included in the class library and documented accordingly. This also means that
• After seeing an early prototype model, the customers may think that the actual the new class must be announced and made accessible to other programmers
product will be delivered to him soon. who might profit from it. This places new requirements on the in-house
communication structures.
• The client may lose interest in the final product when he or she is not happy with
the initial prototype.
The object-oriented life-cycle model:

• The usual division of a software project into phases remains intact with the use
of object-oriented techniques.
• The requirements analysis stage strives to achieve an understanding of the
client’s application domain.
• The tasks that a software solution must address emerge in the course of requirements
analysis.
• The requirements analysis phase remains completely independent of an
implementation technique that might be applied later.
• In the system specification phase the requirements definition describes what the The actual software life cycle recurs when new requirements arise in the
software product must do, but not how this goal is to be achieved.
company that initiates a new requirements analysis stage.
• One point of divergence from conventional phase models arises because The object and prototyping-oriented life-cycle model
implementation with object-oriented programming is marked by the assembly of
already existing components. The specification phase steadily creates new prototypes. Each time
we are confronted with the problem of having to modify or enhance
existing prototypes. If the prototypes were already implemented with
The advantages of object-oriented life-cycle model: object-oriented technology, then modifications and extensions are
particularly easy to carry out. This allows an abbreviation of the
• Design no longer is carried out independently of the later implementation because
specification phase, which is particularly important when proposed
during the design phase we must consider which components are available for the
solutions are repeatedly discussed with the client. With such an
solution of the problem. Design and implementation become more closely
approach it is not important whether the prototype serves solely for
associated, and even the choice of a different programming language can lead to
specification purposes or whether it is to be incrementally developed to
completely different program structures.
the final product. If no prototyping tools are available, object-oriented
• The duration of the implementation phase is reduced. In particular, (sub) products programming can serve as a substitute tool for modeling user
become available much earlier to allow testing of the correctness of the design. interfaces. This particularly applies if an extensive class library is
Incorrect decisions can be recognized and corrected earlier. This makes for closer available for user interface elements.
feedback coupling of the design and implementation phases.
For incremental prototyping (i.e. if the product prototype is to be used as the
• The class library containing the reusable components must be continuously basis for the implementation of the product), object-oriented programming
maintained. Saving at the implementation end is partially lost as they are also proves to be a suitable medium. Desired functionality can be added
reinvested in this maintenance. A new job title emerges, the class librarian, who stepwise to the prototypes without having to change the prototype itself.
is responsible for ensuring the efficient usability of the class library. These results in a clear distinction between the user interfaces modeled in the
specification phase and the actual functionality of the program. This is particularly important for the following reasons:

• It assures that the user interface is not changed during the implementation of the program functionality. The user interface developed in collaboration with the client remains as it was defined in the specification phase.

• In the implementation of the functionality, each time a subtask is completed, a more functional prototype results, which can be tested (preferably together with the client) and compared with the specifications. During test runs, situations sometimes arise that require rethinking the user interface. In such cases the software life cycle retreats one step and a new user interface prototype is constructed.

Since the user interface and the functional parts of the program are largely decoupled, two cycles result that share a common core. The integration of the functional classes and the user interface classes creates a prototype that can be tested and validated. This places new requirements on the user interface and/or the functionality, so that the cycle begins again.

Types of project planning:

Quality plan: Describes the quality procedures and standards that will be used in a project.
Validation plan: Describes the approach, resources and schedule used for system validation.
Configuration management plan: Describes the configuration management procedures and structures to be used.
Maintenance plan: Predicts the maintenance requirements of the system, maintenance costs and effort required.
Staff development plan: Describes how the skills and experience of the project team members will be developed.

PLANNING AN ORGANIZATION STRUCTURE:

Completing a software project is a team effort. The following options are available for applying human resources to a project that will require ‘n’ people working for ‘K’ years.
• ‘n’ individuals are assigned to ‘m’ different functional tasks.
• ‘n’ individuals are assigned to ‘m’ different functional tasks (m < n) so that informal teams are established and coordinated by a project manager.
• ‘n’ individuals are organized into ‘t’ teams, and each team is assigned one or more functional tasks.

Even though the above three approaches have their pros and cons, option 3 is most productive.

There are several roles within each software project team. Some of the roles in a typical software project are listed below:

Designation: Job Profile

Project Manager: Initiates, plans, tracks and manages the resources of an entire project.

Module Leader: A software engineer who manages and leads the team working on a particular module of the software project. The module leader conducts reviews and has to ensure the proper functionality of the module.

Analyst: A software engineer who analyzes the gathered requirements. Analysis of the requirements is done to get a clear understanding of them.

Domain Consultant: An expert who knows the system of which the software is a part. This involves technical knowledge of how the entities of the domain interface with the software being developed; for example, a banking domain consultant or a telecom domain consultant.

Reviewer: A software engineer who reviews artifacts such as project documents or code. The review can be a technical review, which scrutinizes the technical details of the artifact, or a review in which the reviewer ascertains whether or not the artifact adheres to a prescribed standard.

Architect: A software engineer involved in the design of the solution after the analyst has clearly specified the business requirements.

Developer: A software engineer who writes the code, tests it and delivers it error-free.

Tester: A software engineer who conducts tests on the completed software or a unit of the software. If any defects are found, they are logged and a report is sent to the owner of the tested unit.
Programming Team Structure


Every programming team must have an internal structure. The
best team structure for any particular project depends on the nature
of the project and the product, and on the characteristics of the
individual team members. Basic team structure includes:
a. Democratic team: team members participate in all decisions.
b. Chief programmer team: a chief programmer is assisted and supported by other team members.
c. Hierarchical team: the project leader assigns tasks, attends reviews and walkthroughs, detects problem areas, balances the workload and participates in technical activities.
Democratic Team
This was first described by Weinberg as the “egoless team”. In
an egoless team goals are set and decisions made by group consensus.
Group leadership rotates from member to member based on the tasks


to be performed and the differing abilities of the team members. Work products (requirements, design, source code, user manual, etc.) are discussed openly and are freely examined by all team members.

Advantages:
❖ Opportunity for each team member to contribute to decisions.
❖ Opportunity for team members to learn from one another.
❖ Increased job satisfaction resulting from good communication in open, non-threatening work environments.

Disadvantages:
❖ Communication overhead is required in reaching decisions.
❖ All team members must work well together.
❖ Less individual responsibility and authority can result in less initiative and less personal drive from team members.

The chief programmer team

Baker's organizational model ([Baker 1972])

➢ Important characteristics:
• The lack of a project manager who is not personally involved in system development.
• The use of very good specialists.
• The restriction of team size.

➢ The chief programmer team consists of:
• The chief programmer
• The project assistant
• The project secretary
• Specialists (language specialists, programmers, test specialists).

➢ The chief programmer is actively involved in the planning, specification and design process and, ideally, in the implementation process as well.
➢ The chief programmer controls project progress, decides all important questions, and assumes overall responsibility. The qualifications of the chief programmer need to be accordingly high.
➢ The project assistant is the closest technical coworker of the chief programmer. The project assistant supports the chief programmer in all important activities and serves as the chief programmer's representative in the latter's absence. This team member's qualifications need to be as high as those of the chief programmer.
➢ The project secretary relieves the chief programmer and all other programmers of administrative tasks.
➢ The project secretary administrates all programs and documents and assists in project progress checks. The main task of the project secretary is the administration of the project library.
➢ The chief programmer determines the number of specialists needed. Specialists select the implementation language, implement individual system components, choose and employ software tools, and carry out tests.

Advantages
• The chief programmer is directly involved in system development and can better exercise the control function.
• Communication difficulties of a pure hierarchical organization are ameliorated. Reporting on project progress is institutionalized.
• Small teams are generally more productive than large teams.

Disadvantages
• It is limited to small teams. Not every project can be handled by a small team.
• Personnel requirements can hardly be met: few software engineers have the qualifications of a chief programmer or a project assistant.
• The project secretary has an extremely difficult and responsible job, although it consists primarily of routine tasks, which gives it a subordinate position. This has significant psychological disadvantages. Due to the central position, the project secretary can easily become a bottleneck.
• The organizational model provides no replacement for the project secretary; the loss of the project secretary would have grave consequences for the remaining course of the project.

Hierarchical organizational model

➢ There are many ways to organize the staff of a project. For a long time the organization of software projects oriented itself to the hierarchical organization common to other industrial branches. Special importance is ...
UNIT-II
SOFTWARE COST FACTORS

1. Programmer ability
In an experiment, Sackman and colleagues set out to determine the relative influence of batch and time-shared access on programmer productivity. Twelve programmers, with an average of 11 years of experience, were each given two programs to write. The observed productivity variation was as high as 16:1. Individual differences in ability can be significant.

2. Product complexity
There are 3 categories of software product:
✓ Application programs
✓ Utility programs
✓ System programs
Brooks states that utility programs are 3 times as difficult to write as application programs, and that system programs are 3 times as difficult to write as utility programs: a relative difficulty of 1 (application) : 3 (utility) : 9 (system).

Boehm gives effort equations for the three levels, where PM = programmer-months and KDSI = number of thousands of delivered source instructions:
Application: PM = 2.4*(KDSI)**1.05
Utility: PM = 3.0*(KDSI)**1.12
System: PM = 3.6*(KDSI)**1.20

3. Product size
A large software product is obviously more expensive to develop than a small one. Boehm's equations indicate that the rate of increase in required effort grows with the number of source instructions at an exponential rate slightly greater than one.

4. Available time
Total project effort is sensitive to the calendar time available for project completion. Most researchers agree that software projects require more total effort if development time is compressed or expanded from the optimal time.

5. Required level of reliability
Software reliability can be defined as the probability that a program will perform a required function under stated conditions for a stated period of time. It can be expressed in terms of accuracy, robustness, completeness and consistency of the source code. Boehm describes five categories:

Category: Effect of failure
Very low: Slight inconvenience
Low: Losses easily recovered
Nominal: Moderately difficult to recover losses
High: High financial loss
Very high: Risk to human life

6. Level of technology
The level of technology in a software development project is reflected by the programming language, the abstract machine, the programming practices and the software tools used. The number of source instructions written per day is largely independent of the language used; because statements written in a high-level language expand into several machine-level statements, a high-level language yields more functionality per instruction written.

SOFTWARE COST ESTIMATION TECHNIQUES
✓ Software cost estimates are based on past performance.
✓ Historical data are used to identify cost factors and to determine the relative importance of various factors within the environment of that organization.
✓ Cost estimates can be either top-down or bottom-up.
✓ Top-down estimation first focuses on system-level costs (such as the personnel required to develop the system).
✓ Bottom-up cost estimation first estimates the cost of developing each module or subsystem.
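The three Boehm-level effort equations quoted under product complexity above can be evaluated mechanically. The sketch below is illustrative only; the function name and the 32-KDSI example are ours, while the coefficients and exponents come from the equations in the text:

```python
# Boehm's three effort equations (from the text):
#   application: PM = 2.4 * KDSI**1.05
#   utility:     PM = 3.0 * KDSI**1.12
#   system:      PM = 3.6 * KDSI**1.20
# PM = programmer-months, KDSI = thousands of delivered source instructions.
COEFFICIENTS = {
    "application": (2.4, 1.05),
    "utility":     (3.0, 1.12),
    "system":      (3.6, 1.20),
}

def effort_pm(kdsi: float, level: str) -> float:
    """Estimated effort in programmer-months for a product of `kdsi` KDSI."""
    a, b = COEFFICIENTS[level]
    return a * kdsi ** b

# For the same 32-KDSI size, system software demands far more effort than
# application software, echoing the 1:3:9 difficulty ratio quoted above.
example = {level: round(effort_pm(32.0, level), 1) for level in COEFFICIENTS}
```

Note that the effort grows slightly faster than linearly in size, which is exactly the "exponential rate slightly greater than one" observation made under product size.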
1. Expert judgement
➢ The most widely used cost estimation technique is expert judgement, which is an inherently top-down estimation technique.
➢ Expert judgement relies on the experience, background and business sense of one or more key people in the organization.
Limitations
✓ Experience can also be a liability.
✓ The expert may be confident that the project is similar to a previous one, but may have overlooked some factors that make the new project significantly different.

2. Delphi cost estimation
Developed by the Rand Corporation in 1948 to gain expert consensus without introducing the adverse side effects of group discussion. The Delphi technique can be adapted to software cost estimation in the following manner:
➢ A coordinator provides each estimator with the system definition document and a form for recording a cost estimate.
➢ The estimators complete their estimates. They may ask questions of the coordinator, but they do not discuss their estimates with one another.
➢ The coordinator prepares and distributes a summary of the estimators' responses, including any unusual rationales noted by the estimators.
➢ The estimators complete another estimate, using the results from the previous round.
➢ The process is iterated for as many rounds as required. No group discussion is allowed during the entire process.

3. Work breakdown structure
✓ A work breakdown chart can indicate either a product hierarchy or a process hierarchy.
✓ A product hierarchy identifies the product components and indicates the manner in which the components are interconnected.
✓ A process hierarchy identifies the work activities and the relationships among those activities.
✓ Some planners use both product and process hierarchies.

(Figure: Product hierarchy)

Advantages
➢ Work breakdown structure techniques identify and account for various process and product factors, and make explicit exactly which costs are included in the estimate.
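Bottom-up estimation over a work breakdown structure amounts to summing the leaf estimates up the hierarchy. A minimal sketch, assuming a nested-dict representation; the module names and person-month figures below are invented for illustration:

```python
# A work breakdown structure as a nested dict: interior nodes are
# sub-hierarchies, leaves are estimated costs (in person-months).
wbs = {
    "product": {
        "user interface": 4.0,
        "database": {
            "schema design": 1.5,
            "query layer": 3.0,
        },
        "reporting": 2.5,
    }
}

def rollup(node) -> float:
    """Bottom-up estimate: recursively sum the leaf estimates."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return float(node)

total_cost = rollup(wbs)  # 4.0 + 1.5 + 3.0 + 2.5
```

Because each leaf is listed explicitly, the roll-up makes visible exactly which costs are, and are not, included in the estimate, which is the advantage claimed above.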
4. Algorithmic cost models
➢ Algorithmic cost models are bottom-up estimators.
➢ The Constructive Cost Model (COCOMO) is an algorithmic cost model described by Boehm.
➢ COCOMO effort multipliers fall into four classes:
a. Product attributes
b. Computer attributes
c. Personnel attributes
d. Project attributes
Ex: the nominal organic-mode equations apply in the following types of situations: small to medium-size projects (2K to 32K instructions), a familiar application area, a stable and well-understood virtual machine, and in-house development effort.

Norden observed that the staffing pattern of R&D projects can be approximated by the Rayleigh distribution curve, which he represented by the following equation:

E(t) = (K / td^2) * t * e^(-t^2 / (2*td^2))

where E is the effort required at time t (an indication of the number of engineers, i.e. the staffing level, at any particular time during the project), K is the area under the curve, and td is the time at which the curve attains its maximum value. It must be remembered that Norden's results are applicable to general R&D projects and were not meant to model the staffing pattern of software development projects specifically.
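The Rayleigh staffing equation above, and the fourth-power schedule-compression penalty that follows from Putnam's relation L = Ck * K^(1/3) * td^(4/3) (discussed in the staffing-level material of this unit), can both be checked numerically. The sketch below uses invented inputs (K = 100 PM, td = 1 year, a 40-KLOC product with Ck = 8); only the formulas themselves come from the text:

```python
import math

def staffing_level(t: float, K: float, td: float) -> float:
    """Rayleigh curve E(t) = (K / td**2) * t * exp(-t**2 / (2 * td**2))."""
    return (K / td ** 2) * t * math.exp(-(t ** 2) / (2.0 * td ** 2))

def putnam_effort(size_kloc: float, ck: float, td: float) -> float:
    """Total effort K from L = Ck * K**(1/3) * td**(4/3),
    rearranged as K = L**3 / (Ck**3 * td**4)."""
    return size_kloc ** 3 / (ck ** 3 * td ** 4)

# Staffing: with K = 100 PM and td = 1.0 year, the staffing level starts at
# zero, peaks exactly at t = td, and tails off afterwards, which is why team
# size should be ramped up and down gradually rather than in large steps.
levels = [staffing_level(0.1 * i, 100.0, 1.0) for i in range(31)]  # t = 0.0 .. 3.0
peak_t = 0.1 * max(range(31), key=lambda i: levels[i])

# Schedule compression: halving the development time (1.0 -> 0.5 years)
# multiplies the required effort by 2**4 = 16.
effort_ratio = putnam_effort(40.0, 8.0, 0.5) / putnam_effort(40.0, 8.0, 1.0)
```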

Staffing level estimation:

Once the effort required to develop a software product has been determined, it is necessary to determine the staffing requirement for the project. Putnam first studied the problem of what a proper staffing pattern for software projects should be. He extended the work of Norden, who had earlier investigated the staffing pattern of research and development (R&D) projects. To appreciate the staffing pattern of software projects, Norden's and Putnam's results must be understood.

Norden's Work:
Norden studied the staffing patterns of several R&D projects. He found that the staffing pattern can be approximated by the Rayleigh distribution curve given earlier, which rises from zero to a peak at time td and then falls away.

Putnam's Work:
Putnam studied the problem of staffing of software projects and found that software development has characteristics very similar to the other R&D projects studied by Norden, and that the Rayleigh-Norden curve can be used to relate the number of delivered lines of code to the effort and the time required to develop the project. By analyzing a large number of army projects, Putnam derived the following expression:

L = Ck * K^(1/3) * td^(4/3)

The various terms of this expression are as follows:
• K is the total effort expended (in PM) in the product development, and L is the product size in KLOC.
• td corresponds to the time of system and integration testing; therefore td can be approximately considered as the time required to develop the software.
• Ck is the state-of-technology constant and reflects constraints that impede the progress of the programmer. Typical values are Ck = 2 for a poor development environment (no methodology, poor documentation and review, etc.), Ck = 8 for a good software development environment (software engineering principles are adhered to), and Ck = 11 for an excellent environment (automated tools and techniques are used in addition). The exact value of Ck for a specific project can be computed from the historical data of the organization developing it.

Putnam suggested that optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are needed at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work is required, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls. However, the staff build-up should not be carried out in large installments: the team size should be increased or decreased slowly whenever required, to match the Rayleigh-Norden curve. Experience shows that a very rapid build-up of project staff at any time during development correlates with schedule slippage.

It should be clear that a constant level of manpower throughout the project duration would lead to wastage of effort and would increase the time and cost required to develop the product. If a constant number of engineers is used over all the phases of a project, some phases will be overstaffed and others understaffed, causing inefficient use of manpower and leading to schedule slippage and increased cost.

Effect of schedule change on cost:
From Putnam's expression L = Ck * K^(1/3) * td^(4/3), it follows that

K = L^3 / (Ck^3 * td^4), or K = C / td^4,

where, for the same product size, C = L^3 / Ck^3 is a constant. Hence

K1 / K2 = td2^4 / td1^4, i.e. K ∝ 1/td^4, and therefore cost ∝ 1/td^4

(as project development effort is directly proportional to project development cost). From this expression it can be observed that when the schedule of a project is compressed, the required development effort, and hence the project cost, increases in proportion to the fourth power of the degree of compression. A relatively small compression in delivery schedule can therefore result in a substantial penalty in human effort and development cost. For example, if the estimated development time is 1 year, then in order to develop the product in 6 months, the total effort required (and hence the project cost) increases 16 times.

Software Maintenance Cost Factors:
There are two types of cost factors involved in software maintenance. These are:
o Non-Technical Factors
o Technical Factors
Non-Technical Factors:

1. Application Domain
o If the application of the program is defined and well understood, the system requirements may be definitive, and maintenance due to changing needs is minimized.
o If the application is entirely new, it is likely that the initial requirements will be modified frequently as users gain experience with the system.

2. Staff Stability
o It is simpler for the original writer of a program to understand and change it than for some other person, who must understand the program by studying its reports and code listings.
o If the implementer of a system also maintains that system, maintenance costs will be reduced.
o In practice, the nature of the programming profession is such that people change jobs regularly; it is unusual for one person to develop and maintain an application throughout its useful life.

3. Program Lifetime
o Programs are scrapped when the application becomes obsolete, or when their original hardware is replaced and conversion costs exceed rewriting costs.

4. Dependence on External Environment
o If an application is dependent on its external environment, it must be modified as that environment changes. For example, changes in a taxation system might require payroll, accounting, and stock-control programs to be modified. Taxation changes are fairly frequent, and maintenance costs for these programs are tied to the frequency of these changes.
o A program used in mathematical applications does not typically depend on humans changing the assumptions on which the program is based.

5. Hardware Stability
o If an application is designed to operate on a specific hardware configuration, and that configuration does not change during the program's lifetime, no maintenance costs due to hardware changes will be incurred.
o Hardware developments are so rapid that this situation is rare.
o The application must be changed to use new hardware that replaces obsolete equipment.

Technical Factors:
Technical factors include the following:

Module Independence
It should be possible to change one program unit of a system without affecting any other unit.

Programming Language
Programs written in a high-level programming language are generally easier to understand than programs written in a low-level language.

Programming Style
The manner in which a program is written contributes to its understandability and hence to the ease with which it can be modified.

Program Validation and Testing
o Generally, the more time and effort spent on design validation and program testing, the fewer bugs in the program, and consequently the lower the maintenance costs resulting from bug correction.
o Maintenance costs due to bug correction are governed by the type of fault to be repaired.
o Coding errors are generally relatively cheap to correct; design errors are more expensive, as they may involve rewriting one or more program units.
o Bugs in the software requirements are usually the most expensive to correct, because of the drastic redesign that is generally involved.

Documentation
o If a program is supported by clear, complete yet concise documentation, the task of understanding the application can be comparatively straightforward.
o Program maintenance costs tend to be lower for well-documented systems than for systems supplied with inadequate or incomplete documentation.

Configuration Management Techniques
o One of the significant costs of maintenance is keeping track of all system documents and ensuring that they are kept consistent.
o Effective configuration management can help control these costs.

Software Requirement Specifications:

The product of the requirements stage of the software development process is the Software Requirements Specification (SRS), also called a requirements document. This report lays a foundation for software engineering activities and is constructed once the entire set of requirements has been elicited and analyzed. The SRS is a formal report that acts as a representation of the software, enabling customers to review whether it is according to their requirements. It comprises the user requirements for the system as well as detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of programs that performs particular functions in a specific environment. It serves several goals depending on who is writing it. First, the SRS could be written by the client of a system; second, it could be written by a developer of the system. The two approaches create entirely different situations and establish different purposes for the document. In the first case, the SRS is used to define the needs and expectations of the users. In the second case, the SRS is written for validation purposes and serves as a contract document between customer and developer.

Characteristics of a good SRS:
Following are the features of a good SRS document:

1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS. An SRS is said to be correct if it covers all the needs that are truly expected from the system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1) All essential requirements, whether relating to functionality, performance, design constraints, attributes, or external interfaces.
(2) Definition of the responses of the software to all realizable classes of input data in all realizable classes of situations.
(3) Full labels and references to all figures, tables, and diagrams in the SRS, and definitions of all terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three types of possible conflict in an SRS:
(1) The specified characteristics of real-world objects may conflict. For example:
(a) The format of an output report may be described in one requirement as tabular but in another as textual.
(b) One requirement may state that all lights shall be green while another states that all lights shall be blue.
(2) There may be a logical or temporal conflict between two specified actions. For example:
(a) One requirement may specify that the program will add two inputs, while another specifies that the program will multiply them.
(b) One requirement may state that "A" must always follow "B," while another requires that "A" and "B" occur together.
(3) Two or more requirements may describe the same real-world object but use different terms for that object. For example, a program's request for user input may be called a "prompt" in one requirement and a "cue" in another. The use of standard terminology and definitions promotes consistency.

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This implies that each element is uniquely interpreted. If a term is used with multiple meanings, the requirements document should clarify the intended meaning so that it is clear and simple to understand.

5. Ranking for importance and stability: The SRS is ranked for importance and stability if each requirement in it has an identifier to indicate the significance or stability of that particular requirement. Typically, all requirements are not equally important: some may be essential, especially for life-critical applications, while others may merely be desirable. Each requirement should be identified to make these differences clear and explicit. Another way to rank requirements is to distinguish classes of requirements as essential, conditional, and optional.

8. Traceability:
(1) Backward traceability: This depends upon each requirement explicitly referencing its source in earlier documents.
(2) Forward traceability: This depends upon each element in the SRS having a unique name or reference number.
The forward traceability of the SRS is especially crucial when the software product enters the operation and maintenance phase. As code and design documents are modified, it is necessary to be able to ascertain the complete set of requirements that may be affected by those modifications.

9. Design independence: There should be an option to select from multiple design alternatives for the final system. More specifically, the SRS should not contain any implementation details.

10. Testability: An SRS should be written in such a way that it is simple to generate test cases and test plans from the document.

11. Understandable by the customer: An end user may be an expert in his or her own domain but might not be trained in computer science. Hence, the use of formal notations and symbols should be avoided.
6. Modifiability: SRS should be made as modifiable as likely and should be capable of quickly obtain too as much extent as possible. The language should be kept simple and clear.
changes to the system to some extent. Modifications should be perfectly indexed and cross-referenced.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details should be
7. Verifiability: SRS is correct when the specified requirements can be verified with a cost-effective system explained explicitly. Whereas,for a feasibility study, fewer analysis can be used. Hence, the level of
to check whether the final software meets those requirements. The requirements are verified with the help of abstraction modifies according to the objective of the SRS.
reviews.
Properties of a good SRS document
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the
referencing of each condition in future development or enhancement documentation. The essential properties of a good SRS document are the following:

There are two types of Traceability:


20 21
Concise: The SRS report should be concise and, at the same time, unambiguous, consistent, and complete. Verbose and irrelevant descriptions decrease readability and also increase the possibility of errors.

Structured: It should be well-structured. A well-structured document is simple to understand and modify. In practice, the SRS document undergoes several revisions to cope with the user requirements. Often, user requirements evolve over a period of time. Therefore, to make modifications to the SRS document easy, it is vital to make the report well-structured.

Black-box view: It should only define what the system should do and refrain from stating how to do it. This means that the SRS document should define the external behavior of the system and not discuss implementation issues. The SRS report should view the system to be developed as a black box and should define its externally visible behavior. For this reason, the SRS report is also known as the black-box specification of a system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.

Response to undesired events: It should characterize acceptable responses to unwanted events. These are called system responses to exceptional conditions.

Verifiable: All requirements of the system, as documented in the SRS document, should be verifiable. This means that it should be possible to decide whether or not a requirement has been met in an implementation.

Formal specification techniques:

Formal specification techniques in software engineering involve the use of mathematical models to define the behavior, functionality, and constraints of software systems. These techniques aim to provide a precise and unambiguous description of what a system is supposed to do, which helps in verifying and validating software through formal methods.

Key Techniques

1. Algebraic Specifications:
o Description: Specify software components in terms of algebraic equations that define the relationships between operations.
o Use Case: Typically used for abstract data types and interfaces.
o Example: Specifying a stack using operations like push, pop, and top with their corresponding axioms.
2. Model-Based Specifications:
o Description: Use state-based models to define the system's behavior through states and transitions.
o Use Case: Suitable for reactive and stateful systems.
o Example: VDM (Vienna Development Method) and Z notation.
3. Petri Nets:
o Description: A graphical and mathematical modeling tool applicable to distributed systems.
o Use Case: Modeling concurrent, asynchronous, and distributed systems.
o Example: Describing workflow processes and communication protocols.
4. Finite State Machines (FSMs):
o Description: Model the system as a finite number of states and transitions between those states based on inputs.
o Use Case: Systems with well-defined states and events.
o Example: Describing the behavior of a user interface or protocol.
5. Temporal Logic:
o Description: Use temporal operators to specify the timing of events within a system.
o Use Case: Real-time systems and systems where timing constraints are critical.
o Example: Linear Temporal Logic (LTL) and Computation Tree Logic (CTL).
6. Process Algebras:
o Description: Algebraic techniques to model and analyze the behaviors of concurrent systems.
o Use Case: Analyzing complex interactions in distributed systems.
o Example: CSP (Communicating Sequential Processes) and Pi-calculus.
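The algebraic style of specification can be made concrete: the classic stack axioms (pop(push(s, x)) = s, top(push(s, x)) = x) can be checked against a toy term model. The following is a minimal, illustrative sketch in Python — the tuple representation and the sample values are assumptions made for demonstration, not part of any formal specification toolset:

```python
# Model the stack algebra with immutable tuples as terms.
def new():      return ()
def push(s, x): return s + (x,)
def pop(s):     return s[:-1]
def top(s):     return s[-1]
def empty(s):   return s == ()

# Check the algebraic axioms on a handful of sample terms.
for s in [new(), push(new(), 1), push(push(new(), 1), 2)]:
    for x in [10, 20]:
        assert pop(push(s, x)) == s     # pop undoes push
        assert top(push(s, x)) == x     # top yields the last pushed value
        assert not empty(push(s, x))    # a pushed-on stack is never empty
assert empty(new())                     # a new stack is empty
print("all stack axioms hold on the sampled terms")
```

In a real algebraic specification these equations would be stated once, over all stacks and values; the sketch only samples a few terms to illustrate what the axioms assert.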
Benefits

• Precision: Formal specifications eliminate ambiguity, providing a clear and precise description of system behavior.
• Verification: Enable rigorous proofs of correctness and other properties using formal verification tools.
• Validation: Help in validating requirements by providing a formal model that can be analyzed and simulated.
• Documentation: Serve as unambiguous documentation that can be referred to throughout the development process.

Challenges

• Complexity: Creating formal specifications can be complex and time-consuming.
• Skill Requirements: Requires specialized knowledge in mathematical logic and formal methods.
• Scalability: May be challenging to apply to very large and complex systems.
• Tool Support: Limited availability of user-friendly tools for formal methods.

Examples of Formal Specification Languages

1. Z Notation: Used for describing and modeling computing systems, particularly in terms of their data and operations.
2. B-Method: Focuses on the specification, design, and verification of software through a mathematical approach.
3. VDM (Vienna Development Method): Provides a framework for developing precise and abstract models of software systems.
4. Alloy: A lightweight modeling language for software design that uses a relational model to describe structures and behaviors.

Formal specification techniques are powerful tools in software engineering, particularly for systems where reliability, security, and correctness are critical. However, their adoption requires balancing the benefits against the challenges and ensuring appropriate expertise and resources are available.

In software engineering, languages and processors for requirements specification are essential tools to define, analyze, and manage the requirements of a software system accurately and efficiently. They ensure that the requirements are clear, unambiguous, and traceable throughout the software development lifecycle.

Key Languages for Requirements Specification

1. Natural Language:
o Description: Uses everyday language to describe requirements.
o Use Case: Widely used due to its accessibility to all stakeholders.
o Challenges: Ambiguity and lack of precision can lead to misunderstandings.
2. Structured Natural Language:
o Description: Imposes structure on natural language to reduce ambiguity.
o Use Case: Balances readability and precision.
o Examples: Use case specifications, user stories, and structured templates like the Volere Requirements Specification Template.
3. Use Case Diagrams:
o Description: Visual representation of the interactions between users (actors) and the system.
o Use Case: Capturing functional requirements and user interactions.
o Example: UML (Unified Modeling Language) use case diagrams.
4. User Stories:
o Description: Short, simple descriptions of a feature from the user's perspective.
o Use Case: Common in agile methodologies for capturing requirements.
o Format: "As a [type of user], I want [a goal] so that [a reason]."
5. Formal Specification Languages:
o Description: Use mathematical notation to define requirements rigorously.
o Use Case: Critical systems requiring precision and unambiguity.
o Examples: Z Notation, B-Method, VDM (Vienna Development Method), Alloy.
6. Graphical Models:
o Description: Use diagrams to represent requirements and system behaviors.
o Use Case: Enhances understanding through visual representation.
o Examples: Statecharts, activity diagrams, sequence diagrams (UML, SysML).

Processors and Tools for Requirements Specification

1. Requirements Management Tools:
o Description: Software tools designed to manage, trace, and analyze requirements.
o Examples:
▪ IBM DOORS: For complex systems, supports traceability and collaboration.
▪ Jama Software: Comprehensive requirements management with collaboration and impact analysis features.
▪ Helix RM: Focuses on managing requirements and ensuring compliance and traceability.
2. Model-Based Tools:
o Description: Tools that use models to specify, analyze, and verify requirements.
o Examples:
▪ Enterprise Architect: Supports UML, SysML, BPMN for comprehensive requirements modeling.
▪ MagicDraw: Offers UML modeling and collaboration features for specifying and managing requirements.
3. Formal Methods Tools:
o Description: Tools for creating, analyzing, and verifying formal specifications.
o Examples:
▪ Alloy Analyzer: For creating and analyzing models in the Alloy language.
▪ Rodin Platform: An Eclipse-based toolset for developing and verifying models in the B-Method.
▪ Z/Eves: Supports Z notation for formal specification and verification.
4. Agile Tools:
o Description: Tools supporting agile methodologies, managing user stories and backlog items.
o Examples:
▪ JIRA: Popular agile project management tool for user stories, sprints, and backlog management.
▪ VersionOne: Comprehensive agile project management tool supporting user stories, tasks, and sprints.

Benefits of Using Specification Languages and Tools

• Clarity and Precision: Reduce ambiguity and ensure a common understanding among stakeholders.
• Traceability: Track requirements throughout the development process, ensuring that all requirements are met.
• Verification and Validation: Facilitate early detection of inconsistencies, errors, and omissions in requirements.
• Collaboration: Enhance communication and collaboration among project teams and stakeholders.

Challenges

• Complexity: Some formal specification languages and tools require specialized knowledge and expertise.
• Cost: High-quality tools and training can be expensive.
• Adoption: Resistance to change from traditional methods to formal or structured approaches.

Examples of Formal Specification Languages
1. Z Notation:
o Description: A formal specification language used for describing and modeling computing
systems.
o Use Case: Defining data and operations of systems rigorously.
2. B-Method:
o Description: Focuses on the specification, design, and verification of software through a
mathematical approach.
o Use Case: Formal development and verification of software components.
3. VDM (Vienna Development Method):
o Description: Provides a framework for developing precise and abstract models of software
systems.
o Use Case: Specifying and modeling software and system requirements.
4. Alloy:
o Description: A lightweight modeling language for software design that uses a relational model to
describe structures and behaviors.
o Use Case: Analyzing complex system structures and their properties.
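The user-story format quoted earlier — "As a [type of user], I want [a goal] so that [a reason]" — is simple enough to render from a template. A small illustrative helper follows; the role, goal, and reason shown are invented for the example:

```python
def user_story(role, goal, reason):
    # Renders the standard agile user-story template.
    return f"As a {role}, I want {goal} so that {reason}."

story = user_story(
    "registered customer",
    "to reset my password by email",
    "I can regain access without contacting support",
)
print(story)
```

Keeping stories in a uniform template like this is what makes them easy to manage in agile tools such as the ones listed above.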

UNIT-III

Software Design:

The design phase of software development deals with transforming the customer requirements, as described in the SRS document, into a form implementable using a programming language. The software design process can be divided into the following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design

Elements of a System

1. Architecture: This is the conceptual model that defines the structure, behavior, and views of a system. We can use flowcharts to represent and illustrate the architecture.
2. Modules: These are components that handle one specific task in a system. A combination of the modules makes up the system.
3. Components: These provide a particular function or group of related functions. They are made up of modules.
4. Interfaces: This is the shared boundary across which the components of a system exchange information and relate.
5. Data: This is the management of the information and data flow.

Interface Design

Interface design is the specification of the interaction between a system and its environment. This phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e., during interface design, the internals of the system are completely ignored and the system is treated as a black box. Attention is focused on the dialogue between the target system and the users, devices, and other systems with which it interacts. The design problem statement produced during the problem analysis step should identify the people, other systems, and devices, which are collectively called agents.

Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents, to which the system must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data, coming into and going out of the system.
4. Specification of the ordering and timing relationships between incoming events or messages and outgoing events or outputs.

Architectural Design

Architectural design is the specification of the major components of a system, their responsibilities, properties, interfaces, and the relationships and interactions between them. In architectural design, the overall structure of the system is chosen, but the internal details of major components are ignored. Issues in architectural design include:
1. Gross decomposition of the system into major components.
2. Allocation of functional responsibilities to components.
3. Component interfaces.
4. Component scaling and performance properties, resource consumption properties, reliability properties, and so forth.
5. Communication and interaction between components.

The architectural design adds important details ignored during the interface design. Design of the internals of the major components is ignored until the last phase of the design.

Detailed Design

Detailed design is the specification of the internal elements of all major system components, their properties, relationships, processing, and often their algorithms and data structures. The detailed design may include:

1. Decomposition of major system components into program units.
2. Allocation of functional responsibilities to units.
3. User interfaces.
4. Unit states and state changes.
5. Data and control interaction between units.
6. Data packaging and implementation, including issues of scope and visibility of program elements.
7. Algorithms and data structures.

A good software design should satisfy the following criteria:

1. Correctness: The software design should be correct as per the requirements.
2. Completeness: The design should have all components like data structures, modules, and external interfaces, etc.
3. Efficiency: Resources should be used efficiently by the program.
4. Flexibility: It should be able to accommodate changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough that it can be easily maintained by other designers.

Fundamental Concepts in Software Design:

Fundamental concepts of software design include Abstraction, Structure, Information Hiding, Modularity, Concurrency, and Verification.

1. Abstraction – Abstraction is the intellectual tool which enables us to separate the conceptual aspects of the system. For example, we may specify the LIFO property of a stack and the functional characteristics of its routines (new, push, pop, top, empty) without concern for the algorithmic details of the routines. The three types of abstraction mechanisms used are functional abstraction, data abstraction, and control abstraction.

▪ Functional Abstraction – Functional abstraction involves the use of parameterized subprograms. It allows binding different parameter values on different invocations of the subprogram. Functional abstraction can be generalized to collections of subprograms, called groups, which may be visible or hidden.

▪ Data Abstraction (abstract data typing) – Data abstraction involves specifying a data type or a data object by specifying the legal operations on objects; representation and manipulation details are suppressed. Thus, we may define the type 'stack' abstractly as a LIFO mechanism in which the routines New, Push, Pop, Top, and Empty are defined abstractly. The term abstract data type is used to denote the declaration of a datatype or object like 'stack' from which numerous instances can be created.

2. Information Hiding – When using the information hiding approach, each module/function in the system hides the internal details of its processing activities, and modules communicate only through well-defined interfaces. Information hiding may be applied to hide –

▪ Data structures, internal linkage, and the implementation details of the classes/interfaces that manipulate them
▪ The format of control blocks such as queues in an operating system
▪ Character codes and their implementation details
▪ Shifting, masking, and other machine-dependent details

3. Structure – Structure is a fundamental characteristic of computer software, and its most general form is the network structure. A computing network can be represented as a directed graph consisting of nodes and arcs, the nodes representing the processing elements that transform data and the arcs representing data links between the nodes. Also, nodes can represent data stores and the arcs data transformations.

In its simplest form, a network may specify data flow and processing steps within a single subprogram, or data flow among a collection of sequential subprograms. The process network view of software structures is as shown below –
As shown above, the network consists of nodes and links. The network consists of concurrently and sequentially executing processes involved in synchronous or asynchronous message passing. Each process consists of various groups or objects and various processing routines. Each group consists of a visible part, a static area, and a hidden part, which refers to its representation details.

4. Modularity – Modularity refers to the desirable property of software to be represented as modules, classes, or interfaces, each having operational characteristics that can be reused in different locations of the software, so that we reduce code duplication and reduce code dependency when a huge amount of code needs to be modified when something changes. For example, we may save a set of code as a function/method, a group of similar functions/methods as a compiled module (like the header .H files in C), or as the VehicleControllers.class in C#. Also, similar classes/interfaces may be compiled as an assembly, or a set of modules can be compiled as a software package, a Visual Studio project file, or .PRJ files as in C programming.

Modularity enhances design clarity and eases implementation, debugging, testing, documentation, and maintenance of the software product. Some of the desirable properties of a modular software system include –

▪ Each processing abstraction should be well defined so that it may be applicable in various situations.
▪ Each function/method in each abstraction has a well-defined purpose.
▪ Each function/method manipulates no more than one major data structure.
▪ Functions/methods that share global data structures selectively may be grouped together.
▪ Functions/methods that manipulate instances of abstract data types are encapsulated with the data structure being manipulated.

5. Modularization criteria – There are various modularization criteria to represent the software as modules, and depending on the criteria, various system structures may result. The modularization criteria include –

▪ Conventional criteria, in which each module and its submodules correspond to a processing step in the program execution sequence.
▪ Information hiding criteria, in which each module hides a difficult or changeable design decision from other modules.
▪ Data abstraction criteria, in which each module hides the representation details of a major data structure behind functions that access and modify the data structure.
▪ Levels of abstraction, in which modules and collections of modules provide a hierarchical set of increasingly complex services.
▪ Coupling and cohesion, in which a system is structured to maximize the cohesion of elements in each module and to minimize the coupling between modules.
▪ Problem modelling, in which the modular structure of the system matches the structure of the problem being solved.

6. Concurrency – Concurrent processes are those processes executing concurrently, or in parallel, utilizing the concurrency and scheduling mechanisms of the processor. Like concurrent processes, we may implement concurrent threads, or code segments, which may execute concurrently utilizing the concurrent processing power of the processor and the concurrency mechanism of the programming language used (e.g., C#, Java).

7. Coupling and Cohesion – Coupling and cohesion are applied to support the fundamental goal of software design: to structure the software product so that the number and complexity of interconnections between the various modules and classes is minimized.

▪ Coupling – Coupling is defined between two modules and relates to the degree of linkage between the coupled modules. The strength of coupling between two modules is influenced by the complexity of the interface, the type of connection, and the type of communication. If all modules communicate only by parameter passing, then the internal details of the modules can be modified without modifying the functions used in each module. Module references by content are stronger than references by module name; in the former case, the entire content has to be taken into account while referring to the module. Communication between modules involves passing of data, passing elements of control like flags, events, switches, labels, objects, etc., and modification of one module/interface's code by another. The degree of coupling is highest for modules that modify other modules, higher for control communication, and lowest for data communication.
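The ranking just described — data communication weakest, modification of other modules strongest — can be illustrated by contrasting data coupling with common (shared-block) coupling. A hedged Python sketch; the function names and the rate example are invented for illustration:

```python
# Common coupling: routines are bound through a shared global block,
# so a change to its layout ripples into every routine that touches it.
shared = {"rate": 0.18}

def total_with_global(amount):
    return amount * (1 + shared["rate"])   # depends on hidden shared state

# Data coupling: the same information passed as a plain parameter;
# internals may change freely as long as the parameter list is stable.
def total_with_param(amount, rate):
    return amount * (1 + rate)             # depends only on its inputs

print(total_with_global(100))
print(total_with_param(100, 0.18))
```

Both functions compute the same value, but only the second can be understood, tested, and modified without knowing about any other part of the program — which is why data coupling sits at the desirable end of the scale.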
The major types of coupling are as follows –

– Content Coupling – Occurs when one module modifies local data values or instructions in another module, and usually occurs in assembly language programs.

– Common Coupling – Occurs when a set of routines reference a common data block.

– Stamp Coupling – Stamp coupling is similar to common coupling except that the global data items are shared selectively among the routines. Stamp coupling is more desirable than common coupling since fewer modules will have to be modified if a shared data structure is modified.

– Data Coupling – Involves the use of parameter lists to pass data items between routines.

The most desirable form of coupling between modules is a combination of stamp and data coupling.

▪ Cohesion – Cohesion, or the internal cohesion of a module, is measured in terms of the strength of binding of elements within the module. The following are the various cohesion mechanisms –

– Coincidental Cohesion – Occurs when the elements within a module have no apparent relationship to one another.

– Logical Cohesion – Refers to some relationship among the elements of the module, such as those that perform all input/output operations or those that edit or validate data. Logically bound modules often combine several related functions in a complex and interrelated fashion.

– Temporal Cohesion – Forms complex connections like logical cohesion but is on a higher scale of binding, since all elements are executed at one time and no parameter logic is required to determine which elements to execute, such as in the case of a module performing program initialization.

– Communicational Cohesion – Refers to the same set of input/output data; the binding is higher on the binding scale than temporal binding, since the elements are executed at one time and also refer to the same data.

– Sequential Cohesion – Occurs when the output of one element is the input of another element. Sequential cohesion has a higher binding level since the module structure usually represents the problem structure.

– Functional Cohesion – Is a situation in which every element functions towards the performance of a single function, such as in the case of the data elements of a method performing sqrt().

– Informational Cohesion – Of elements in a module occurs when a complex data structure is manipulated by several routines in that module. Also, each routine in the module exhibits functional binding.

Modules and Modularization Criteria:

Architectural design has the goal of producing a well-structured, modular software system. A software module is a named entity that has the following characteristics:

Modules contain instructions, processing logic, and data structures.
Modules can be separately compiled and stored in a library.
Modules can be included in a program.
Module segments can be used by invoking a name and some parameters.
Modules can use other modules.
Ex: procedures, subroutines, functions.

Coupling and Cohesion:

The fundamental goal of software design is to structure the software product so that the number and complexity of interconnections between modules is minimized.

The strength of coupling between two modules is influenced by the complexity of the interface, the type of connection, and the type of communication. Obvious relationships result in less complexity.

Loosely coupled modules have connections established by referring to other modules. Connections between modules involve passing of data and passing of control elements (flags, switches, labels, and procedure names). The degree of coupling is:
lowest – data communication
higher – control communication
highest – modifying other modules.

Coupling can be ranked as follows:

a. Content coupling: when one module modifies local data values or instructions in another module.
b. Common coupling: modules are bound together by global data structures.
c. Control coupling: involves passing of control flags between modules so that one module controls the sequence of processing steps in another module.
d. Stamp coupling: similar to common coupling except that global data items are shared selectively among routines that require the data.
e. Data coupling: involves the use of parameter lists to pass data items between routines.

• The internal cohesion of a module is measured in terms of the strength of binding of elements within the module.
• Cohesion elements occur on the scale of weakest to strongest as follows:
a. Coincidental cohesion: the module is created from a group of unrelated instructions that appear several times in other modules.
b. Logical cohesion: implies some relationship among the elements of the module.
Ex: a module performs all I/O operations.
c. Temporal cohesion: all elements are executed at one time and no parameter logic is required to determine which elements to execute.
d. Communicational cohesion: elements refer to the same set of input or output data.
Ex: 'print and punch' the output file is communicationally bound.
e. Sequential cohesion: of elements occurs when the output of one element is the input for the next element.
Ex: 'read next transaction and update master file'.
f. Functional cohesion: is a strong type of binding of elements in a module because all elements are related to the performance of a single function.
Ex: compute square root, obtain random number, etc.
g. Informational cohesion: occurs when the module contains a complex data structure and several routines to manipulate the data structure.

Design notations:

Dynamic
Data flow diagrams (DFDs).
State transition diagrams (STDs).
State charts.
Structure diagrams.

Static
Entity Relationship Diagrams (ERDs).
Class diagrams.
Structure charts.
Object diagrams.

Data Flow Diagrams (DFDs):

A notation developed in conjunction with structured systems analysis/structured design (SSA/SD).
Used primarily for pipe-and-filter styles of architecture.
Graph–based diagrammatic notation.
There are extensions for real-time systems that distinguish control flow from data flow.

DFDs: Diagrammatic elements

External entity: A producer or consumer of information that resides outside the bounds of the system to be modeled.

Process: A transformation of information (a function) that resides within the bounds of the system to be modeled.

Data flow: A data object; the arrowhead indicates the direction of data flow.

Data store: A repository of data that is to be stored for use by one or more processes; may be as simple as a buffer or queue or as sophisticated as a relational database.

State Transition Diagrams (STDs):

Used for capturing state transition behavior in cases where there is an intuitive finite collection of states.
Derives from the notion of a finite state automaton.
Graph–based diagrammatic notation.
Labeled nodes correspond to states.
Arcs correspond to transitions.
Arcs are labeled with events and actions (actions can cause further events to occur).
E.g.: a telephone call!
Describes a single underlying process.

State charts:

Developed by David Harel.
A generalization of STDs: states can have zero, one, two or more STDs contained within.
Related to Petri nets.
Higraph–based diagrammatic notation.
Labeled nodes correspond to states.
Arcs correspond to transitions.
Arcs are labeled with events and actions (actions can cause further events to occur).
Describes one or more underlying processes.
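The telephone-call example mentioned under STDs can be written down as a transition table: labeled states, with arcs labeled by events. A minimal sketch — the state and event names are invented for illustration:

```python
# State transition table for a simplified telephone call:
# (state, event) -> next state
transitions = {
    ("idle",      "lift handset"):   "dial tone",
    ("dial tone", "dial number"):    "ringing",
    ("ringing",   "callee answers"): "connected",
    ("ringing",   "hang up"):        "idle",
    ("connected", "hang up"):        "idle",
}

def step(state, event):
    # Undefined (state, event) pairs raise, as in a finite automaton
    # where a transition simply does not exist.
    return transitions[(state, event)]

state = "idle"
for event in ["lift handset", "dial number", "callee answers", "hang up"]:
    state = step(state, event)
print(state)  # -> idle
```

The dictionary is exactly the arc set of the diagram: each key is a labeled arc's source state and event, and each value is its target state, so the code and the drawing stay in one-to-one correspondence.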

Structure Diagrams:
Used in Jackson Structured Programming.
Used to describe several kinds of things:
Ordered hierarchical structure.
Sequential processing.
Based on the idea of regular languages:
Sequencing.
Selection.
Iteration.
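The three regular-language constructs just listed map directly onto code: sequencing is one statement after another, selection is a conditional, and iteration is a loop. A tiny illustration (the processing rule itself is invented for the example):

```python
def process(records):
    total = 0                 # sequencing: one step after another
    for r in records:         # iteration: repeat a component
        if r >= 0:            # selection: choose between components
            total += r
        else:
            total -= r        # i.e., accumulate absolute values
    return total

print(process([3, -1, 2]))  # -> 6
```

This correspondence is the core idea of Jackson Structured Programming: the program's control structure is derived from the (regular) structure of the data it processes.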
Entity Relationship Diagrams (ERDs):
Structure Charts
Based on the fundamental notion of a module.
Used in structured systems analysis/structured design (SSA/SD).
Graph–based diagrammatic notation:
a structure chart is a collection of one or more node labeled rooted directed acyclic graphs.
Each graph is a process.
Nodes and modules are synonymous.
A directed edge from module M1 to module M2 captures the fact that M1 directly uses in
some way the services provided by M2.
Definitions: The fan-in of a module is the count of the number of arcs directed toward the
module. The fan-out of a module is the count of the number of arcs outgoing from the
module.
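Fan-in and fan-out, as defined above, can be computed directly from the edge list of a structure chart. The module names in this sketch are hypothetical:

```python
from collections import defaultdict

# A structure chart as a list of directed edges (caller, callee).
# Module names are made up for illustration.
edges = [
    ("main", "read_input"),
    ("main", "process"),
    ("main", "write_output"),
    ("process", "validate"),
    ("read_input", "validate"),
]

fan_in = defaultdict(int)   # arcs directed toward a module
fan_out = defaultdict(int)  # arcs outgoing from a module
for caller, callee in edges:
    fan_out[caller] += 1
    fan_in[callee] += 1

print(fan_out["main"])     # 3
print(fan_in["validate"])  # 2
```

A module with high fan-in is widely reused; a module with very high fan-out often signals that it is coordinating too much and may need further decomposition.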
Strategy of Design:

A good system design strategy is to organize the program modules in such a way that they are easy to develop initially and easy to change later. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be composed and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach

1. Top-down Approach: This approach starts with the identification of the main components and then decomposes them into their more detailed sub-components.
We know that a system is composed of more than one sub-system, and each contains a number of components. Further, these sub-systems and components may have their own sets of sub-systems and components, creating a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then decomposes it into more than one sub-system or component based on some characteristics. Each sub-system or component is then treated as a system and decomposed further. This process keeps running until the lowest level of the system in the top-down hierarchy is reached.
Top-down design starts with a generalized model of the system and keeps on defining its more specific parts. When all components are composed, the whole system comes into existence.
Top-down design is more suitable when the software solution needs to be designed from scratch and specific details are unknown.

2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy, as shown in the figure. This approach is suitable in the case of an existing system.
The bottom-up design model starts with the most specific and basic components. It proceeds by composing higher-level components out of basic or lower-level components. It keeps creating higher-level components until the desired system evolves into one single component. With each higher level, the amount of abstraction increases.
A bottom-up strategy is more suitable when a system needs to be created from some existing system, where the basic primitives can be used in the newer system.

Neither the top-down nor the bottom-up approach is practical on its own. Instead, a good combination of both is used.

Walkthrough
A walkthrough is a method of conducting an informal group or individual review. In a walkthrough, the author describes and explains the work product in an informal meeting to peers or a supervisor to get feedback. Here, the validity of the proposed solution for the work product is checked.
• It is cheaper to make changes when the design is on paper rather than at the time of conversion. A walkthrough is a static method of quality assurance. Walkthroughs are informal meetings, but with a purpose.

INSPECTION
• An inspection is defined as a formal, rigorous, in-depth group review designed to identify problems as close to their point of origin as possible. Inspections improve the reliability, availability, and maintainability of a software product.
• Anything readable that is produced during software development can be inspected. Inspections can be combined with structured, systematic testing to provide a powerful tool for creating defect-free programs.
• Inspection activity follows a specified process and participants play well-defined roles. An inspection team consists of three to eight members who play the roles of moderator, author, reader, recorder and inspector.

UNIT-IV
USER INTERFACE DESIGN:

The user interface is the front-end application view with which the user interacts in order to use the software. The user can manipulate and control the software as well as the hardware by means of the user interface.

User interface design creates an effective communication medium between a human and a computer. The UI provides the fundamental platform for human-computer interaction.
The software becomes more popular if its user interface is:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a
command prompt, where the user types the command and feeds it to the
system. The user needs to remember the syntax of the command and its
use.
2. Graphical User Interface: The Graphical User Interface provides a simple interactive interface for interacting with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user interprets the software.
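The command-line style described above can be made concrete with a short sketch. Below, a minimal command-line interface for a hypothetical copy tool is defined with Python's standard `argparse` module; the tool name and flags are assumptions for illustration, and note how the user must recall the exact option syntax:

```python
import argparse

# Minimal command-line interface: the user must remember flag names and syntax.
parser = argparse.ArgumentParser(
    prog="copytool",
    description="Copy a file (illustrative only).",
)
parser.add_argument("source", help="file to copy")
parser.add_argument("dest", help="destination path")
parser.add_argument("--verbose", action="store_true", help="print progress messages")

# Simulate the user typing: copytool notes.txt backup.txt --verbose
args = parser.parse_args(["notes.txt", "backup.txt", "--verbose"])
print(args.source, args.dest, args.verbose)
```

A GUI would instead expose the same three choices as visible widgets (file pickers and a checkbox), trading memorization for recognition.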
User Interface Design Process
The analysis and design process of a user interface is iterative and can be represented by a spiral model. The analysis and design process of a user interface consists of four framework activities.

1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill and knowledge, type of user, etc. Based on the user profiles, users are divided into categories, and from each category requirements are gathered. Based on the requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user performs to establish the goals of the system are identified, described and elaborated. The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to the interface?
3. Does the interface hardware accommodate space, light, or noise constraints?
4. Are there special human factors considerations driven by environmental factors?

2. Interface Design
The goal of this phase is to define the set of interface objects and actions, i.e., the control mechanisms that enable the user to perform desired tasks. Indicate how these control mechanisms affect the system. Specify the action sequence of tasks and subtasks, also called a user scenario. Indicate the state of the system when the user performs a particular task. Always follow the three golden rules stated by Theo Mandel. Design issues such as response time, command and action structure, error handling, and help facilities are considered as the design model is refined. This phase serves as the foundation for the implementation phase.

3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that enables usage scenarios to be evaluated. As the iterative design process continues, a user interface toolkit that allows the creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment can be used for completing the construction of the interface.

4. Interface Validation
This phase focuses on testing the interface. The interface should be able to perform tasks correctly and to handle a variety of tasks. It should achieve all the user's requirements. It should be easy to use and easy to learn. Users should accept the interface as a useful one in their work.

User Interface Design Golden Rules
The following are the golden rules stated by Theo Mandel that must be followed during the design of the interface.
Place the user in control:
1. Define the interaction modes in such a way that does not force the user into unnecessary or undesired actions: The user should be able to easily enter and exit a mode with little or no effort.
2. Provide for flexible interaction: Different people will use different interaction mechanisms; some might use keyboard commands, some might use the mouse, some might use a touch screen, etc. Hence, all interaction mechanisms should be provided.
3. Allow user interaction to be interruptible and undoable: When a user is performing a sequence of actions, the user must be able to interrupt the sequence to do some other work without losing the work that has already been done. The user should also be able to undo an operation.
4. Streamline interaction as skill level advances and allow the interaction to be customized: An advanced or highly skilled user should be given the chance to customize the interface as he or she wants, which allows different interaction mechanisms so that the user does not get bored using the same interaction mechanism.
5. Hide technical internals from casual users: The user should not be aware of the internal technical details of the system. The user should interact with the interface just to do his or her work.
6. Design for direct interaction with objects that appear on-screen: The user should be able to use and manipulate the objects that are present on the screen to perform a necessary task. By this, the user feels in easy control of the screen.

Reduce the User's Memory Load
1. Reduce demand on short-term memory: When users are involved in complex tasks, the demand on short-term memory is significant. So the interface should be designed to reduce the need to remember previously performed actions, given inputs, and results.
2. Establish meaningful defaults: An initial set of defaults should always be provided for the average user; if a user needs to add some new features, he or she should be able to add the required features.
3. Define shortcuts that are intuitive: Mnemonics, i.e., keyboard shortcuts for performing actions on the screen, should be usable by the user.
4. The visual layout of the interface should be based on a real-world metaphor: If anything represented on the screen is a metaphor for a real-world entity, users will understand it easily.
5. Disclose information in a progressive fashion: The interface should be organized hierarchically, i.e., on the main screen the information about a task, an object, or some behavior should be presented first at a high level of abstraction. More detail should be presented after the user indicates interest with a mouse pick.

Make the Interface Consistent
1. Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide indicators consistently so that the user knows the context of the work being done. The user should also know from which page he or she navigated to the current page, and where it is possible to navigate from the current page.
2. Maintain consistency across a family of applications: In the development of a set of applications, all should follow and implement the same design rules so that consistency is maintained among applications.
3. If past interactive models have created user expectations, do not make changes unless there is a compelling reason: Once a particular interactive sequence has become standard (e.g., Ctrl+S to save a file), the user expects this in every application she encounters.

User interface design is a crucial aspect of software engineering, as it is the means by which users interact with software applications. A well-designed user interface can improve the usability and user experience of an application, making it easier to use and more effective.

Key Principles for Designing User Interfaces
1. User-centered design: User interface design should be focused on the needs and preferences of the user. This involves understanding the user's goals, tasks, and context of use, and designing interfaces that meet their needs and expectations.
2. Consistency: Consistency is important in user interface design, as it helps users to understand and learn how to use an application. Consistent design elements such as icons, color schemes, and navigation menus should be used throughout the application.
3. Simplicity: User interfaces should be designed to be simple and easy to use, with clear and concise language and intuitive navigation. Users should be able to accomplish their tasks without being overwhelmed by unnecessary complexity.
4. Feedback: Feedback is significant in user interface design, as it helps users to understand the results of their actions and confirms that they are making progress towards their goals. Feedback can take the form of visual cues, messages, or sounds.
5. Accessibility: User interfaces should be designed to be accessible to all users, regardless of their abilities. This involves considering factors such as color contrast, font size, and assistive technologies such as screen readers.
6. Flexibility: User interfaces should be designed to be flexible and customizable, allowing users to tailor the interface to their own preferences and needs.
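The golden rule about interruptible and undoable interaction is commonly implemented with a history stack: every action records enough information to reverse itself. A minimal sketch, with made-up "document" actions:

```python
# Undoable interaction via a history stack.
history = []    # records how to reverse each completed action
document = []   # the user's work product

def do_append(text):
    """Perform an action and remember how to undo it."""
    document.append(text)
    history.append(("append", text))

def undo():
    """Reverse the most recent action; do nothing if there is none."""
    if not history:
        return
    action, text = history.pop()
    if action == "append":
        document.pop()  # the appended text is always the last element

do_append("hello")
do_append("world")
undo()              # reverses only the most recent action
print(document)     # ['hello']
```

Because each undo pops exactly one history entry, the user can interrupt a sequence of actions at any point without losing the work completed before the interruption.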
Real-time systems:

A real-time system means that the system is subjected to real-time constraints, i.e., the response should be guaranteed within a specified timing constraint, or the system should meet the specified deadline. Examples: flight control systems, real-time monitors, etc.

Types of real-time systems based on timing constraints:
1. Hard real-time system: This type of system can never miss its deadline. Missing the deadline may have disastrous consequences. The usefulness of results produced by a hard real-time system decreases abruptly and may become negative if tardiness increases. Tardiness means how late a real-time system completes its task with respect to its deadline. Example: flight controller system.
2. Soft real-time system: This type of system can miss its deadline occasionally, with some acceptably low probability. Missing the deadline has no disastrous consequences. The usefulness of results produced by a soft real-time system decreases gradually with an increase in tardiness. Example: telephone switches.
3. Firm real-time system: These are systems that lie between hard and soft real-time systems. In firm real-time systems, missing a deadline is tolerable, but the usefulness of the output decreases with time. Examples of firm real-time systems include online trading systems, online auction systems, and reservation systems.

Reference model of the real-time system:
Our reference model is characterized by three elements:
1. A workload model: It specifies the application supported by the system.
2. A resource model: It specifies the resources available to the application.
3. Algorithms: It specifies how the application system will use resources.

Terms related to real-time systems:
1. Job: A job is a small piece of work that can be assigned to a processor and may or may not require resources.
2. Task: A set of related jobs that jointly provide some system functionality.
3. Release time of a job: It is the time at which the job becomes ready for execution.
4. Execution time of a job: It is the time taken by the job to finish its execution.
5. Deadline of a job: It is the time by which a job should finish its execution. A deadline is of two types: absolute deadline and relative deadline.
6. Response time of a job: It is the length of time from the release time of a job to the instant when it finishes.
7. The maximum allowable response time of a job is called its relative deadline.
8. The absolute deadline of a job is equal to its relative deadline plus its release time.
9. Processors are also known as active resources. They are essential for the execution of a job. A job must have one or more processors in order to execute and proceed towards completion. Examples: computer, transmission links.
10. Resources are also known as passive resources. A job may or may not require a resource during its execution. Examples: memory, mutex.
11. Two resources are identical if they can be used interchangeably; otherwise they are heterogeneous.

Advantages:
• Real-time systems provide immediate and accurate responses to external events, making them suitable for critical applications such as air traffic control, medical equipment, and industrial automation.
• They can automate complex tasks that would otherwise be impossible to perform manually, thus improving productivity and efficiency.
• Real-time systems can reduce human error by automating tasks that require precision, accuracy, and consistency.
• They can help to reduce costs by minimizing the need for human intervention and reducing the risk of errors.
• Real-time systems can be customized to meet specific requirements, making them ideal for a wide range of applications.

Disadvantages:
• Real-time systems can be complex and difficult to design, implement, and test, requiring specialized skills and expertise.
• They can be expensive to develop, as they require specialized hardware and software components.
• Real-time systems are typically less flexible than other types of computer systems, as they must adhere to strict timing requirements and cannot be easily modified or adapted to changing circumstances.
• They can be vulnerable to failures and malfunctions, which can have serious consequences in critical applications.
• Real-time systems require careful planning and management, as they must be continually monitored and maintained to ensure they operate correctly.

HUMAN FACTORS:

Human factors are imperative for the design and development of any software work. They present the underlying ideas for incorporating these factors into the software life cycle. Many giant companies have come to recognise that the success of a product depends upon a solid human factors design. Human factors discovers and applies information about human behaviour, abilities, limitations and other characteristics to the design of tools, machines, systems, tasks, jobs and environments for productive, safe, comfortable and effective human use.

The study of human factors is essential for every software manager, since he or she must be acquainted with how the staff members interact with each other. Generally, software products are used by a variety of people, and it is necessary to take into account the abilities of such a group to make the software more useful and popular.

Objective of human factors design:

The purpose of human factors design is to create products that meet the operability and learnability goals. The design should meet the user's needs by being not only effective and efficient but also of high quality, while keeping an eye on the major concern of the customer in most cases, that is, affordability.

The engineering discipline for designers and developers must focus on the following:
• Users and their psychology.
• The amount of work that the user must do, including task goals, performance requirements and group communication requirements.
• Quality and performance.
• Information required by users and their job.

Benefits:
• Elevated user satisfaction.
• Decreased training time and costs.
• Reduced operator stress.
• Reduced product liability.
• Decreased operating costs.
• Less operational error.

Based approach to human factors:

People often do not take human factors seriously, because they are often regarded as common sense. Many companies heavily channel their resources and time towards factors of software development like planning, management and control. They often neglect the fact that they must present their product in such a way that it is easy to learn and implement, and that it should be aesthetic in nature.

Interface designers and engineering psychologists apply systematic human factors techniques to produce designs for hardware and software.

A systematic approach is required in the human factors design process, and thus usability is required. Usability is a software quality characteristic that surveys software usability costs and benefits; it can simply be defined as an external attribute of software quality. Involving users in the development life cycle ensures that the product is user friendly and is widely accepted.

Usability aims at the following:
• Shortening the time to accomplish tasks.
• Reducing the number of mistakes made.
• Reducing learning time.
• Improving people's satisfaction with a system.

Benefits of usability:
• Elevated sales and consumer satisfaction.
• Increased productivity and efficiency.
• Decreased training costs and time.
• Lower support and maintenance costs.
• Reduced documentation and support costs.
• Increased satisfaction, performance and productivity.

For a software product to be successful with the customer, a software engineer needs to develop the product in such a way that it is easy to understand, learn and use; human factors play a very important role in the software life cycle.

A software engineer must always keep in mind the end user who is going to use the product, and should make things as simple as possible while providing the best, at the same time not being too hard on his or her pocket. Usability testing deals with the effective designing of a product.

Human-computer Interaction:

The Human-computer interaction (HCI) program will play a leading role in the creation of tomorrow's exciting new user interface design software and technology, by supporting the broad spectrum of fundamental research that will ultimately transform the human-computer interaction experience, so that the computer is no longer a distracting focus of attention.

Computer:
A computer system comprises various elements, each of which affects the user of the system. Input devices for interactive use allow text entry, drawing and selection from the screen:
➢ Text entry: traditional keyboard, phone text entry.
➢ Pointing: mouse, but also touch pads.
Output display devices for interactive use:
➢ Different types of screen, mostly using the same form of bitmap display.
➢ Large displays and situated displays for shared and public use.

Memory:
Short-term memory: RAM.
Long-term memory: magnetic and optical disks; capacity limitations related to document and video storage.

Processing:
The effects when systems run too slow or too fast; the myth of the infinitely fast machine. Limitations on processing speed.

Instead of workstations, computers may be in the form of embedded computational machines, such as parts of microwave ovens. Because the techniques for designing these interfaces bear so much relationship to the techniques for designing workstation interfaces, they can be profitably treated together. Human-computer interaction, by contrast, studies both the mechanism side and the human side, but of a narrower class of devices.

Human:
Humans are limited in their capacity to process information. This has important implications for design. Information is received and responses are given via a number of input and output channels:
➢ Visual channel.
➢ Auditory channel.
➢ Movement.
Information is stored in memory:
➢ Sensory memory.
➢ Short-term memory.
➢ Long-term memory.
Information is processed and applied:
➢ Reasoning.
➢ Problem solving.
➢ Error.

Interaction:
The communication between the user and the system: their interaction framework has four parts:
1. User
2. Input
3. System
4. Output

Interaction models help us to understand what is going on in the interaction between user and system. They address the translations between what the user wants and what the system does.

Human-computer interaction is concerned with the joint performance of tasks by humans and machines; the structure of communication between human and machine; and human capabilities to use machines.

The goals of HCI are to produce usable and safe systems, as well as functional systems. In order to produce computer systems with good usability, developers must attempt to:
➢ Understand the factors that determine how people use technology.
➢ Develop tools and techniques to enable building suitable systems.
➢ Achieve efficient, effective and safe interaction.
➢ Put people first.

HCI arose as a field from intertwined roots in computer graphics, operating systems, human factors, ergonomics, cognitive psychology and the systems part of computer science. A key aim of HCI is to understand how humans interact with computers, and to represent how knowledge is passed between the two.

Interaction styles:
Interaction can be seen as a dialogue between the computer and the user. Some applications have very distinct styles of interaction. We can identify some common styles:
• Command line interface
• Menus
• Natural language
• Form-fills and spreadsheets
• WIMP

Command line interface:
A way of expressing instructions to the computer directly; commands can be function keys, single characters, or short abbreviations.
➢ Suitable for repetitive tasks.
➢ Better for expert users than novices.
➢ Offers direct access to system functionality.

Menus:
A set of options displayed on the screen. Options are visible, so they demand less recall; they rely on recognition, so names should be meaningful. Selection is made using the mouse, or numeric or alphabetic keys.
A menu system can be:
➢ Purely text based, with options presented as numbered choices, or
➢ Graphical, with the menu appearing in a box and choices made either by typing the initial letter or by moving around with arrow keys.

Form-filling interfaces:
➢ Primarily for data entry or data retrieval.
➢ Screen like a paper form.
➢ Data put in the relevant places.

WIMP interface:
➢ Windows
➢ Icons
➢ Menus
➢ Pointers

Windows: Areas of the screen that behave as if they were independent terminals.
• Can contain text or graphics.
• Can be moved or resized.
• Scroll bars allow the user to move the contents of the window up and down, or from side to side.
• Title bars describe the name of the window.

Icon: A small picture or image, used to represent some object in the interface, often a window. Windows can be closed down to this small representation, allowing many windows to be accessible. Icons can be many and various, from highly stylized to realistic representations.
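A purely text-based menu of the kind described above can be sketched in a few lines; the option names are invented for illustration:

```python
# Text-based menu: options are visible, so the user recognizes rather than recalls.
OPTIONS = ["Open file", "Save file", "Print", "Quit"]

def render_menu(options):
    """Present the options as numbered choices, one per line."""
    return "\n".join(f"{i}. {name}" for i, name in enumerate(options, start=1))

def select(options, key):
    """Map a typed numeric key to the chosen option; reject out-of-range input."""
    if not 1 <= key <= len(options):
        raise ValueError(f"choose a number between 1 and {len(options)}")
    return options[key - 1]

print(render_menu(OPTIONS))
print("You chose:", select(OPTIONS, 2))  # -> Save file
```

Because the valid choices are printed on screen, meaningful names and visible numbering do the work that memorized syntax does in a command-line interface.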
Pointers: An important component, since the WIMP style relies on pointing at and selecting things such as icons and menu items.
➢ Usually achieved with a mouse.
➢ Joysticks, track balls, cursor keys or keyboard shortcuts are also used in wide variety.

Menus: A choice of operations or services that can be performed is offered on the screen; the required option is selected with the pointer.
➢ Problem – menus can take up a lot of screen space.
➢ Solution – use pull-down or pop-up menus.
➢ Pull-down menus are dragged down from a single title at the top of the screen.
➢ Pop-up menus appear when a particular region of the screen is clicked on.

Interaction devices:
Different tasks, different types of data and different types of users all require different user interface devices. In most cases, interface devices are either input or output devices; a touch screen, for example, combines both.
➢ Interface devices correlate to the human senses.
➢ Nowadays, a device is usually designed either for input or for output.

Input devices:
Most commonly, personal computers are equipped with text input and pointing devices. For text input, the QWERTY keyboard is the standard solution, but this depends on the purpose of the system. At the same time, the mouse is not the only imaginable pointing device. Alternatives for similar but slightly different purposes include the touchpad, track ball and joystick.

Output devices:
Output from a personal computer in most cases means output of visual data. Devices for dynamic visualisation include the traditional cathode ray tube (CRT) and the liquid crystal display (LCD). Printers are also a very important device for visual output, but are substantially different from screens in that their output is static.

The subject of HCI is very rich both in terms of the disciplines it draws from as well as the opportunities for research. The study of user interfaces provides a double-sided approach to understanding how humans and machines interact. From studying human psychology, we can design better interfaces for people to interact with computers.

Human-Computer Interface Design:

The overall process for designing a user interface begins with the creation of different models. The intention of computer interface design is to learn the ways of designing user-friendly interfaces or interactions.

Interface Design Models:

Four different models come into play when a human-computer interface (HCI) is to be designed.

The software engineer creates a design model, a human engineer (or the software engineer) establishes a user model, the end user develops a mental image that is often called the user's model or the system perception, and the implementers of the system create a system image.

Task Analysis and Modelling:
Task analysis and modelling can be applied to understand the tasks that people currently perform and map them into a similar set of tasks.

For example, assume that a small software company wants to build a computer-aided design system explicitly for interior designers. By observing a designer at work, the engineer notices that the interior design is comprised of a number of activities: furniture layout, fabric and material selection, wall and window covering selection, presentation, costing and shopping. Each of these major tasks can be elaborated into subtasks. For example, furniture layout can be refined into the following tasks:

(1) Draw floor plan based on room dimensions;
(2) Place windows and doors at appropriate locations;
(3) Use furniture templates to draw scaled furniture outlines on floor plan;
(4) Move furniture outlines to get best placement;
(5) Label all furniture outlines;
(6) Draw dimensions to show location; and
(7) Draw perspective view for customer.

Subtasks 1 to 7 can each be refined further. Subtasks 1 to 6 will be performed by manipulating information and performing actions with the user interface. On the other hand, subtask 7 can be performed automatically in software and will result in little direct user interaction.
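The task decomposition described above is naturally represented as a tree of tasks and subtasks; a minimal sketch, with the furniture-layout subtasks abbreviated and the other major tasks left empty for brevity:

```python
# Task analysis as a tree: each major task maps to its list of subtasks.
task_model = {
    "interior design": {
        "furniture layout": [
            "draw floor plan",
            "place windows and doors",
            "draw scaled furniture outlines",
            "move outlines to best placement",
            "label outlines",
            "draw dimensions",
            "draw perspective view",   # automated: little direct user interaction
        ],
        "fabric and material selection": [],
        "costing and shopping": [],
    }
}

def count_subtasks(model):
    """Total number of leaf subtasks across all major tasks."""
    return sum(len(subs) for tasks in model.values() for subs in tasks.values())

print(count_subtasks(task_model))  # 7
```

Walking such a tree gives the designer a checklist of the interface objects and actions each leaf subtask will require.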
Desing issues: routines or objects that facilitate certain of windows, menus, device interaction,
error messages, commands, and many other elements of an interactive
As the design of a user interface evolves, four common design issues
environment.
almost all ways surface: system response time, user help facilities, error
information handling, and command labelling. Design Evaluation:
System response time is the primarily complaint for many interactive After the preliminary design has been completed, an operational user
systems. In general, system response time is measured from the point at which interface prototype has been created. The protype is evaluated by the user, who
the user performs some control action until the software responds with desired provides the designer with direct comments about the efficiency of the interface.
output or action.
In addition, if formal evaluation techniques are used (eg.
System response has two important characteristics length and variability. Questionaires, rating sheets), the designers may extract information from this
If the system response time too long, user frustration and stress is the inevitable information (eg. 80 percent of all users did not like the mechanism for saving data
result. files).
Variability refers to the deviation from average response time, and in many Design modifications are made based on user input and the next-
ways, it is the important of the response time characteristics. level prototype is created. The evaluation cycle continues until no further
modifications to the interface design are necessary.
In many cases, modern software provides on-line help facilities that enable a user to get a question answered or resolve a problem without leaving the interface.
Two different types of help facility are encountered: integrated and add-on. An integrated help facility is designed into the software from the beginning. An add-on help facility is added to the software after the system has been built; in many ways, it is really an on-line user's manual with limited query capability. There is little doubt that the integrated help facility is preferable to the add-on approach.
A poorly worded error message provides no real indication of what is wrong or where to look for additional information, and it does nothing to assuage user anxiety or to help correct the problem. Good error messages follow these guidelines:
• The message should describe the problem in jargon that the user can understand.
• The message should provide constructive advice for recovering from the error.
• The message should indicate any negative consequences of the error.
Implementation Tools:
The process of user interface design is iterative: a design model is implemented as a prototype and then modified based on user comments. To accommodate this iterative design approach, a broad class of interface design and prototyping tools has evolved. Called user interface toolkits, these tools provide routines or objects that facilitate the creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment.
Interface design:
Interface design is one of the most important parts of software design. It is crucial in the sense that user interaction with the system takes place through the various interfaces provided by the software product.
Think of the days of text-based systems, where the user had to type a command on the command line to execute even a simple task.
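A command interpreter for such a text-based system must decode everything the user types. Below is a minimal sketch of parsing one hypothetical "verb argument /switch=value" convention; the syntax rules are invented for illustration and do not come from any real shell:

```python
# Minimal sketch of decoding a typed command. The "/key=value" switch
# syntax is a hypothetical convention, not any real shell's grammar.
def parse_command(line: str):
    verb, *tokens = line.split()
    args, switches = [], {}
    for tok in tokens:
        if tok.startswith("/") and "=" in tok:   # switch, e.g. /v=on
            key, value = tok[1:].split("=", 1)
            switches[key] = value
        else:                                    # positional argument
            args.append(tok)
    return verb, args, switches

print(parse_command("copy a.txt b.txt /v=on"))
# The user must remember the verb, the argument order and every
# switch: concise for experts, but error-prone for novices.
```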
Example of a command line interface:
• run prog1.exe /i=2 message=on
The above command line executes the program prog1.exe with an input i=2 and with messages during execution set to on. Although such a command line interface gives the user the liberty to run a program with a concise command, it is difficult for a novice user and is error-prone. It also requires the user to remember the commands, with the various details of their options, as shown above. Example of a menu with options being asked from the user (refer to Figure 3.11).
This simple menu allows the user to execute the program with the options available as a selection, and further has options for exiting the program and going back to the previous screen. Although it provides greater flexibility than the command line option and does not need the user to remember the commands, the user still cannot navigate to the desired option from this screen; at best, the user can go back to the previous screen to select a different option.
Modern graphical user interfaces provide tools for easy navigation and interactivity, allowing the user to perform different tasks.
The following are the advantages of a Graphical User Interface (GUI):
• Various pieces of information can be displayed, allowing the user to switch to a different task directly from the present screen.
• Useful graphical icons and pull-down menus reduce the typing effort by the user.
• Keyboard shortcuts are provided to perform frequently performed tasks.
• Simultaneous operation of various tasks is possible without losing the present context.
Any interface design is targeted at users of different categories:
• Expert users with adequate knowledge of the system and application.
• Average users with reasonable knowledge.
• Novice users with little or no knowledge.
The following are the elements of good interface design:
• The goal and the intention of each task must be identified.
• The important thing about designing interfaces is maintaining consistency. Use of a consistent colour scheme, messages, and terminology helps.
• Develop standards for good interface design and stick to them.
• Use icons wherever possible to provide appropriate messages.
• Allow the user to undo the current command. This helps in undoing mistakes committed by the user.
• Provide context-sensitive help to guide the user.
• Use a proper navigational scheme for easy navigation within the application.
• Discuss with current users to improve the interface.
• Think from the user's perspective.
• The text appearing on the screen is the primary source of information exchange between the user and the system. Avoid using abbreviations. Be very specific in communicating a mistake to the user; if possible, provide the reason for the error.
• Navigation within the screen is important, and is especially useful for data-entry screens where the keyboard is used intensively to input data.
• Use of colour should be of secondary importance. Keep in mind users accessing the application on a monochrome screen.
• Expect the user to make mistakes, and provide appropriate measures to handle such errors through proper interface design.
• Grouping of data elements is important. Group related data items accordingly.
• Justify the data items.
• Avoid high-density screen layouts. Keep a significant amount of the screen blank.
• Make sure an accidental double click, instead of a single click, does not do something unexpected.
• Provide a file browser. Do not expect the user to remember the path of the required file.
• Provide keyboard shortcuts for frequently done tasks. This saves time.
• Provide an on-line manual to help the user in operating the software.
• Always allow a way out (i.e., cancellation of an action already completed).
• Warn the user about critical tasks, like deletion of a file or updating of critical information.
• Programmers are not always good interface designers. Take the help of expert professionals who understand human perception better than programmers.
• Include all possible features in the application, even if the feature is available in the operating system.
• Word messages carefully, in a user-understandable manner.
• Develop the navigational procedure prior to developing the user interface.
Example:
1. Modelling a system which has user-controlled display options.
2. The user can select one of three choices.
3. The choices determine the size of the current window display.
4. So the designers came up with a schema and presented the first prototype:
Select screen display
  FULL
  HALF
  PANEL
Problem:
➢ User testing shows the system breaks when a user selects more than one option.
➢ The designer fixes it and presents a second prototype.
➢ But isn't this the original prototype?
➢ The designer has 'improved' it: the user can now only select one checkbox.
➢ The designer has broken the guidelines regarding selection controls.
Guidelines for using selection controls:
➢ Use radio buttons to indicate one or more options that must be either on or off, but which are mutually exclusive.
➢ Use checkboxes to indicate one or more options that must be either on or off, but which are not mutually exclusive.
Extending the specification:
➢ The design must satisfy our specification.
➢ The design must also satisfy the guidelines.
➢ Find a way to specify the selection widget guidelines.
➢ Ensure the described property holds in our system.
➢ So, they extend the specification and present a revised prototype.
➢ Present the user interface document.
• You present the UI document in electronic form or paper form.
Benefits of standards:
1. The goal of UI design is to make the user interaction as simple and efficient as possible.
2. Your users or customers see a consistent UI within and between applications.
3. Reduced costs for support, user training packages and job aids.
4. Most important, customer satisfaction: your users will see reduced errors, training requirements, and frustration time per transaction.
5. Reduced cost and effort for system maintenance.
Interface standards:
A user interface is the system by which people (users) interact with a machine.
Why do we need standards?
➢ Despite the best efforts of HCI, we are still getting it wrong.
➢ We specify the system behaviour.
➢ We validate our specification.
➢ We test the code and prove the correctness of our system.
➢ It is not just a design issue or a usability testing issue.
History of user interface standards
• In 1965, human factors specialists worked to make user interfaces accurate and easy to learn.
• In 1985, we realised that usability was not enough: we needed consistency, and standards became important.
• User interface standards are very effective when you are developing, testing or designing any new site or application, or when you are revising over 50 percent of the pages in an existing application or site.
Creating a user interface standard helps you to create user interfaces that are consistent and easy to understand.
Types of standards:
There are 3 types of standards:
Methodological standards: a checklist to remind developers of the tasks needed to create usable systems, such as user interviews, task analysis and design.
Design standards: this is the building code, a set of absolute legal requirements that ensure a consistent look and feel.
Design principles: good design principles are specific and research-based, and developers work well within the design standards rules.
Building the design standards:
The major activities when building these standards are:
➢ Project kick-off and planning
• You collaborate with key members of the project team to define the goals and scope of the user interface standards.
• This includes whether the UI document is to be considered a guideline, standard or style guide, which UI technology it will be based on, and who should participate in its development.
• You work closely with your team and other stakeholders to identify your key business needs and business flows.
➢ Gather user interface samples
Based on the information and direction received from your team, you begin by reviewing your major business applications and extracting examples for the UI standard. This is an iterative process that takes feedback from as wide an audience as is appropriate.
➢ Develop the user interface document
The document itself includes:
• How to change and update the document.
• Common UI elements and when to use them.
• General navigation, graphic look and feel (or style), error handling, and messages.
➢ Review with the team
• This is an iterative process that takes feedback from as wide an audience as is appropriate.
• The standard is reviewed and refined with your team and stakeholders in a consensus-building process.
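The selection-control guidelines above can be modelled without any GUI toolkit: the only behavioural difference is whether choosing one option clears the others. A sketch with invented class names (they do not come from any real widget library):

```python
# Invented model classes illustrating the selection-control guideline:
# a radio group enforces mutual exclusion, a checkbox group does not.
class RadioGroup:
    """One option at a time: the options are mutually exclusive."""
    def __init__(self, options):
        self.options = list(options)
        self.selected = None
    def select(self, option):
        assert option in self.options
        self.selected = option        # picking one replaces the previous pick

class CheckboxGroup:
    """Any number of options: the options are independent."""
    def __init__(self, options):
        self.options = list(options)
        self.selected = set()
    def toggle(self, option):
        assert option in self.options
        self.selected ^= {option}     # toggle membership independently

display = RadioGroup(["FULL", "HALF", "PANEL"])
display.select("FULL")
display.select("HALF")                # replaces FULL: exactly one remains
print(display.selected)               # -> HALF

flags = CheckboxGroup(["grid", "ruler", "toolbar"])
flags.toggle("grid")
flags.toggle("ruler")                 # both stay on
print(sorted(flags.selected))         # -> ['grid', 'ruler']
```

This is why the faulty prototype broke the guideline: it offered checkboxes but enforced radio-button behaviour, so the widget's appearance no longer matched its semantics.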
UNIT-V
What is Software Quality?
Software quality shows how good and reliable a product is. To give an example, consider functionally correct software: it performs all the functions laid out in the SRS document, but has a virtually unusable user interface. Even though it is functionally correct, we do not consider it to be a high-quality product.
Software Quality Assurance (SQA):
Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the set of activities that ensure that processes, procedures and standards are suitable for the project and implemented correctly.
Software Quality Assurance is a process that works in parallel with software development. It focuses on improving the process of development of the software so that problems can be prevented before they become major issues. Software Quality Assurance is a kind of umbrella activity that is applied throughout the software process.
What is quality?
Quality in a product or service can be defined by several measurable characteristics. Each of these characteristics plays a crucial role in determining the overall quality.
Software Quality Assurance (SQA) encompasses:
• An SQA process.
• Specific quality assurance and quality control tasks (including technical reviews and a multitiered testing strategy).
• Effective software engineering practice (methods and tools).
• Control of all software work products and the changes made to them.
• A procedure to ensure compliance with software development standards (when applicable).
• Measurement and reporting mechanisms.
Elements of Software Quality Assurance (SQA)
1. Standards: The IEEE, ISO, and other standards organizations have produced a broad array of software engineering standards and related documents. The job of SQA is to ensure that the standards that have been adopted are followed and that all work products conform to them.
2. Reviews and audits: Technical reviews are a quality control activity performed by software engineers for software engineers. Their intent is to uncover errors. Audits are a type of review performed by SQA personnel (people employed in the organization) with the intent of ensuring that quality guidelines are being followed for software engineering work.
3. Testing: Software testing is a quality control function that has one primary goal: to find errors. The job of SQA is to ensure that testing is properly planned and efficiently conducted.
4. Error/defect collection and analysis: SQA collects and analyzes error and defect data to better understand how errors are introduced and which software engineering activities are best suited to eliminating them.
5. Change management: SQA ensures that adequate change management practices have been instituted.
6. Education: Every software organization wants to improve its software engineering practices. A key contributor to improvement is the education of software engineers, their managers, and other stakeholders. The SQA organization, as a key proponent and sponsor of educational programs, takes the lead in software process improvement.
7. Security management: SQA ensures that appropriate process and technology are used to achieve software security.
8. Safety: SQA may be responsible for assessing the impact of software failure and for initiating the steps required to reduce risk.
9. Risk management: The SQA organization ensures that risk management activities are properly conducted and that risk-related contingency plans have been established.
Software Quality Assurance (SQA) focuses
Software Quality Assurance focuses on the following:
• Software's portability: Portability refers to the software's ability to be easily transferred or adapted to different environments or platforms without needing significant modifications. This ensures that the software can run efficiently across various systems, enhancing its accessibility and flexibility.
• Software's usability: Usability refers to how easy and intuitive it is for users to interact with and navigate through the application. A high level of usability ensures that users can effectively accomplish their tasks with minimal confusion or frustration, leading to a positive user experience.
• Software's reusability: Reusability involves designing components or modules that can be reused in multiple parts of the software or in different projects. This promotes efficiency and reduces development time by eliminating the need to reinvent the wheel for similar functionalities, enhancing productivity and maintainability.
• Software's correctness: Correctness refers to the software's ability to produce the desired results under specific conditions or inputs. Correct software behaves as expected, without errors or unexpected behaviours, meeting the requirements and specifications defined for its functionality.
• Software's maintainability: Maintainability refers to how easily the software can be modified, updated, or extended over time. Well-maintained software is structured and documented in a way that allows developers to make changes efficiently without introducing errors or compromising its stability.
• Software's error control: Error control involves implementing mechanisms to detect, handle, and recover from errors or unexpected situations gracefully. Effective error control ensures that the software remains robust and reliable, minimizing disruptions to users and providing a smoother overall experience.
Software Quality Assurance (SQA) includes:
1. A quality management approach.
2. Formal technical reviews.
3. A multi-testing strategy.
4. Effective software engineering technology.
5. Measurement and reporting mechanisms.
Major Software Quality Assurance (SQA) Activities
1. SQA Management Plan: Make a plan for how you will carry out SQA throughout the project. Think about which set of software engineering activities is best for the project, and check the skill level of the SQA team.
2. Set the Checkpoints: The SQA team should set checkpoints, and evaluate the performance of the project on the basis of the data collected at the different checkpoints.
3. Measure Change Impact: The changes made to correct an error sometimes reintroduce more errors. Keep a measure of the impact of each change on the project, and retest the changed software to check the compatibility of the fix with the whole project.
4. Multi-testing Strategy: Do not depend on a single testing approach. When you have a lot of testing approaches available, use them.
5. Manage Good Relations: In the working environment, maintaining good relations with the other teams involved in the project development is mandatory. A bad relationship between the SQA team and the programming team will impact the project directly and badly. Don't play politics.
6. Maintaining Records and Reports: Comprehensively document and share all QA records, including test cases, defects, changes, and cycles, for stakeholder awareness and future reference.
7. Review Software Engineering Activities: The SQA group identifies and documents the processes. The group also verifies the correctness of the software product.
8. Formalize Deviation Handling: Track and document software deviations meticulously, and follow established procedures for handling variances.
Benefits of Software Quality Assurance (SQA)
1. SQA produces high-quality software.
2. A high-quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA is beneficial in the condition of no maintenance being needed for a long time.
5. High-quality commercial software increases the market share of the company.
6. It improves the process of creating software.
7. It improves the quality of the software.
8. It cuts maintenance costs. Get the release right the first time, and your company can forget about it and move on to the next big thing; release a product with chronic issues, and your business bogs down in a costly, time-consuming, never-ending cycle of repairs.
Disadvantages of Software Quality Assurance (SQA)
There are a number of disadvantages of quality assurance:
• Cost: Adding more resources for the betterment of the product increases the budget.
• Time-Consuming: Testing and deployment of the project take more time, which can cause delays in the project.
• Overhead: SQA processes can introduce administrative overhead, requiring documentation, reporting, and tracking of quality metrics. This additional administrative burden can sometimes outweigh the benefits, especially for smaller projects.
• Resource-Intensive: SQA requires skilled personnel with expertise in testing methodologies, tools, and quality assurance practices. Acquiring and retaining such talent can be challenging and expensive.
• Resistance to Change: Some team members may resist the implementation of SQA processes, viewing them as bureaucratic or unnecessary. This resistance can hinder the adoption and effectiveness of quality assurance practices within an organization.
• Not Foolproof: Despite thorough testing and quality assurance efforts, software can still contain defects or vulnerabilities. SQA cannot guarantee the elimination of all bugs or issues in software products.
• Complexity: SQA processes can be complex, especially in large-scale projects with multiple stakeholders, dependencies, and integration points. Managing the complexity of quality assurance activities requires careful planning and coordination.
Goals and Measures of Software Quality Assurance:
Software quality simply means measuring how well the software is designed (the quality of design) and how well the software conforms to that design (the quality of conformance). Software quality describes the degree to which a software component meets specified requirements and the user's or customer's needs and expectations.
Software Quality Assurance (SQA) is a planned and systematic pattern of the activities necessary to provide a high degree of confidence regarding the quality of a product. It provides a quality assessment of the quality control activities and helps in determining the validity of the data or procedures used for determining quality. It generally monitors the software processes and methods used in a project, to ensure and maintain the quality of the software.
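One concrete form of the monitoring described above is to tally defect reports by the activity that introduced them, so that quality-control effort can be aimed where defects actually originate. A small sketch; the defect records are invented for illustration:

```python
# Invented defect records, illustrating the kind of monitoring SQA
# performs: tally defects by the activity that introduced them to see
# where process improvement would pay off most.
from collections import Counter

defects = [
    {"id": 1, "introduced_in": "requirements", "severity": "major"},
    {"id": 2, "introduced_in": "design",       "severity": "minor"},
    {"id": 3, "introduced_in": "coding",       "severity": "major"},
    {"id": 4, "introduced_in": "coding",       "severity": "minor"},
    {"id": 5, "introduced_in": "coding",       "severity": "major"},
]

by_activity = Counter(d["introduced_in"] for d in defects)
print(by_activity.most_common())   # coding introduced the most defects
```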
Goals of Software Quality Assurance:
• Quality assurance consists of a set of reporting and auditing functions.
• These functions are useful for assessing and controlling the effectiveness and completeness of quality control activities.
• It ensures the management of data which is important for product quality.
• It also ensures that the software which is developed meets and complies with the standard quality assurance.
• It ensures that the end result or product meets and satisfies the user and business requirements.
• It finds or identifies defects or bugs, and reduces the effect of these defects.
Measures of Software Quality Assurance:
There are various measures of software quality. These are given below:
1. Reliability – It includes aspects such as availability, accuracy, and recoverability of the system to continue functioning under specific use over a given period of time. For example, the recoverability of a system from a shut-down failure is a reliability measure.
2. Performance – It means measuring the throughput of the system using system response time, recovery time, and start-up time. It is a type of testing done to measure the performance of a system under a heavy workload, in terms of responsiveness and stability.
3. Functionality – It represents that the system satisfies the main functional requirements. It simply refers to the required and specified capabilities of a system.
4. Supportability – There are a number of other requirements or attributes that a software system must satisfy. These include testability, adaptability, maintainability, scalability, and so on. These requirements generally enhance the capability to support the software.
5. Usability – It is the capability or degree to which a software system is easy to understand and can be used by its specified users or customers to achieve specified goals with effectiveness, efficiency, and satisfaction. It includes aesthetics, consistency, documentation, and responsiveness.
Software Quality Assurance (SQA) SET 2
Software Quality Assurance consists of a set of activities that monitor the software engineering processes and methods used to ensure quality.
Software Quality Assurance (SQA) Encompasses
1. A quality management approach.
2. Effective software engineering technology (methods and tools).
3. Formal technical reviews applied throughout the software process.
4. A multi-tiered testing strategy.
5. Control of software documentation and the changes made to it.
6. A procedure to ensure compliance with software development standards (when applicable).
7. Measurement and reporting mechanisms.
Software Quality
Software quality is defined in different ways, but here it means conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and the implicit characteristics that are expected of all professionally developed software.
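The "conformance to explicitly stated requirements" in this definition can be made concrete as the fraction of specified requirements whose acceptance checks pass. A toy sketch; the requirement names and results below are invented:

```python
# Invented acceptance-check results: quality of conformance viewed as
# the share of explicitly stated requirements the product meets.
checks = {
    "REQ-1: validate all operator input ranges": True,
    "REQ-2: recover from shut-down failure":     True,
    "REQ-3: respond within 2 seconds":           False,
    "REQ-4: log every failed login attempt":     True,
}

passed = sum(checks.values())          # True counts as 1
conformance = passed / len(checks)

print(f"{passed}/{len(checks)} requirement checks pass ({conformance:.0%})")
```

Quality of design is harder to score mechanically, which is why this kind of checklist only captures the conformance half of the definition.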
Following are the quality management system models under which a software system is normally created:
1. CMMI
2. Six Sigma
3. ISO 9000
Note: There may be many other models for quality management, but the ones mentioned above are the most popular.
Software Quality Assurance (SQA) Activities
Software Quality Assurance is composed of a variety of tasks associated with two different groups:
1. The software engineers, who do the technical work.
2. The SQA group, which has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
Basically, software engineers address quality (and perform quality assurance and quality control activities) by applying solid technical methods and measures, conducting formal technical reviews, and performing well-planned software testing.
Preparing an SQA Plan for a Project
This type of plan is developed during project planning and is reviewed by all interested parties. The quality assurance activities performed by the software engineering team and the SQA group are governed by the plan. The plan identifies:
• Evaluations to be performed.
• Audits and reviews to be performed.
• Standards that are applicable to the project.
• Procedures for error reporting and tracking.
• All the documents to be produced by the SQA group.
• The amount of feedback provided to the software project team.
Measuring Software Quality using Quality Metrics:
In software engineering, software measurement is done based on software metrics, where these metrics are measures of the various characteristics of a piece of software.
Software Quality Assurance (SQA) assures the quality of the software, and a set of SQA activities is continuously applied throughout the software process. Software quality is measured based on software quality metrics.
There are a number of metrics available based on which software quality can be measured, but among them there are a few of the most useful metrics, which are essential in software quality measurement:
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
Now let's understand each quality metric in detail.
1. Code Quality – Code quality metrics measure the quality of the code used for software project development. Maintaining the quality of the software code by writing bug-free and semantically correct code is very important for good software project development. In code quality, both quantitative metrics (the number of lines, complexity, functions, rate of bug generation, etc.) and qualitative metrics (readability, code clarity, efficiency, maintainability, etc.) are measured.
2. Reliability – Reliability metrics express the reliability of the software in different conditions: whether the software is able to provide the exact service at the right time is checked. Reliability can be checked using Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR).
3. Performance – Performance metrics are used to measure the performance of the software. Each piece of software is developed for some specific purpose. Performance metrics measure whether the software is fulfilling the user requirements, and analyze how much time and how many resources it uses to provide the service.
4. Usability – Usability metrics check whether the program is user-friendly or not. Each piece of software is used by an end user, so it is important to measure whether the end user is happy using the software.
5. Correctness – Correctness is one of the important software quality metrics, as it checks whether the system or software works correctly, without any error, satisfying the user. Correctness gives the degree to which each function provides its service as developed.
6. Maintainability – Each software product requires maintenance and upgrading. Maintenance is an expensive and time-consuming process, so if the software product provides easy maintainability, we can say the software quality is up to the mark. Maintainability metrics include the time required to adapt to new features/functionality, Mean Time to Change (MTTC), and performance in changing environments.
7. Integrity – Software integrity is important in terms of how easily the software can be integrated with other required software (which increases its functionality), and how well integration by unauthorized software, which increases the chances of cyberattacks, is controlled.
8. Security – Security metrics measure how secure the software is. In the age of cyber terrorism, security is the most essential part of every piece of software. Security assures that there are no unauthorized changes and no fear of cyber attacks while the software product is in use by the end user.
SOFTWARE RELIABILITY
Software reliability is defined as the probability of failure-free operation of a software system for a specified time in a specified environment.
DEFINITIONS OF SOFTWARE RELIABILITY
The key elements of the definition include the probability of failure-free operation, the length of time of failure-free operation, and the given execution environment. Failure intensity is a measure of the reliability of a software system operating in a given environment.
Factors Influencing Software Reliability
• A user's perception of the reliability of a software system depends upon two categories of information:
o The number of faults present in the software.
o The way users operate the system. This is known as the operational profile.
• The fault count in a system is influenced by the following:
o The size and complexity of the code.
o The characteristics of the development process used.
o The education, experience, and training of the development personnel.
o The operational environment.
Applications of Software Reliability
The applications of software reliability include:
• Comparison of software engineering technologies:
o What is the cost of adopting a technology?
o What is the return from the technology, in terms of cost and quality?
• Measuring the progress of system testing: the failure intensity measure tells us about the present quality of the system; a high intensity means more tests are to be performed.
• Controlling the system in operation: the amount of change made to a software system for maintenance affects its reliability.
• Better insight into software development processes: quantification of quality gives us a better insight into the development processes.
FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS
System functional requirements may specify error checking, recovery features, and system failure protection. System reliability and availability are specified as part of the non-functional requirements for the system.
Example: An air traffic control system fails once in two years.
SYSTEM RELIABILITY SPECIFICATION SOFTWARE RELIABILITY METRICS
• Hardware reliability focuses on the probability a hardware component Reliability metrics are units of measure for system reliability. System reliability
fails. is measured by counting the number of operational failures and relating these to
demands made on the system at the time of failure. A long-term measurement
• Software reliability focuses on the probability a software component will
program is required to assess the reliability of critical systems.
produce an incorrect output.
PROBABILITY OF FAILURE ON DEMAND
• The software does not wear out and it can continue to operate after a bad
result. The probability system will fail when a service request is made. It is useful
when requests are made on an intermittent or infrequent basis. It is appropriate
• Operator reliability focuses on the probability when a system user makes
for protection systems where service requests may be rare and consequences
an error.
can be serious if service is not delivered. It is relevant for many safety-critical
FAILURE PROBABILITIES systems with exception handlers.
If there are two independent components in a system and the operation of the RELIABILITY METRICS
system depends on them both then, P(S) = P (A) + P (B)
• Probability of Failure on Demand (PoFoD)
If the components are replicated then the probability of failure is P(S) = P (A) n
o PoFoD = 0.001.
which means that all components fail at once.
o For one in every 1000 requests the service fails per time unit.
FUNCTIONAL RELIABILITY REQUIREMENTS
• Rate of Fault Occurrence (RoCoF)
• The system will check all operator inputs to see that they fall within their
required ranges. o RoCoF = 0.02.
• The system will check all disks for bad blocks each time it is booted. o Two failures for each 100 operational time units of operation.
• The system must be implemented in using a standard implementation of • Mean Time to Failure (MTTF)
Ada.
o The average time between observed failures (aka MTBF)
NON-FUNCTIONAL RELIABILITY SPECIFICATION
o It measures time between observable system failures.
The required level of reliability must be expressed quantitatively. Reliability is
o For stable systems MTTF = 1/RoCoF.
a dynamic system attribute. Source code reliability specifications are
meaningless (e.g. N faults/1000 LOC). An appropriate metric should be chosen o It is relevant for systems when individual transactions take lots of
to specify the overall system reliability. processing time (e.g. CAD or WP systems).
HARDWARE RELIABILITY METRICS • Availability = MTBF / (MTBF+MTTR)
Hardware metrics are not suitable for software since its metrics are based on o MTBF = Mean Time Between Failure
notion of component failure. Software failures are often design failures. Often o MTTR = Mean Time to Repair
the system is available after the failure has occurred. Hardware components can
wear out. • Reliability = MTBF / (1+MTBF)
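A small sketch (with made-up numbers) of how the metrics above relate: PoFoD as a failure fraction, RoCoF as failures per operational time unit, MTTF as 1/RoCoF for stable systems, and availability as MTBF/(MTBF + MTTR). The helper names are illustrative, not a standard API.

```python
# Hypothetical illustration of the reliability metrics defined above.

def pofod(failed_requests, total_requests):
    # Probability of Failure on Demand: fraction of service requests that fail.
    return failed_requests / total_requests

def rocof(failures, time_units):
    # Rate of Occurrence of Failure: failures per operational time unit.
    return failures / time_units

def availability(mtbf, mttr):
    # Fraction of time the system is available for use.
    return mtbf / (mtbf + mttr)

print(pofod(1, 1000))                 # 0.001, as in the PoFoD example
print(rocof(2, 100))                  # 0.02, as in the RoCoF example
print(1 / rocof(2, 100))              # 50.0, since MTTF = 1/RoCoF
print(availability(mtbf=98, mttr=2))  # 0.98, i.e. up 98% of the time
```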
TIME UNITS BUILDING RELIABILITY SPECIFICATION
Time units include: The building of reliability specification involves consequences analysis of
possible system failures for each sub-system. From system failure analysis,
• Raw Execution Time which is employed in non-stop system
partition the failure into appropriate classes. For each class send out the
• Calendar Time is employed when the system has regular usage patterns appropriate reliability metric.
• Number of Transactions is employed for demand type transaction SPECIFICATION VALIDATION
systems
It is impossible to empirically validate high reliability specifications. No
AVAILABILITY database corruption really means PoFoD class < 1 in 200 million. If each
Availability measures the fraction of time the system is actually available for use. It transaction takes 1 second to verify, simulation of one day’s transactions takes
takes repair and restart times into account. It is relevant for non-stop 3.5 days.
continuously running systems (e.g. traffic signal). Software testing:
FAILURE CONSEQUENCES – STUDY 1 Software testing is an important process in the software development
Reliability does not take consequences into account. Transient faults have no lifecycle . It involves verifying and validating that a software application is
real consequences but other faults might cause data loss or corruption. Hence it free of bugs, meets the technical requirements set by
may be worthwhile to identify different classes of failure, and use different its design and development , and satisfies user requirements efficiently and
metrics for each. effectively.

FAILURE CONSEQUENCES – STUDY 2 This process ensures that the application can handle all exceptional and
boundary cases, providing a robust and reliable user experience. By
When specifying reliability both the number of failures and the consequences systematically identifying and fixing issues, software testing helps deliver high-
of each matter. Failures with serious consequences are more damaging than quality software that performs as expected in various scenarios.
those where repair and recovery is straightforward. In some cases, different
reliability specifications may be defined for different failure types. Software Testing is a method to assess the functionality of the software
program. The process checks whether the actual software matches the expected
FAILURE CLASSIFICATION requirements and ensures the software is bug-free. The purpose of software
Failure can be classified as the following testing is to identify the errors, faults, or missing requirements in contrast to
actual requirements. It mainly aims at measuring the specification, functionality,
• Transient – only occurs with certain inputs. and performance of a software program or application.
• Permanent – occurs with all inputs. Software testing can be divided into two steps
• Recoverable – system can recover without operator help. 1. Verification: It refers to the set of tasks that ensure that the software
• Unrecoverable – operator has to help. correctly implements a specific function. It means “Are we building the
product right?”.
• Non-corrupting – failure does not corrupt system state or data.
2. Validation: It refers to a different set of tasks that ensure that the
• Corrupting – system state or data are altered. software that has been built is traceable to customer requirements. It
means “Are we building the right product?”.
Different Types Of Software Testing Testing is used to re-run the test scenarios quickly and repeatedly, that
Explore diverse software testing methods were performed manually in manual testing.
including manual and automated testing for improved quality assurance .
Enhance software reliability and performance through functional and non- Apart from Regression testing , Automation testing is also used to test the
functional testing, ensuring user satisfaction. Learn about the significance of application from a load, performance, and stress point of view. It increases the
various testing approaches for robust software development. test coverage, improves accuracy, and saves time and money when compared
to manual testing.
Software Testing can be broadly classified into 3 types:
Different Types of Software Testing Techniques
1. Functional testing : It is a type of software testing that validates the
software systems against the functional requirements. It is performed to Software testing techniques can be majorly classified into two categories:
check whether the application is working as per the software’s
functional requirements or not. Various types of functional testing are 1. Black box Testing : Testing in which the tester doesn’t have access to
Unit testing, Integration testing, System testing, Smoke testing, and so the source code of the software and is conducted at the software
on. interface without any concern with the internal logical structure of the
software known as black-box testing.
2. Non-functional testing : It is a type of software testing that checks the
application for non-functional requirements like performance, 2. White box Testing : Testing in which the tester is aware of the internal
scalability, portability, stress, etc. Various types of non-functional workings of the product, has access to its source code, and is conducted
testing are Performance testing, Stress testing, Usability Testing, and so by making sure that all internal operations are performed according to
on. the specifications is known as white box testing.

3. Maintenance testing : It is the process of changing, modifying, and 3. Grey Box Testing : Testing in which the testers should have knowledge
updating the software to keep up with the customer’s needs. It of implementation, however, they need not be experts.
involves regression testing that verifies that recent changes to the code
have not adversely affected other previously working parts of the Software Testing can be broadly classified into 3 types:
software.
1. Functional testing : It is a type of software testing that validates the
Apart from the above classification software testing can be further divided into software systems against the functional requirements. It is performed to
2 more ways of testing: check whether the application is working as per the software’s
functional requirements or not. Various types of functional testing are
1. Manual testing : It includes testing software manually, i.e., without Unit testing, Integration testing, System testing, Smoke testing, and so
using any automation tool or script. In this type, the tester takes over the on.
role of an end-user and tests the software to identify any unexpected
behaviour or bug. There are different stages for manual testing such as 2. Non-functional testing : It is a type of software testing that checks the
unit testing, integration testing, system testing, and user acceptance application for non-functional requirements like performance,
testing. Testers use test plans, test cases, or test scenarios to test scalability, portability, stress, etc. Various types of non-functional
software to ensure the completeness of testing. Manual testing also testing are Performance testing, Stress testing, Usability Testing, and so
includes exploratory testing, as testers explore the software to identify on.
errors in it.
3. Maintenance testing : It is the process of changing, modifying, and
2. Automation testing : Also known as Test Automation, this is when the updating the software to keep up with the customer’s needs. It
tester writes scripts and uses another software to test the product. This involves regression testing that verifies that recent changes to the code
process involves the automation of a manual process. Automation
Different Levels of Software Testing find a set of linearly independent paths of execution. In this method,
Cyclomatic Complexity is used to determine the number of linearly
Software level testing can be majorly classified into 4 levels: independent paths and then test cases are generated for each path.
It gives complete branch coverage but achieves that without covering all
1. Unit testing : It a level of the software testing process where individual possible paths of the control flow graph. McCabe’s Cyclomatic
units/components of a software/system are tested. The purpose is to Complexity is used in path testing. It is a structural testing method that
validate that each unit of the software performs as designed. uses the source code of a program to find every possible executable
path.
2. Integration testing : It is a level of the software testing process where
individual units are combined and tested as a group. The purpose of this
level of testing is to expose faults in the interaction between integrated
units.

3. System testing : It is a level of the software testing process where a


complete, integrated system/software is tested. The purpose of this test
is to evaluate the system’s compliance with the specified requirements.

4. Acceptance testing : It is a level of the software testing process where a


system is tested for acceptability. The purpose of this test is to evaluate
the system’s compliance with the business requirements and assess
whether it is acceptable for delivery.

Benefits of Software Testing


• Control Flow Graph:
• Product quality: Testing ensures the delivery of a high-quality product Draw the corresponding control flow graph of the program in which all
as the errors are discovered and fixed early in the development cycle. the executable paths are to be discovered.

• Cyclomatic Complexity:
• Customer satisfaction: Software testing aims to detect the errors or
vulnerabilities in the software early in the development phase so that the After the generation of the control flow graph, calculate the cyclomatic
detected bugs can be fixed before the delivery of the product. Usability complexity of the program using the following formula .
testing is a type of software testing that checks the application for how
easily usable it is for the users to use the application.

• Cost-effective: Testing any project on time helps to save money and


time for the long term. If the bugs are caught in the early phases of
software testing, it costs less to fix those errors.

• Security: Security testing is a type of software testing that is focused on


testing the application for security vulnerabilities from internal or
external sources.
• Make Set:
Path Testing:
Make a set of all the paths according to the control flow graph and
Path Testing is a method that is used to design the test cases. In the
calculate cyclomatic complexity. The cardinality of the set is equal to
path testing method, the control flow graph of a program is designed to
the calculated cyclomatic complexity.
• Create Test Cases: 1. Condition Testing
Create a test case for each path of the set obtained in the above step.
2. Data Flow Testing
Path Testing Techniques
3. Loop Testing
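The cyclomatic-complexity calculation referred to above is commonly given as V(G) = E - N + 2P (edges, nodes, connected components of the control flow graph). A minimal sketch over a hypothetical control flow graph:

```python
# Hypothetical control flow graph for: if/else followed by a while loop.
# Nodes are numbered 0..5; each pair is a directed edge.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 3), (3, 5)]
num_nodes = 6

def cyclomatic_complexity(edges, num_nodes, num_components=1):
    # V(G) = E - N + 2P: the number of linearly independent paths,
    # and hence the number of test cases path testing generates.
    return len(edges) - num_nodes + 2 * num_components

print(cyclomatic_complexity(edges, num_nodes))  # 3 -> three independent paths
```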
• Control Flow Graph:
The program is converted into a control flow graph by representing the 1. Condition Testing: Condition testing is a test cased design method, which
code into nodes and edges. ensures that the logical condition and decision statements are free from errors.
The errors present in logical conditions can be incorrect boolean operators,
• Decision to Decision path: missing parenthesis in a booleans expression, error in relational operators,
The control flow graph can be broken into various Decision to Decision arithmetic expressions, and so on. The common types of logical conditions
paths and then collapsed into individual nodes. that are tested using condition testing are-

• Independent paths: 1. A relation expression, like E1 op E2 where ‘E1’ and ‘E2’ are arithmetic
An Independent path is a path through a Decision to Decision path expressions and ‘OP’ is an operator.
graph that cannot be reproduced from other paths by other methods. 2. A simple condition like any relational expression preceded by a NOT
(~) operator. For example, (~E1) where ‘E1’ is an arithmetic expression
Advantages of Path Testing and ‘~’ denotes the NOT operator.
3. A compound condition consists of two or more simple conditions,
1. The path testing method reduces the redundant tests. Boolean operator, and parenthesis. For example, (E1 & E2)|(E2 & E3)
where E1, E2, E3 denote arithmetic expression and ‘&’ and ‘|’ denote
2. Path testing focuses on the logic of the programs. AND or OR operators.
4. A Boolean expression consists of operands and a Boolean operator like
3. Path testing is used in test case design.
‘AND’, OR, NOT. For example, ‘A|B’ is a Boolean expression where
Disadvantages of Path Testing ‘A’ and ‘B’ denote operands and | denotes OR operator.

1. A tester needs to have a good understanding of programming knowledge 2. Data Flow Testing: The data flow test method chooses the test path of a
or code knowledge to execute the tests. program based on the locations of the definitions and uses all the
variables in the program. The data flow test approach is depicted as
2. The number of test cases increases when the code complexity increases. follows: suppose each statement in a program is assigned a unique
statement number and that the function cannot modify its parameters
3. It will be difficult to create a test path if the application has a high or global variables. For example, with S as its statement number.
complexity of code.
DEF (S) = {X | Statement S has a definition of X}
USE (S) = {X | Statement S has a use of X}
4. Some test paths may skip some of the conditions in the code. It
may not cover some conditions or scenarios if there is an error in If statement S is an if loop statement, then its DEF set is empty and its USE
the specific paths. set depends on the state of statement S. The definition of the variable X at
statement S is said to be live at statement S’ if there is a path from
Control structure testing: S to statement S’ that contains no other definition of X. A definition-use (DU)
chain of variable X has the form [X, S, S’], where S and S’ denote statement
Control structure testing is used to increase the coverage area by testing numbers, X is in DEF(S) and USE(S’), and the definition of X in statement S
various control structures present in the program. The different types of testing is live at statement S’. A simple data flow test approach requires that each DU
performed under control structure testing are as follows chain be covered at least once. This approach is known as the DU test
approach. The DU testing does not ensure coverage of all branches of a 2. Unstructured loops – This type of loops should be redesigned,
program. However, a branch is not guaranteed to be covered by DU testing whenever possible, to reflect the use of the structured
only in rare cases, such as an if-then-else in which the then part does not have a programming constructs.
definition of any variable in its later part and the else part is not present. Data
flow testing strategies are appropriate for choosing test paths of a program
containing nested if and loop statements.
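The DEF/USE sets and DU chains described above can be sketched for a toy three-statement program. The statement numbers and variables are hypothetical, and redefinitions that would kill a chain are ignored for brevity.

```python
# DEF(S) and USE(S) sets for a toy straight-line program:
#   1: x = input();  2: y = x * 2;  3: print(x + y)
program = {
    1: {"def": {"x"}, "use": set()},
    2: {"def": {"y"}, "use": {"x"}},
    3: {"def": set(), "use": {"x", "y"}},
}

def du_chains(program):
    # [X, S, S'] triples: X is defined at S and used at a later statement S'.
    # (Simplified: assumes no later redefinition of X kills the chain.)
    chains = []
    for s in sorted(program):
        for var in sorted(program[s]["def"]):
            for s2 in sorted(program):
                if s2 > s and var in program[s2]["use"]:
                    chains.append((var, s, s2))
    return chains

print(du_chains(program))  # [('x', 1, 2), ('x', 1, 3), ('y', 2, 3)]
```

A DU-test-adequate suite must cover each of these three chains at least once.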

3. Loop Testing: Loop testing is actually a white box testing technique. It


specifically focuses on the validity of loop construction. Following are the
types of loops.

1. Simple Loop – The following set of test can be applied to simple loops,
where the maximum allowable number through the loop is n.
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop where p<n.
5. Traverse the loop n-1, n, n+1 times.
2. Concatenated Loops – If loops are not dependent on each other,
concatenated loops can be tested using the approach used in simple loops. If Black Box Testing is an important part of making sure software works
the loops are interdependent, the steps are followed in nested loops. Black Box Testing is an important part of making sure software works
as it should. Instead of peeking into the code, testers check how the software
behaves from the outside, just like users would. This helps catch any issues or
bugs that might affect how the software works.
This simple guide gives you an overview of what Black Box Testing is all
about and why it matters in software development.
Black-box testing is a type of software testing in which the tester is not
concerned with the software’s internal knowledge or implementation details
but rather focuses on validating the functionality based on the provided
specifications or requirements.

1. Nested Loops – Loops within loops are called nested loops. When
testing nested loops, the number of tests increases as the level of nesting
increases. The steps for testing nested loops are as follows:
1. Start with the inner loop; set all other loops to minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops are tested.
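The simple-loop test values listed above can be generated mechanically. This sketch assumes a hypothetical loop bound n and a chosen intermediate pass count p < n:

```python
def simple_loop_test_counts(n, p=None):
    # Iteration counts to exercise: skip, once, twice, p < n, and n-1, n, n+1.
    if p is None:
        p = n // 2  # any value strictly between 2 and n-1 would do
    return [0, 1, 2, p, n - 1, n, n + 1]

print(simple_loop_test_counts(10))  # [0, 1, 2, 5, 9, 10, 11]
```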
Types Of Black Box Testing
The following are the several categories of black box testing: Nonfunctional Testing

1. Functional Testing • Non-functional testing is a software testing technique that checks the
non-functional attributes of the system.
2. Regression Testing
• Non-functional testing is defined as a type of software testing to check
3. Nonfunctional Testing (NFT) non-functional aspects of a software application.
• It is designed to test the readiness of a system as per nonfunctional
parameters which are never addressed by functional testing.

• Non-functional testing is as important as functional testing.
• Non-functional testing is also known as NFT. This testing is not
Functional Testing functional testing of software. It focuses on the software’s performance,
usability, and scalability.
• Functional testing is defined as a type of testing that verifies that each
function of the software application works in conformance with the Advantages of Black Box Testing
requirement and specification.
• The tester does not need to have more functional knowledge or
• This testing is not concerned with the source code of the application. programming skills to implement the Black Box Testing.
Each functionality of the software application is tested by providing
appropriate test input, expecting the output, and comparing the actual • It is efficient for implementing the tests in the larger system.
output with the expected output.
• Tests are executed from the user’s or client’s point of view.
• This testing focuses on checking the user interface, APIs, database,
security, client or server application, and functionality of the • Test cases are easily reproducible.
Application Under Test. Functional testing can be manual or
• It is used to find the ambiguity and contradictions in the functional
automated. It determines the system’s software functional requirements.
specifications.
Regression Testing
Disadvantages of Black Box Testing
• Regression Testing is the process of testing the modified parts of the
• There is a possibility of repeating the same tests while implementing the
code and the parts that might get affected due to the modifications to
ensure that no new errors have been introduced in the software after the testing process.
modifications have been made.
• Without clear functional specifications, test cases are difficult to
• Regression means the return of something and in the software field, it implement.
refers to the return of a bug. It ensures that the newly added code is
• It is difficult to execute the test cases because of complex inputs at
compatible with the existing code.
different stages of testing.
• In other words, a new software update has no impact on the
• Sometimes, the reason for the test failure cannot be detected.
functionality of the software. This is carried out after a system
maintenance operation and upgrades. • Some programs in the application are not tested.
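Regression testing as described above amounts to re-running a saved suite of known-good cases after every change. A minimal sketch, with a hypothetical function under test and expectations recorded before the latest modification:

```python
def apply_discount(price, percent):
    # Hypothetical function under test.
    return round(price * (1 - percent / 100), 2)

# Expected outputs saved from the previous, known-good version:
regression_suite = [
    ((100, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20, 50), 10.0),
]

for args, expected in regression_suite:
    assert apply_discount(*args) == expected, f"regression in {args}"
print("no regressions detected")
```

In practice such suites are kept in a test runner so that every change re-runs them automatically.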
• It does not reveal the errors in the control structure. 3. Boundary value analysis – Boundaries are very good places for errors to
occur. Hence, if test cases are designed for boundary values of the input
• Working with a large sample space of inputs can be exhaustive and domain then the efficiency of testing improves and the probability of finding
consumes a lot of time. errors also increases. For example – If the valid range is 10 to 100 then test for
10,100 also apart from valid and invalid inputs.
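For the valid range 10 to 100 mentioned above, boundary value analysis picks values on and around each boundary. A small sketch (the helper is illustrative, not a standard API):

```python
def boundary_values(lo, hi):
    # Just below, on, and just above each boundary, plus one nominal value.
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

print(boundary_values(10, 100))  # [9, 10, 11, 55, 99, 100, 101]
```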
Ways of Black Box Testing Done
4. Cause effect graphing – This technique establishes a relationship between
1. Syntax-Driven Testing – This type of testing is applied to systems that can logical input called causes with corresponding actions called the effect. The
be syntactically represented by some language. For example, language can be causes and effects are represented using Boolean graphs. The following steps
represented by context-free grammar. In this, the test cases are generated so are followed:
that each grammar rule is used at least once.
1. Identify inputs (causes) and outputs (effect).
2. Equivalence partitioning – It is often seen that many types of inputs work
similarly so instead of giving all of them separately we can group them and 2. Develop a cause-effect graph.
test only one input of each group. The idea is to partition the input domain of
the system into several equivalence classes such that each member of the class 3. Transform the graph into a decision table.
works similarly, i.e., if a test case in one class results in some error, other
members of the class would also result in the same error. 4. Convert decision table rules to test cases.

The technique involves two steps:

1. Identification of equivalence class – Partition any input domain into a


minimum of two sets: valid values and invalid values . For example, if
the valid range is 0 to 100 then select one valid input like 49 and one For example, in the following cause-effect graph:
invalid like 104.

2. Generating test cases – (i) To each valid and invalid class of input
assign a unique identification number. (ii) Write a test case covering all
valid and invalid test cases considering that no two invalid inputs mask
each other. To calculate the square root of a number, the equivalence
classes will be (a) Valid inputs:

• The whole number which is a perfect square-output will be an


integer.

• The entire number which is not a perfect square-output will be a


decimal number.

• Positive decimals

• Negative numbers(integer or decimal).

• Characters other than numbers like “a”,”!”,”;”, etc.
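One representative input per equivalence class listed above is enough to exercise the whole class. This sketch uses a hypothetical safe_sqrt wrapper as the function under test:

```python
import math

def safe_sqrt(x):
    # Hypothetical function under test: rejects invalid inputs.
    if not isinstance(x, (int, float)) or isinstance(x, bool) or x < 0:
        raise ValueError("invalid input")
    return math.sqrt(x)

# One representative per equivalence class:
assert safe_sqrt(49) == 7.0                 # perfect square -> integer result
assert safe_sqrt(50) != int(safe_sqrt(50))  # non-square -> decimal result
assert safe_sqrt(2.25) == 1.5               # positive decimal
for bad in (-4, "a"):                       # negative / non-numeric -> error
    try:
        safe_sqrt(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
print("all class representatives behaved as expected")
```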


Each column corresponds to a rule which will become a test case for testing. 1. Independent testing: Black box testing is performed by testers who are
So there will be 4 test cases. not involved in the development of the application, which helps to
ensure that testing is unbiased and impartial.
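The four cause-effect steps above can be sketched by enumerating a decision table for a hypothetical graph with causes C1, C2 and effect E1 = C1 AND C2. Each column (rule) of the table becomes one test case, giving the 4 test cases mentioned:

```python
import itertools

def decision_table(causes, effect):
    # Enumerate every combination of cause truth values; each row is a rule.
    rows = []
    for values in itertools.product([False, True], repeat=len(causes)):
        env = dict(zip(causes, values))
        rows.append((env, effect(env)))
    return rows

table = decision_table(["C1", "C2"], lambda env: env["C1"] and env["C2"])
for inputs, expected in table:
    print(inputs, "->", expected)
print(len(table), "test cases")  # 4 test cases
```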
5. Requirement-based testing – It includes validating the requirements given
in the SRS of a software system. 2. Testing from a user’s perspective: Black box testing is conducted
from the perspective of an end user, which helps to ensure that the
6. Compatibility testing – The test case results not only depends on the application meets user requirements and is easy to use.
product but is also on the infrastructure for delivering functionality. When the
infrastructure parameters are changed it is still expected to work properly. 3. No knowledge of internal code: Testers performing black box testing
Some parameters that generally affect the compatibility of software are: do not have access to the application’s internal code, which allows them
to focus on testing the application’s external behaviour and
1. Processor (Pentium 3, Pentium 4) and several processors. functionality.
2. Architecture and characteristics of machine (32-bit or 64-bit). 4. Requirements-based testing: Black box testing is typically based on
the application’s requirements, which helps to ensure that the
3. Back-end components such as database servers. application meets the required specifications.
4. Operating System (Windows, Linux, etc). 5. Different testing techniques: Black box testing can be performed using
various testing techniques, such as functional testing, usability testing,
Tools Used for Black Box Testing: acceptance testing, and regression testing.
1. Appium 6. Easy to automate: Black box testing is easy to automate using various
automation tools, which helps to reduce the overall testing time and
2. Selenium
effort.
3. Microsoft Coded UI
7. Scalability: Black box testing can be scaled up or down depending on
4. Applitools the size and complexity of the application being tested.

5. HP QTP . 8. Limited knowledge of application: Testers performing black box


testing have limited knowledge of the application being tested, which
What can be identified by Black Box Testing helps to ensure that testing is more representative of how the end users
will interact with the application.
1. Discovers missing functions, incorrect function & interface errors
Integration testing:
2. Discover the errors faced in accessing the database
Integration testing is the process of testing the interface between two
3. Discovers the errors that occur while initiating & terminating any software units or modules. It focuses on determining the correctness of the
functions. interface. The purpose of integration testing is to expose faults in the
interaction between integrated units. Once all the modules have been unit-
4. Discovers the errors in performance or behaviour of software. tested, integration testing is performed.

Features of black box testing What is Integration Testing?


Integration testing is a software testing technique that focuses on verifying the There are four types of integration testing approaches. Those approaches are
interactions and data exchange between different components or modules of a the following:
software application. The goal of integration testing is to identify any
problems or bugs that arise when different components are combined and
interact with each other. Integration testing is typically performed after unit
testing and before system testing. It helps to identify and resolve integration
issues early in the development cycle, reducing the risk of more severe and
costly problems later on.


1. Big-Bang Integration Testing

• It is the simplest integration testing approach, where all the modules are
combined and the functionality is verified after the completion of
individual module testing.

• In simple words, all the modules of the system are simply put together
and tested.

• This approach is practicable only for very small systems. If an error is


found during the integration testing, it is very difficult to localize the
• Integration testing can be done by picking module by module. This can error as the error may potentially belong to any of the modules being
be done so that there should be a proper sequence to be followed. integrated.

• So, debugging errors reported during Big Bang integration testing is


• And also if you don’t want to miss out on any integration scenarios then
you have to follow the proper sequence. very expensive to fix.

• Big-bang integration testing is a software testing approach in which all


• Exposing the defects is the major focus of the integration testing and the
time of interaction between the integrated units. components or modules of a software application are combined and
tested at once.
Why is Integration Testing Important?
• This approach is typically used when the software components have a
Integration testing is important because it verifies that individual
software modules or components work together correctly as a whole system. low degree of interdependence or when there are constraints in the
This ensures that the integrated software functions as intended and helps development environment that prevent testing individual components.
identify any compatibility or communication issues between different parts of
• The goal of big-bang integration testing is to verify the overall
the system. By detecting and resolving integration problems early, integration
functionality of the system and to identify any integration problems that
testing contributes to the overall reliability, performance, and quality of the
arise when the components are combined.
software product.
Integration test approaches
• While big-bang integration testing can be useful in some situations, it • This can lead to system failure and decreased user satisfaction.
can also be a high-risk approach, as the complexity of the system and
the number of interactions between components can make it difficult to 2. Bottom-Up Integration Testing
identify and diagnose problems.
In bottom-up testing, each module at lower levels are tested with higher
Advantages of Big-Bang Integration Testing modules until all modules are tested. The primary purpose of this integration
testing is that each subsystem tests the interfaces among various modules
• It is convenient for small systems. making up the subsystem. This integration testing uses test drivers to drive and
pass appropriate data to the lower-level modules.
• Simple and straightforward approach.
Advantages of Bottom-Up Integration Testing
• Can be completed quickly.
• In bottom-up testing, no stubs are required.
• Does not require a lot of planning or coordination.
• A principal advantage of this integration testing is that several disjoint
• May be suitable for small systems or projects with a low degree of subsystems can be tested simultaneously.
interdependence between components.
• It is easy to create the test conditions.
Disadvantages of Big-Bang Integration Testing
• Best for applications that uses bottom up design approach.
• There will be quite a lot of delay because you would have to wait for all
the modules to be integrated. • It is Easy to observe the test results.

• High-risk critical modules are not isolated and tested on priority since
all modules are tested at once.

• Not Good for long projects.


Disadvantages of Bottom-Up Integration Testing
• High risk of integration problems that are difficult to identify and
diagnose. • Driver modules must be produced.

• This can result in long and complex debugging and troubleshooting • In this testing, complexity arises when the system is made up of
efforts. a large number of small subsystems.

• This can lead to system downtime and increased development costs. • Until the final modules are created, no working model can be
represented.
• May not provide enough visibility into the interactions and data
exchange between components. 3. Top-Down Integration Testing

• This can result in a lack of confidence in the system’s stability and Top-down integration testing technique is used in order to simulate the
reliability. behaviour of the lower-level modules that are not yet integrated. In this
integration testing, testing takes place from top to bottom. First, high-level
• This can lead to decreased efficiency and productivity. modules are tested and then low-level modules and finally integrating the low-
level modules to a high level to ensure the system is working as intended.
• This may result in a lack of confidence in the development team.
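The drivers and stubs mentioned above are ordinary throwaway code. The sketch below, in Python with purely illustrative module and function names (none of them come from a specific system), shows a test driver exercising a finished low-level module, and a stub standing in for an unwritten low-level dependency of a high-level module:

```python
# Low-level module (finished): the unit under bottom-up testing.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Driver: a temporary caller that stands in for the missing
# higher-level module, feeding inputs and checking outputs.
def discount_driver():
    cases = [(100.0, 10, 90.0), (250.0, 0, 250.0), (80.0, 100, 0.0)]
    return all(apply_discount(p, pct) == want for p, pct, want in cases)

# High-level module (finished): the unit under top-down testing.
# It depends on a lower-level tax module that is not written yet,
# so the dependency is passed in as a callable.
def invoice_total(price, tax_lookup):
    """Compute a total using an injected lower-level dependency."""
    return round(price + tax_lookup(price), 2)

# Stub: a minimal stand-in for the unwritten tax module; it returns
# canned data so the high-level logic can be exercised now.
def tax_stub(price):
    return price * 0.10  # fixed placeholder rate

if __name__ == "__main__":
    print("driver checks pass:", discount_driver())             # True
    print("invoice via stub:", invoice_total(200.0, tax_stub))  # 220.0
```

Both helpers are discarded once the real neighbouring modules exist, which is why producing many of them counts as a cost of the bottom-up and top-down approaches.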
Advantages of Top-Down Integration Testing

• Separately debugged modules.

• Few or no drivers are needed.

• It is more stable and accurate at the aggregate level.

• Easier isolation of interface errors.

• Design defects can be found in the early stages.

Disadvantages of Top-Down Integration Testing

• Needs many stubs.

• Modules at the lower level are tested inadequately.

• It is difficult to observe the test output.

• It is difficult to design stubs.

4. Mixed Integration Testing

Mixed integration testing is also called sandwiched integration testing.
Mixed integration testing follows a combination of the top-down and
bottom-up testing approaches. In the top-down approach, testing can start
only after the top-level modules have been coded and unit tested. In the
bottom-up approach, testing can start only after the bottom-level modules
are ready. This sandwich or mixed approach overcomes this shortcoming of
the top-down and bottom-up approaches. It is also called hybrid
integration testing. Both stubs and drivers are used in mixed integration
testing.

Advantages of Mixed Integration Testing

• The mixed approach is useful for very large projects having several
sub-projects.

• This sandwich approach overcomes the shortcomings of both the top-down
and bottom-up approaches.

• Parallel tests can be performed in the top and bottom layers.

Disadvantages of Mixed Integration Testing

• Mixed integration testing requires a very high cost, because one part
follows a top-down approach while another part follows a bottom-up
approach.

• This integration testing cannot be used for smaller systems with huge
interdependence between different modules.

Applications of Integration Testing

1. Identify the components: Identify the individual components of your
application that need to be integrated. This could include the frontend,
backend, database, and any third-party services.

2. Create a test plan: Develop a test plan that outlines the scenarios and
test cases that need to be executed to validate the integration points
between the different components. This could include testing data flow,
communication protocols, and error handling.

3. Set up a test environment: Set up a test environment that mirrors the
production environment as closely as possible. This will help ensure that
the results of your integration tests are accurate and reliable.

4. Execute the tests: Execute the tests outlined in your test plan,
starting with the most critical and complex scenarios. Be sure to log any
defects or issues that you encounter during testing.

5. Analyze the results: Analyze the results of your integration tests to
identify any defects or issues that need to be addressed. This may involve
working with developers to fix bugs or make changes to the application
architecture.

6. Repeat testing: Once defects have been fixed, repeat the integration
testing process to ensure that the changes have been successful and that
the application still works as expected.

Test Cases For Integration Testing
• Interface Testing: Verify that data exchange between modules occurs
correctly. Validate input/output parameters and formats. Ensure proper
error handling and exception propagation between modules.

• Functional Flow Testing: Test end-to-end functionality by simulating
user interactions. Verify that user inputs are processed correctly and
produce expected outputs. Ensure seamless flow of data and control between
modules.

• Data Integration Testing: Validate data integrity and consistency
across different modules. Test data transformation and conversion between
formats. Verify proper handling of edge cases and boundary conditions.

• Dependency Testing: Test interactions between dependent modules. Verify
that changes in one module do not adversely affect others. Ensure proper
synchronization and communication between modules.

• Error Handling Testing: Validate error detection and reporting
mechanisms. Test error recovery and fault tolerance capabilities. Ensure
that error messages are clear and informative.

• Performance Testing: Measure system performance under integrated
conditions. Test response times, throughput, and resource utilization.
Verify scalability and concurrency handling between modules.

• Security Testing: Test access controls and permissions between
integrated modules. Verify encryption and data protection mechanisms.
Ensure compliance with security standards and regulations.

• Compatibility Testing: Test compatibility with external systems, APIs,
and third-party components. Validate interoperability and data exchange
protocols. Ensure seamless integration with different platforms and
environments.

Validation and System Testing:

At the end of integration testing, the software is completely assembled as
a package, interfacing errors have been uncovered and corrected, and now
validation testing is performed. Software validation is achieved through a
series of black-box tests that demonstrate conformity with requirements.

After each validation test case has been conducted, one of two possible
conditions exists:

1. The function or performance characteristics conform to specification
and are accepted, or

2. A deviation from specification is uncovered and a deficiency list is
created. A deviation or error discovered at this stage in a project can
rarely be corrected prior to scheduled delivery.

Alpha and Beta Testing:

It is virtually impossible for a software developer to foresee how the
customer will really use a program:

• Instructions for use may be misinterpreted.

• Strange combinations of data may be regularly used.

• Output that seemed clear to the tester may be unintelligible to a user
in the field.

When custom software is built for one customer, a series of acceptance
tests is conducted to enable the customer to validate all requirements. If
software is developed as a product to be used by many customers, it is
impractical to perform acceptance tests with each one. Alpha and beta
tests are used to uncover errors that only the end-user seems able to
find.

The Alpha test is conducted at the developer's site by a customer. The
software is used in a natural setting with the developer "looking over the
shoulder" of the user and recording errors and usage problems. Alpha tests
are conducted in a controlled environment.

The Beta test is conducted at one or more customer sites by the end-user
of the software. Unlike alpha testing, the developer is generally not
present. Therefore, the beta test is a "live" application of the software
in an environment that cannot be controlled by the developer. The customer
records all problems (real or imagined) that are encountered during beta
testing and reports these to the developer at regular intervals. As a
result of problems reported during beta tests, software engineers make
modifications and then prepare for release of the software product to the
entire customer base.

System Testing:
System testing is actually a series of different tests whose primary
purpose is to fully exercise the computer-based system. Although each test
has a different purpose, all work to verify that system elements have been
properly integrated and perform allocated functions.

System testing is basically performed by a testing team that is
independent of the development team, which helps to test the quality of
the system impartially.

System testing is carried out on the whole system in the context of either
system requirement specifications or functional requirement
specifications, or in the context of both. System testing tests the design
and behavior of the system and also the expectations of the customer.

Types of System Testing:

• Performance Testing: Performance Testing is a type of software testing
that is carried out to test the speed, scalability, stability and
reliability of the software product or application.

• Load Testing: Load Testing is a type of software testing which is
carried out to determine the behavior of a system or software product
under extreme load.

• Stress Testing: Stress Testing is a type of software testing performed
to check the robustness of the system under varying loads.

• Scalability Testing: Scalability Testing is a type of software testing
which is carried out to check the performance of a software application or
system in terms of its capability to scale up or scale down the number of
user requests.

Reverse Engineering:

Software Reverse Engineering is a process of recovering the design,
requirement specifications, and functions of a product from an analysis of
its code. It builds a program database and generates information from it.

What is Reverse Engineering?

Reverse engineering can extract design information from source code, but
the abstraction level, the completeness of the documentation, the degree
to which tools and a human analyst work together, and the directionality
of the process are highly variable.

Objective of Reverse Engineering:

1. Reducing Costs: Reverse engineering can help cut costs in product
development by finding replacements or cost-effective alternatives for
systems or components.

2. Analysis of Security: Reverse engineering is used in cybersecurity to
examine exploits, vulnerabilities, and malware. This helps security
experts understand threat mechanisms and develop practical defenses.

3. Integration and Customization: Through the process of reverse
engineering, developers can incorporate or modify hardware or software
components into pre-existing systems to improve their operation or tailor
them to meet particular needs.

4. Recovering Lost Source Code: Reverse engineering can be used to recover
the source code of a software application that has been lost or is
inaccessible or, at the very least, to produce a higher-level
representation of it.

5. Fixing Bugs and Maintenance: Reverse engineering can help find and
repair flaws or provide updates for systems for which the original source
code is either unavailable or inadequately documented.

Reverse Engineering Goals:

1. Cope with Complexity: Reverse engineering is a common tool used to
understand and control system complexity. It gives engineers the ability
to analyze complex systems and reveal details about their architecture,
relationships and design patterns.

2. Recover Lost Information: Reverse engineering seeks to retrieve as much
information as possible in situations where source code or documentation
is lost or unavailable. Rebuilding source code, analyzing data structures
and retrieving design details are a few examples of this.

3. Detect Side Effects: Understanding a system or component's behavior
requires analyzing its side effects. Unintended implications,
dependencies, and interactions that might not be obvious from the system's
documentation or original source code can be found with the use of reverse
engineering.

4. Synthesize Higher Abstractions: Abstracting low-level features in order
to build higher-level representations is a common practice in reverse
engineering. This abstraction makes communication and analysis easier by
facilitating a greater understanding of the system's functionality.

5. Facilitate Reuse: Reverse engineering can be used to find reusable
parts or modules in systems that already exist. By understanding the
functionality and architecture of a system, developers can extract and
repurpose components for use in other projects, improving efficiency and
decreasing development time.

Re-engineering:

Re-engineering is a process of software development that is done to
improve the maintainability of a software system. Re-engineering is the
examination and alteration of a system to reconstitute it in a new form.
This process encompasses a combination of sub-processes like reverse
engineering, forward engineering, reconstruction, etc.

What is Re-engineering?

Re-engineering, also known as software re-engineering, is the process of
analyzing, designing, and modifying existing software systems to improve
their quality, performance, and maintainability.

1. This can include updating the software to work with new hardware or
software platforms, adding new features, or improving the software's
overall design and architecture.

2. Software re-engineering, also known as software restructuring or
software renovation, refers to the process of improving or upgrading
existing software systems to improve their quality, maintainability, or
functionality.

3. It involves reusing existing software artifacts, such as code, design,
and documentation, and transforming them to meet new or updated
requirements.

Objective of Re-engineering

The primary goal of software re-engineering is to improve the quality and
maintainability of the software system while minimizing the risks and
costs associated with redeveloping the system from scratch. Software
re-engineering can be initiated for various reasons, such as:

1. To describe a cost-effective option for system evolution.

2. To describe the activities involved in the software maintenance
process.

3. To distinguish between software and data re-engineering and to explain
the problems of data re-engineering.

Overall, software re-engineering can be a cost-effective way to improve
the quality and functionality of existing software systems, while
minimizing the risks and costs associated with starting from scratch.

Process of Software Re-engineering

The process of software re-engineering involves the following steps:
1. Planning: The first step is to plan the re-engineering process, which
involves identifying the reasons for re-engineering, defining the scope,
and establishing the goals and objectives of the process.

2. Analysis: The next step is to analyze the existing system, including
the code, documentation, and other artifacts. This involves identifying
the system's strengths and weaknesses, as well as any issues that need to
be addressed.

3. Design: Based on the analysis, the next step is to design the new or
updated software system. This involves identifying the changes that need
to be made and developing a plan to implement them.

4. Implementation: The next step is to implement the changes by modifying
the existing code, adding new features, and updating the documentation and
other artifacts.

5. Testing: Once the changes have been implemented, the software system
needs to be tested to ensure that it meets the new requirements and
specifications.

6. Deployment: The final step is to deploy the re-engineered software
system and make it available to end-users.

Steps involved in Re-engineering

1. Inventory Analysis

2. Document Reconstruction

3. Reverse Engineering

4. Code Reconstruction

5. Data Reconstruction

6. Forward Engineering

Re-engineering Cost Factors

1. The quality of the software to be re-engineered.

2. The tool support available for re-engineering.

3. The extent of the required data conversion.

4. The availability of expert staff for re-engineering.

Advantages of Re-engineering

1. Reduced Risk: As the software already exists, the risk is lower than in
new software development. Development problems, staffing problems and
specification problems are among the many problems that may arise in new
software development.

2. Reduced Cost: The cost of re-engineering is less than the cost of
developing new software.
3. Revelation of Business Rules: As a system is re-engineered, business
rules that are embedded in the system are rediscovered.

4. Better Use of Existing Staff: Existing staff expertise can be
maintained and extended to accommodate new skills during re-engineering.

5. Improved Efficiency: By analyzing and redesigning processes,
re-engineering can lead to significant improvements in productivity,
speed, and cost-effectiveness.

6. Increased Flexibility: Re-engineering can make systems more adaptable
to changing business needs and market conditions.

7. Better Customer Service: By redesigning processes to focus on customer
needs, re-engineering can lead to improved customer satisfaction and
loyalty.

8. Increased Competitiveness: Re-engineering can help organizations become
more competitive by improving efficiency, flexibility, and customer
service.

9. Improved Quality: Re-engineering can lead to better quality products
and services by identifying and eliminating defects and inefficiencies in
processes.

10. Increased Innovation: Re-engineering can lead to new and innovative
ways of doing things, helping organizations to stay ahead of their
competitors.

11. Improved Compliance: Re-engineering can help organizations to comply
with industry standards and regulations by identifying and addressing
areas of non-compliance.

Disadvantages of Re-engineering

Major architectural changes or radical reorganizing of the system's data
management have to be done manually. A re-engineered system is not likely
to be as maintainable as a new system developed using modern software
engineering methods.

1. High Costs: Re-engineering can be a costly process, requiring
significant investments in time, resources, and technology.

2. Disruption to Business Operations: Re-engineering can disrupt normal
business operations and cause inconvenience to customers, employees and
other stakeholders.

3. Resistance to Change: Re-engineering can encounter resistance from
employees who may be resistant to change and uncomfortable with new
processes and technologies.

4. Risk of Failure: Re-engineering projects can fail if they are not
planned and executed properly, resulting in wasted resources and lost
opportunities.

5. Lack of Employee Involvement: Re-engineering projects that are not
properly communicated and do not involve employees may lead to a lack of
employee engagement and ownership, resulting in failure of the project.

6. Difficulty in Measuring Success: The success of re-engineering can be
difficult to measure, making it difficult to justify the cost and effort
involved.

7. Difficulty in Maintaining Continuity: Re-engineering can lead to
significant changes in processes and systems, making it difficult to
maintain continuity and consistency in the organization.

CASE Tools:

CASE tools are a set of software application programs which are used to
automate SDLC activities. CASE tools are used by software project
managers, analysts and engineers to develop software systems.

There are a number of CASE tools available to simplify various stages of
the Software Development Life Cycle, such as Analysis tools, Design tools,
Project management tools, Database Management tools and Documentation
tools, to name a few.

Use of CASE tools accelerates the development of a project to produce the
desired result and helps to uncover flaws before moving ahead with the
next stage in software development.

Components of CASE Tools

CASE tools can be broadly divided into the following parts based on their
use at a particular SDLC stage:
• Central Repository - CASE tools require a central repository, which can
serve as a source of common, integrated and consistent information. The
central repository is a central place of storage where product
specifications, requirement documents, related reports and diagrams, and
other useful management information are stored. The central repository
also serves as a data dictionary.

• Upper Case Tools - Upper CASE tools are used in the planning, analysis
and design stages of SDLC.

• Lower Case Tools - Lower CASE tools are used in the implementation,
testing and maintenance stages.

• Integrated Case Tools - Integrated CASE tools are helpful in all the
stages of SDLC, from requirement gathering to testing and documentation.

CASE tools can be grouped together if they have similar functionality,
process activities and the capability of being integrated with other
tools.

Project Management Tools

These tools are used for project planning, cost and effort estimation,
project scheduling and resource planning. Managers have to strictly comply
with every step mentioned in software project management during project
execution. Project management tools help in storing and sharing project
information in real-time throughout the organization. For example,
Creative Pro Office, Trac Project, Basecamp.

Analysis Tools

These tools help to gather requirements and automatically check for any
inconsistency or inaccuracy in the diagrams, data redundancies or
erroneous omissions. For example, Accept 360, Accompa, CaseComplete for
requirement analysis, Visible Analyst for total analysis.

Design Tools

These tools help software designers to design the block structure of the
software, which may further be broken down into smaller modules using
refinement techniques. These tools provide detailing of each module and
the interconnections among modules. For example, Animated Software Design.

Programming Tools

These tools consist of programming environments like an IDE (Integrated
Development Environment), in-built modules library and simulation tools.
These tools provide comprehensive aid in building the software product and
include features for simulation and testing. For example, Cscope to search
code in C, Eclipse.

Integration Testing Tools

Integration testing tools are used to test the interface between modules
and find the bugs; these bugs may happen because of the integration of
multiple modules. The main objective of these tools is to make sure that
the specific modules are working as per the client's needs. These tools
are used to construct integration testing suites.

Some of the most used integration testing tools are as follows:

o Citrus
o FitNesse
o TESSY
o Protractor
o Rational Integration Tester
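The tools listed above are full products; the underlying idea of building a suite of integration test cases around a module interface can be sketched with Python's standard unittest module. The two small modules here are hypothetical stand-ins, not examples from any of the listed tools:

```python
import unittest

# Two hypothetical modules whose interface is being integrated.
def parse_order(text):
    """Producer: turn 'item,qty' text into a dict."""
    item, qty = text.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def bill(order):
    """Consumer: expects the dict format produced by parse_order."""
    if order["qty"] <= 0:
        raise ValueError("quantity must be positive")
    return f"{order['item']} x {order['qty']}"

class OrderBillingIntegration(unittest.TestCase):
    def test_data_flows_between_modules(self):
        # Output of one module must be accepted by the next.
        self.assertEqual(bill(parse_order("pen, 3")), "pen x 3")

    def test_errors_propagate_across_interface(self):
        # A bad value produced upstream must surface as an error.
        with self.assertRaises(ValueError):
            bill(parse_order("pen, 0"))

if __name__ == "__main__":
    # Build and run the integration suite.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(
        OrderBillingIntegration)
    unittest.TextTestRunner().run(suite)
```

Dedicated tools add what this sketch lacks: test data management, reporting, and integration with build pipelines.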
Software Development Life Cycle (SDLC)

A software life cycle model (also termed a process model) is a pictorial
and diagrammatic representation of the software life cycle. A life cycle
model represents all the methods required to make a software product
transit through its life cycle stages.

The SDLC cycle represents the process of developing software. The SDLC
framework includes the following steps.

The stages of SDLC are as follows:

Stage 1: Communication and requirement analysis

In communication, the user requests software by meeting the service
provider. Requirement analysis is the most important and necessary stage
in SDLC.

The business analyst and project organizer set up a meeting with the
client to gather all the data, like what the customer wants to build, who
will be the end user, and what is the objective of the product. Before
creating a product, a core understanding or knowledge of the product is
very necessary.

Once the requirement is understood, the SRS (Software Requirement
Specification) document is created. The developers should thoroughly
follow this document, and it should also be reviewed by the customer for
future reference.

Stage 2: Feasibility study and system analysis

A rough plan and road map are made for the software by using algorithms
and models.

Stage 3: Designing the software

The next phase is about bringing together all the knowledge of
requirements, analysis, and design of the software project. This phase is
the product of the last two, like inputs from the customer, requirement
gathering and the blueprint of the software.

Stage 4: Developing the project

In this phase of SDLC, the actual development begins, and the program is
built. The implementation of the design begins with writing code.
Developers have to follow the coding guidelines described by their
management, and programming tools like compilers, interpreters, debuggers,
etc. are used to develop and implement the code.

Stage 5: Testing

After the code is generated, it is tested against the requirements to make
sure that the product solves the needs addressed and gathered during the
requirements stage.

During this stage, unit testing, integration testing, system testing and
acceptance testing are done.

Stage 6: Deployment

Once the software is certified, and no bugs or errors are stated, then it
is deployed.

Then, based on the assessment, the software may be released as it is or
with suggested enhancements in the object segment.

Stage 7: Maintenance

Once the client starts using the developed system, the real issues come up
and requirements need to be solved from time to time.

This procedure, where care is taken of the developed product, is known as
maintenance.

Different Software Models

Waterfall Model:

The Waterfall Model was the first process model to be introduced. It is
also referred to as a linear-sequential life cycle model or classic model.
It is very simple to understand and use. In a waterfall model, each phase
must be completed before the next phase can begin and there is no
overlapping in the
phases.

The Waterfall model is the earliest SDLC approach that was used for
software development.

The Waterfall model illustrates the software development process in a
linear sequential flow. This means that any phase in the development
process begins only if the previous phase is complete. In this waterfall
model, the phases do not overlap.

The Waterfall approach was the first SDLC model to be used widely in
Software Engineering to ensure the success of a project. In "The
Waterfall" approach, the whole process of software development is divided
into separate phases. In this Waterfall model, typically, the outcome of
one phase acts as the input for the next phase, sequentially.

The following illustration is a representation of the different phases of
the Waterfall Model.

The sequential phases in the Waterfall model are −

Requirement Gathering and Analysis − All possible requirements of the
system to be developed are captured in this phase and documented in a
requirement specification document.

System Design − The requirement specifications from the first phase are
studied in this phase and the system design is prepared. This system
design helps in specifying hardware and system requirements and helps in
defining the overall system architecture.

Implementation − With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is
referred to as Unit Testing.

Integration and Testing − All the units developed in the implementation
phase are integrated into a system after testing of each unit. Post
integration, the entire system is tested for any faults and failures.

Deployment of System − Once the functional and non-functional testing is
done, the product is deployed in the customer environment or released into
the market.

Maintenance − There are some issues which come up in the client
environment. To fix those issues, patches are released. Also, to enhance
the product, some better versions are released. Maintenance is done to
deliver these changes in the customer environment.

All these phases are cascaded to each other, in which progress is seen as
flowing steadily downwards (like a waterfall) through the phases. The next
phase is started only after the defined set of goals is achieved for the
previous phase and it is signed off, hence the name "Waterfall Model". In
this model, phases do not overlap.

Advantages:

Some of the major advantages of the Waterfall Model are as follows −

Simple and easy to understand and use.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well
understood.
It is disciplined in its approach.

Disadvantages:

No working software is produced until late in the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for projects where requirements are at a moderate to high
risk of changing. So, risk and uncertainty are high with this
process model.

Spiral Model:

The spiral model, initially proposed by Boehm, is a combination of the
waterfall and iterative models. Using the spiral model, the software is
developed in a series of incremental releases. Each phase in the spiral
model begins with a planning phase and ends with an evaluation phase.

The spiral model has four phases. A software project repeatedly passes
through these phases in iterations called spirals.

Planning phase

This phase starts with gathering the business requirements in the baseline
spiral. In the subsequent spirals, as the product matures, identification
of system requirements, subsystem requirements and unit requirements is
all done in this phase.

This phase also includes understanding the system requirements through
continuous communication between the customer and the system analyst. At
the end of the spiral, the product is deployed in the identified market.

Risk Analysis

Risk analysis includes identifying, estimating and monitoring the
technical feasibility and management risks, such as schedule slippage and
cost overrun. After testing the build, at the end of the first iteration,
the customer evaluates the software and provides feedback.

Engineering or Construct phase

The Construct phase refers to the production of the actual software
product at every spiral. In the baseline spiral, when the product is just
thought of and the design is being developed, a POC (Proof of Concept) is
developed in this phase to get customer feedback.

Evaluation Phase

This phase allows the customer to evaluate the output of the project to
date before the project continues to the next spiral.

A software project repeatedly passes through all these four phases.

Advantages:

Flexible model
Project monitoring is very easy and effective
Risk management
Easy and frequent feedback from users.

Disadvantages:

It doesn't work for smaller projects.
Risk analysis requires specific expertise.
It is a costly and complex model.
Project success is highly dependent on the risk analysis.

Prototype Model:

To overcome the disadvantages of the waterfall model, this model is
implemented with a special factor called a prototype. It is also known as
the revaluation model.

Step 1: Requirements gathering and analysis

A prototyping model starts with requirement analysis. In this phase, the
requirements of the system are defined in detail. During the process, the
users of the system are interviewed to know what their expectations from
the system are.
Step 2: Quick design

The second phase is a preliminary design or a quick design. In this stage, a


simple design of the system is created. However, it is not a complete design.
It gives a brief idea of the system to the user. The quick design helps in
developing the prototype.

Step 3: Build a Prototype

In this phase, an actual prototype is designed based on the information


gathered from quick design. It is a small working model of the required
Advantages:
system.
Users are actively involved in development. Therefore, errors can
Step 4: Initial user evaluation
be detected in the initial stage of the software development process.
In this stage, the proposed system is presented to the client for an initial Missing functionality can be identified, which helps to reduce the risk
evaluation. It helps to find out the strength and weakness of the working of failure as Prototyping is also considered as a risk reduction activity.
model. Comment and suggestion are collected from the customer and Helps team member to communicate effectively
provided to the developer. Customer satisfaction exists because the customer can feel the product
at a very early stage.
Step 5: Refining prototype
Disadvantages:
If the user is not happy with the current prototype, you need to refine the
Prototyping is a slow and time taking process.
prototype according to the user's feedback and suggestions.
The cost of developing a prototype is a total waste as the
This phase will not over until all the requirements specified by the user are prototype is ultimately thrown away.
met. Once the user is satisfied with the developed prototype, a final system is Prototyping may encourage excessive change requests.
developed based on the approved final prototype. After seeing an early prototype model, the customers may think that
the actual product will be delivered to him soon.
Step 6: Implement Product and Maintain The client may lose interest in the final product when he or she is
not happy with the initial prototype.
Once the final system is developed based on the final prototype, it is
thoroughly tested and deployed to production. The system undergoes routine
maintenance for minimizing downtime and prevent large-scale failures.
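The build-evaluate-refine loop of Steps 3–5 can be sketched as a simple feedback cycle. This is an illustrative sketch only: `build_prototype` and `get_user_feedback` are hypothetical stand-ins for the real development and review activities, not part of any actual tool.

```python
# Illustrative sketch of the prototyping cycle (Steps 3-5).
# All functions here are hypothetical stand-ins, not a real API.

def build_prototype(requirements):
    # Step 3: build a small working model covering the current requirements.
    return {"features": list(requirements)}

def get_user_feedback(prototype, expected):
    # Step 4: the user compares the prototype against expectations and
    # reports any missing features (an empty list means "satisfied").
    return [f for f in expected if f not in prototype["features"]]

def prototyping_cycle(initial_requirements, expected):
    requirements = list(initial_requirements)
    while True:
        prototype = build_prototype(requirements)
        missing = get_user_feedback(prototype, expected)
        if not missing:              # user is satisfied
            return prototype         # the approved final prototype
        requirements.extend(missing)  # Step 5: refine and repeat

# The approved prototype then guides development of the final system (Step 6).
final = prototyping_cycle(["login"], ["login", "search", "reports"])
print(sorted(final["features"]))  # ['login', 'reports', 'search']
```

The key point the loop captures is that refinement continues until the user's stated expectations are fully met, which is exactly the exit condition of Step 5.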
SDLC - V-Model

The V-model is an SDLC model in which the processes are executed
sequentially, in a V-shape. It is also known as the Verification and
Validation model.

The V-Model is an extension of the waterfall model and is based on the
association of a testing phase with each corresponding development stage.
This means that for every single phase in the development cycle, there is a
directly associated testing phase. This is a highly disciplined model, and
the next phase starts only after completion of the previous phase.

V-Model - Design

Under the V-Model, the testing phase corresponding to each development phase
is planned in parallel. So, there are Verification phases on one side of the
'V' and Validation phases on the other side. The Coding Phase joins the two
sides of the V-Model.

The following are the various phases of the Verification side of the V-model:

1. Business requirement analysis: This is the first step, where the product
requirements are understood from the customer's side. This phase involves
detailed communication to understand the customer's expectations and exact
requirements.
2. System Design: In this stage, system engineers analyze and interpret the
business of the proposed system by studying the user requirements document.
3. Architecture Design: The baseline in selecting the architecture is that
it should cover everything the system consists of, typically the list of
modules, brief functionality of each module, their interface relationships,
dependencies, database tables, architecture diagrams, technology details,
etc. Integration test planning is carried out in this phase.
4. Module Design: In the module design phase, the system is broken down into
small modules. The detailed design of the modules is specified, which is
known as Low-Level Design.
5. Coding Phase: After designing, the coding phase starts. Based on the
requirements, a suitable programming language is decided. There are some
guidelines and standards for coding. Before check-in to the repository, the
final build is optimized for better performance, and the code goes through
many code reviews to check its quality and performance.

The following are the various phases of the Validation side of the V-model:

1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during
the module design phase. These UTPs are executed to eliminate errors at the
code or unit level. A unit is the smallest entity which can exist
independently, e.g., a program module. Unit testing verifies that the
smallest entity functions correctly when isolated from the rest of the
code/units.
2. Integration Testing: Integration Test Plans are developed during the
Architecture Design Phase. These tests verify that units created and tested
independently can coexist and communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design
Phase. Unlike Unit and Integration Test Plans, System Test Plans are
composed by the client's business team. System testing ensures that the
expectations from the application are met.
4. Acceptance Testing: Acceptance testing is related to the business
requirement analysis part. It involves testing the software product in the
user's environment. Acceptance tests reveal compatibility problems with the
other systems available within the user's environment. They also discover
non-functional problems, such as load and performance defects, in the real
user environment.

When to use V-Model?

1. When the requirements are well defined and not ambiguous.
2. The V-shaped model should be used for small to medium-sized projects
where requirements are clearly defined and fixed.
3. The V-shaped model should be chosen when ample technical resources with
essential technical expertise are available.
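The Unit Testing phase described above can be illustrated with a minimal example. This is a hedged sketch: the `add` function is a hypothetical "unit", and Python's standard `unittest` module stands in for whatever test framework a project actually uses.

```python
# Minimal sketch of unit testing on the Validation side of the V-model.
# The `add` function is a hypothetical unit, not from any real project.
import unittest

def add(a, b):
    # The smallest independently testable entity (a "unit").
    return a + b

class AddUnitTests(unittest.TestCase):
    # A Unit Test Plan is written during the module design phase;
    # executing it verifies the unit in isolation from the rest of the code.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the unit tests and report the result.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(AddUnitTests)
)
print("unit tests passed:", result.wasSuccessful())
```

Integration, system and acceptance tests follow the same pattern at larger scopes: instead of a single function, they exercise groups of modules, the whole system, and the system in the user's environment, respectively.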

Advantage:

• Easy to understand.
• Testing activities like planning and test design happen well before
coding.
• This saves a lot of time, hence a higher chance of success over the
waterfall model.
• Avoids the downward flow of defects.
• Works well for small projects where requirements are easily understood.

Disadvantage:

• Very rigid and least flexible.
• Not good for a complex project.
• Software is developed during the implementation stage, so no early
prototypes of the software are produced.
• If any changes happen midway, then the test documents, along with the
requirement documents, have to be updated.
information.
SDLC - RAD Model

The RAD (Rapid Application Development) model is based on prototyping and
iterative development with no specific planning involved. The process of
writing the software itself involves the planning required for developing
the product.

Rapid Application Development focuses on gathering customer requirements
through workshops or focus groups, early testing of the prototypes by the
customer using an iterative concept, reuse of the existing prototypes
(components), continuous integration and rapid delivery.

RAD Model Design:

The RAD model distributes the analysis, design, build and test phases into a
series of short, iterative development cycles.

Following are the various phases of the RAD Model −

Business Modelling:
The business model for the product under development is designed in terms of
the flow of information and the distribution of information between various
business channels. A complete business analysis is performed to find the
information vital to the business, how it can be obtained, how and when the
information is processed, and what factors drive the successful flow of
information.

Data Modelling:
The information gathered in the Business Modelling phase is reviewed and
analyzed to form sets of data objects vital for the business. The attributes
of all data sets are identified and defined. The relations between these
data objects are established and defined in detail in relevance to the
business model.

Process Modelling:
The data object sets defined in the Data Modelling phase are converted to
establish the business information flow needed to achieve specific business
objectives as per the business model. The process model for any changes or
enhancements to the data object sets is defined in this phase. Process
descriptions for adding, deleting, retrieving or modifying a data object are
given.

Application Generation:
The actual system is built and coding is done by using automation tools to
convert the process and data models into actual prototypes.

Testing and Turnover:
The overall testing time is reduced in the RAD model, as the prototypes are
independently tested during every iteration. However, the data flow and the
interfaces between all the components need to be thoroughly tested with
complete test coverage. Since most of the programming components have
already been tested, this reduces the risk of any major issues.
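The Process Modelling idea above — a data object plus process descriptions for adding, deleting, retrieving and modifying it — can be sketched in a few lines. This is an illustrative sketch only: the `Customer` object and the in-memory store are hypothetical, not part of any real RAD toolset.

```python
# Sketch of a data object (Data Modelling) and its add/retrieve/modify/
# delete process descriptions (Process Modelling). Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Customer:
    # A data object with its attributes identified and defined.
    id: int
    name: str

store: dict[int, Customer] = {}  # stand-in for a database table

def add_customer(c: Customer) -> None:        # "adding" a data object
    store[c.id] = c

def retrieve_customer(cid: int) -> Customer:  # "retrieving"
    return store[cid]

def modify_customer(cid: int, name: str) -> None:  # "modifying"
    store[cid].name = name

def delete_customer(cid: int) -> None:        # "deleting"
    del store[cid]

add_customer(Customer(1, "Ada"))
modify_customer(1, "Ada Lovelace")
print(retrieve_customer(1).name)  # Ada Lovelace
delete_customer(1)
print(len(store))  # 0
```

In the Application Generation phase, RAD tools would generate this kind of CRUD code automatically from the process and data models rather than having developers write it by hand.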
Incremental model:

It is a process of software development where the requirements are divided
into multiple modules of the SDLC. Each module goes through the
requirements, design, implementation and testing phases, and this process
continues until the complete system is achieved.

The various phases of the Iterative model are as follows:

1. Requirement gathering & analysis: In this phase, requirements are
gathered from customers and checked by an analyst to see whether they can be
fulfilled. The analyst also checks whether the needs can be achieved within
budget. After all of this, the software team moves to the next phase.

2. Design: In the design phase, the team designs the software using
different diagrams, like the Data Flow diagram, activity diagram, class
diagram, state transition diagram, etc.

3. Implementation: In the implementation phase, the requirements are written
in the chosen coding language and transformed into computer programs, which
are called software.

4. Testing: After completing the coding phase, software testing starts using
different test methods. There are many test methods, but the most common are
the white box, black box, and grey box test methods.

5. Deployment: After completing all the phases, the software is deployed to
its work environment.

6. Review: In this phase, after product deployment, a review is performed to
check the behavior and validity of the developed product. If any errors are
found, the process starts again from requirement gathering.

7. Maintenance: In the maintenance phase, after deployment of the software
in the working environment, there may be some bugs or errors, or new updates
may be required. Maintenance involves debugging and adding new options.

Advantages:

1. Testing and debugging during a smaller iteration is easy.
2. Parallel development can be planned.
3. It easily adapts to the ever-changing needs of the project.
4. Risks are identified and resolved during an iteration.
5. Limited time is spent on documentation and extra time on designing.

Disadvantages:

1. It is not suitable for smaller projects.
2. The design can be changed again and again because of imperfect
requirements.
3. Requirement changes can cause budget overruns.
4. The project completion date is not confirmed because of changing
requirements.
