
Quick-Revision-Notes

Thursday, July 4, 2024 8:15 PM

What is Software Engineering?


The term “software engineering” combines two words:
Software: a collection of executable programs that perform some task.
Engineering: the disciplined activity of designing, developing, and testing the software, together with the methods and paradigms used to build it.

Layered Technology in Software Engineering


Software engineering is a fully layered technology: to develop software we need to move from one layer to the next. All the layers are connected, and each layer demands the fulfillment of the previous layer.

Layered technology is divided into four parts:


1. A quality focus: It defines the continuous process-improvement principles of software. It provides integrity, which means securing the software so that data can be accessed only by authorized persons and no outsider can reach it. It also focuses on maintainability and usability.
2. Process: It is the foundation or base layer of software engineering. It is the key that binds all the layers together and enables the development of software on or before the deadline. Process defines a framework that must be established for the effective delivery of software engineering technology. The software process covers all the activities, actions, and tasks required to carry out software development.

Process activities are listed below:-


• Communication: It is the first and foremost activity in the development of software. Communication is necessary to learn the actual demands of the client.
• Planning: It basically means drawing a map to reduce the complications of development.
• Modeling: In this activity, a model is created according to the client's needs for better understanding.
• Construction: It includes the coding and testing of the solution.
• Deployment: It includes the delivery of software to the client for evaluation and feedback.
3. Method: During the process of software development, the answers to all the “how-to-do” questions are given by the method. It carries the information for all the tasks, including communication, requirements analysis, design modeling, program construction, testing, and support.
4. Tools: Software engineering tools provide automated or semi-automated support for the process and the methods. Tools are integrated, which means information created by one tool can be used by another.

What is software?
Software is like the brain of an electronic device: a set of instructions that allows the user to control the device.
There is a wide variety of software on the market, such as programs installed on computers, mobile applications, or even the control system of a robot vacuum cleaner.
Without software, these devices are nothing more than electronic shells; without an operating system, a cell phone cannot even make a simple call.
Likewise, on a computer without an operating system such as Windows, Linux, or macOS, it is impossible to perform any task.
In most cases, software is installed in the device's storage, be it an HDD, an SSD, or a memory card.
Software can be acquired by downloading it from the developer's page, from stores such as Google Play and the App Store, or on physical media sold in shops.

Why Software Is Important?


Maximizes the Full Potential of Hardware
All the latest and greatest computer hardware in the world is useless if there is no software to drive it. Software is there to utilize hardware: from making sure that it runs efficiently to providing people with the latest functionality upgrades. Even older hardware benefits from improved software support, notably driver upgrades.

Provides an Intuitive Interface to Work With


Software is important in the sense that it can even dictate how we use our devices. How something looks, feels, and functions shapes how we interact with computers, and how efficiently and accurately we can use them. Nowadays, with the amazing progress of software, even the most complex of tasks can be accomplished through the simple input of information.

Boosts Efficiency and Productivity


Of course, among the most significant advantages that software has provided us is how it makes
everything more efficient. Accountants can now go through months’ worth of data and figures
without having to physically sift through stacks of papers. Utility bills, mortgages, and other finances
can be paid remotely.

Brief description about Software Myths


Software Myths:
Many experienced experts have seen myths, superstitions (false beliefs or interpretations), and misleading attitudes that create major problems for management and technical people. The types of software-related myths are listed below.

Types of Software Myths

(i) Management Myths:


Myth 1:
We have all the standards and procedures available for software development.
Fact:
• Software experts do not know all the requirements for the software development in advance.
• All existing standards and procedures are incomplete, because each new software development effort is based on a new and different problem.
Myth 2:
Adding the latest hardware will improve software development.
Fact:
• The latest hardware plays only a small role in standard software development; Computer-Aided Software Engineering (CASE) tools are more important than hardware for producing quality and productivity.
• Hence, hardware resources bought on this assumption are often misused.
Myth 3:
Adding more people and programmers to software development can help meet project deadlines (if the project is lagging behind).
Fact:
• If software is late, adding more people will merely make the problem worse. This
is because the people already working on the project now need to spend time
educating the newcomers, and are thus taken away from their work. The
newcomers are also far less productive than the existing software engineers, and
so the work put into training them to work on the software does not
immediately meet with an appropriate reduction in work.
(ii) Customer Myths:
The customer can be the direct user of the software, the technical team, the marketing/sales department, or another company. Customers hold myths that lead to false expectations and thereby create dissatisfaction with the developer.
Myth 1:
A general statement of intent is enough to start writing software, and the details of the objectives can be filled in over time.
Fact:
• A formal and detailed description of the information domain, function, performance, interfaces, design constraints, and validation criteria is essential.
• Unambiguous requirements (usually derived iteratively) are developed only through effective and continuous communication between customer and developer.
Myth 2:
Software requirements continually change, but change can be easily accommodated because software is flexible.
Fact:
• It is true that software requirements change, but the impact of change varies
with the time at which it is introduced. When requirements changes are
requested early (before design or code has been started), the cost impact is
relatively small. However, as time passes, the cost impact grows rapidly—
resources have been committed, a design framework has been established, and
change can cause upheaval that requires additional resources and major design
modification.

Different Stages of Myths

(iii)Practitioner’s Myths:
Myth 1:
They believe that their work is complete once the program has been written and works.
Fact:
• Industry data show that 60-80% of all effort is expended after the software is first delivered to the customer, i.e., in the maintenance phase.
Myth 2:
There is no way to assess the quality of a system until it is “running”.
Fact:
• A systematic technical review is an effective software quality verification method. Such reviews act as quality filters and can catch certain classes of errors more cheaply than testing.
Myth 3:
A working system is the only product that a successful project must deliver.
Fact:
• A working system is not enough; the right documents, brochures, and booklets are also required to provide guidance and software support.
Myth 4:
Software engineering will make us build voluminous, unnecessary documentation and will invariably slow us down.
Fact:
• Software engineering is not about creating documents. It is about creating a quality product. Better quality leads to reduced rework, and reduced rework results in faster delivery times.

Software Engineering Paradigms


Software paradigm refers to the methods and steps that are taken while designing the software. The programming paradigm is a subset of the software design paradigm, which is in turn a subset of the software development paradigm. Software is considered to be a collection of executable programming code, associated libraries, and documentation. The software development paradigm is also known as software engineering: all the engineering concepts pertaining to software development are applied. It consists of parts such as requirement gathering, software design, and programming. The software design paradigm is a part of software development and includes design, maintenance, and programming.
A software paradigm is a theoretical framework that serves as a guide for the development and structure of a software system. There are several software paradigms, including the following (a short code sketch contrasting the first three appears at the end of this section):
• Imperative paradigm: This is the most common paradigm and is based on the idea that a program is a set of instructions that tell a computer what to do. It is often used in languages such as C and C++.
• Object-oriented paradigm: This paradigm is based on the idea of objects, which
are self-contained units that contain both data and behavior. It is often used in
languages such as Java, C#, and Python.
• Functional paradigm: This paradigm is based on the idea that a program is a set
of mathematical functions that transform inputs into outputs. It is often used in
languages such as Haskell, Lisp, and ML.
• Logic paradigm: This paradigm is based on the idea that a program is a set of
logical statements that can be used to infer new information. It is often used in
languages such as Prolog and Mercury.
The Software Development Life Cycle (SDLC) is a process that software developers use to plan, design, develop, test, deploy, and maintain software systems. The most common SDLC models include:
• Waterfall model: This model is based on the idea that software development is a linear process, with each phase building on the previous one.
• Agile model: This model is based on the idea that software development is an iterative process, with small increments of working software delivered frequently.
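To make the first three paradigms concrete, here is a small illustrative sketch (written for these notes; the variable and class names are invented) that computes the sum of squares of a list in an imperative, an object-oriented, and a functional style:

```python
from functools import reduce

data = [1, 2, 3, 4]

# Imperative: explicit step-by-step instructions that mutate state.
total = 0
for x in data:
    total += x * x

# Object-oriented: data and behavior bundled in a self-contained object.
class SquareSummer:
    def __init__(self, values):
        self.values = values

    def sum_squares(self):
        return sum(v * v for v in self.values)

# Functional: a composition of side-effect-free functions.
functional_total = reduce(lambda acc, x: acc + x * x, data, 0)

print(total, SquareSummer(data).sum_squares(), functional_total)  # 30 30 30
```

All three produce the same result; the paradigms differ in how the computation is expressed, not in what is computed.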

Software Process Models


Software Process Model
A software process model is an abstraction of the actual process being described. It can also be defined as a simplified representation of a software process. Each model represents a process from a specific perspective.
Following are some basic software process models on which different types of software process models can be built:
1. A workflow Model : It is the sequential series of tasks and decisions that make
up a business process.
2. The Waterfall Model: It is a sequential design process in which progress is seen
as flowing steadily downwards.
• Phases in waterfall model:
○ Requirements Specification
○ Software Design
○ Implementation
○ Testing

3. Dataflow Model: It is a diagrammatic representation of the flow and exchange of information within a system.
4. Evolutionary Development Model: Following activities are considered in this
method:
• Specification
• Development
• Validation
5. Role / Action Model: Roles of the people involved in the software process and
the activities.
Need for Process Model
The software development team must decide the process model that is to be used for
software product development and then the entire team must adhere to it. This is
necessary because the software product development can then be done
systematically. Each team member will understand what is the next activity and how
to do it. Thus process model will bring the definiteness and discipline in overall
development process. Every process model consists of definite entry and exit criteria
for each phase. Hence the transition of the product through various phases is definite.
If a process model is not followed, any team member may perform any software development activity in any order; this ultimately causes chaos, and the software project will very likely fail. Without a process model it is also difficult to monitor the progress of the software product. Thus the process model plays an important role in software engineering.
Advantages or Disadvantages of Process Model
There are several advantages and disadvantages to different software development
methodologies, such as:
Waterfall
Advantages of waterfall model are:
1. Clear and defined phases of development make it easy to plan and manage the
project.
2. It is well-suited for projects with well-defined and unchanging requirements.
Disadvantages of waterfall model are:
1. Changes made to the requirements during the development phase can be costly
and time-consuming.
2. It can be difficult to know how long each phase will take, making it difficult to
estimate the overall time and cost of the project.
3. It does not have much room for iteration and feedback throughout the
development process.
Agile
Advantages of Agile Model are:
1. Flexible and adaptable to changing requirements.
2. Emphasizes rapid prototyping and continuous delivery, which can help to identify
and fix problems early on.
3. Encourages collaboration and communication between development teams and
stakeholders.
Disadvantages of Agile Model are:
1. It may be difficult to plan and manage a project using Agile methodologies, as
requirements and deliverables are not always well-defined in advance.
2. It can be difficult to estimate the overall time and cost of a project, as the
process is iterative and changes are made throughout the development.
Scrum
Advantages of Scrum are:
1. Encourages teamwork and collaboration.
2. Provides a flexible and adaptive framework for planning and managing software
development projects.
3. Helps to identify and fix problems early on by using frequent testing and
inspection.
Disadvantages of Scrum are:
1. A lack of understanding of Scrum methodologies can lead to confusion and
inefficiency.
2. It can be difficult to estimate the overall time and cost of a project, as the
process is iterative and changes are made throughout the development.
DevOps
Advantages of DevOps are:
1. Improves collaboration and communication between development and
operations teams.
2. Automates software delivery process, making it faster and more efficient.
3. Enables faster recovery and response time in case of issues.
Disadvantages of DevOps are:
1. Requires a significant investment in tools and technologies.
2. Can be difficult to implement in organizations with existing silos and lack of
culture of collaboration.
3. Need to have a skilled workforce to effectively implement the devops practices.
Ultimately, the choice of which methodology to use depends on the specific project and organization, as well as the goals and requirements of the project.

Sequential Model
What is the SDLC Waterfall Model?
The waterfall model is a software development model used in the context of large,
complex projects, typically in the field of information technology. It is characterized by
a structured, sequential approach to project management and software development.
The waterfall model is useful in situations where the project requirements are well-
defined and the project goals are clear. It is often used for large-scale projects with
long timelines, where there is little room for error and the project stakeholders need
to have a high level of confidence in the outcome.

Phases of SDLC Waterfall Model – Design


The Waterfall Model is a classical software development methodology that was first
introduced by Winston W. Royce in 1970. It is a linear and sequential approach to
software development that consists of several phases that must be completed in a
specific order.
The Waterfall Model has six phases which are:
1. Requirements: The first phase involves gathering requirements from stakeholders
and analyzing them to understand the scope and objectives of the project.
2. Design: Once the requirements are understood, the design phase begins. This
involves creating a detailed design document that outlines the software architecture,
user interface, and system components.
3. Development: The development phase involves implementing the design, i.e., coding the software based on the design specifications. This phase also includes unit testing to ensure that each component of the software works as expected.
4. Testing: In the testing phase, the software is tested as a whole to ensure that it
meets the requirements and is free from defects.
5. Deployment: Once the software has been tested and approved, it is deployed to
the production environment.
6. Maintenance: The final phase of the Waterfall Model is maintenance, which
involves fixing any issues that arise after the software has been deployed and
ensuring that it continues to meet the requirements over time.

Advantages of the SDLC Waterfall Model


The classical waterfall model is an idealistic model for software development. It is
very simple, so it can be considered the basis for other software development life
cycle models. Below are some of the major advantages of this SDLC model.
• Easy to Understand: The Classical Waterfall Model is very simple and easy to
understand.
• Individual Processing: Phases in the Classical Waterfall model are processed
one at a time.
• Properly Defined: In the classical waterfall model, each stage in the model is
clearly defined.
• Clear Milestones: The classical Waterfall model has very clear and well-
understood milestones.
• Properly Documented: Processes, actions, and results are very well
documented.
• Reinforces Good Habits: The Classical Waterfall Model reinforces good habits
like define-before-design and design-before-code.
• Working: Classical Waterfall Model works well for smaller projects and projects
where requirements are well understood.

Prototyping Model – Software Engineering


The Prototyping Model is one of the most popularly used Software Development Life
Cycle Models (SDLC models). This model is used when the customers do not know the
exact project requirements beforehand. In this model, a prototype of the end product
is first developed, tested, and refined as per customer feedback repeatedly till a final
acceptable prototype is achieved which forms the basis for developing the final
product.
In this process model, the system is partially implemented before or during the
analysis phase thereby allowing the customers to see the product early in the life
cycle. The process starts by interviewing the customers and developing the
incomplete high-level paper model. This document is used to build the initial
prototype supporting only the basic functionality as desired by the customer. Once
the customer figures out the problems, the prototype is further refined to eliminate
them. The process continues until the user approves the prototype and finds the
working model to be satisfactory.
Steps of Prototyping Model
Step 1: Requirement Gathering and Analysis: This is the initial step in designing a
prototype model. In this phase, users are asked about what they expect or what they
want from the system.
Step 2: Quick Design: This is the second step in the Prototyping Model. It covers the basic design of the requirements, through which a quick overview of the system can easily be conveyed.
Step 3: Build a Prototype: This step helps in building an actual prototype from the
knowledge gained from prototype design.
Step 4: Initial User Evaluation: This step describes the preliminary testing, in which the proposed model is investigated: the customer reports the strengths and weaknesses of the design, and this feedback is sent to the developer.
Step 5: Refining Prototype: The prototype is refined according to the user's feedback and suggestions; this cycle repeats until the user approves the final system.
Step 6: Implement Product and Maintain: This is the final step in the phase of the
Prototyping Model where the final system is tested and distributed to production,
here the program is run regularly to prevent failures.
Types of Prototyping Models
There are four types of Prototyping Models, which are described below.
• Rapid Throwaway Prototyping
• Evolutionary Prototyping
• Incremental Prototyping
• Extreme Prototyping
1. Rapid Throwaway Prototyping
• This technique offers a useful method of exploring ideas and getting customer
feedback for each of them.
• In this method, a developed prototype need not necessarily be a part of the
accepted prototype.
• Customer feedback helps prevent unnecessary design faults and hence, the final
prototype developed is of better quality.
2. Evolutionary Prototyping
• In this method, the prototype developed initially is incrementally refined based
on customer feedback till it finally gets accepted.
• In comparison to Rapid Throwaway Prototyping, it offers a better approach that
saves time as well as effort.
• This is because developing a prototype from scratch for every iteration of the
process can sometimes be very frustrating for the developers.
3. Incremental Prototyping
• In this type of incremental prototyping, the final expected product is broken into
different small pieces of prototypes and developed individually.
• In the end, when all individual pieces are properly developed, then the different
prototypes are collectively merged into a single final product in their predefined
order.
• It’s a very efficient approach that reduces the complexity of the development
process, where the goal is divided into sub-parts and each sub-part is developed
individually.
• The time interval between the project’s beginning and final delivery is
substantially reduced because all parts of the system are prototyped and tested
simultaneously.
• Of course, there is a possibility that the individual pieces do not fit together because of a lack of coordination in the development phase; this can only be avoided by careful and complete planning of the entire system before prototyping starts.
4. Extreme Prototyping
This method is mainly used for web development. It consists of three sequential phases:
• In the first phase, a basic prototype with all the existing static pages is presented in HTML format.
• In the 2nd phase, Functional screens are made with a simulated data process
using a prototype services layer.
• This is the final step where all the services are implemented and associated with
the final prototype.
The Extreme Prototyping method makes the project cycle and delivery robust and fast, and keeps the entire development team focused on product deliveries rather than on discovering all possible needs and specifications and adding every conceivable feature up front.

Rapid application development model (RAD)


The Rapid Application Development Model was first proposed by IBM in the 1980s.
The RAD model is a type of incremental process model in which there is an extremely
short development cycle. When the requirements are fully understood and the
component-based construction approach is adopted then the RAD model is used.
Various phases in RAD are Requirements Gathering, Analysis and Planning, Design,
Build or Construction, and finally Deployment.

The critical feature of this model is the use of powerful development tools and
techniques. A software project can be implemented using this model if the project
can be broken down into small modules wherein each module can be assigned
independently to separate teams. These modules can finally be combined to form the
final product. Development of each module involves the various basic steps as in the
waterfall model i.e. analyzing, designing, coding, and then testing, etc. as shown in the
figure. Another striking feature of this model is a short period i.e. the time frame for
delivery(time-box) is generally 60-90 days.
Multiple teams work on developing the software system using the RAD model
parallelly.

The use of powerful development tools and languages such as Java, C++, Visual Basic, and XML is also an integral part of these projects. This model consists of 4 basic phases:
1. Requirements Planning – This involves the use of various techniques used in
requirements elicitation like brainstorming, task analysis, form analysis, user
scenarios, FAST (Facilitated Application Development Technique), etc. It also
consists of the entire structured plan describing the critical data, methods to
obtain it, and then processing it to form a final refined model.
2. User Description – This phase consists of taking user feedback and building the
prototype using developer tools. In other words, it includes re-examination and
validation of the data collected in the first phase. The dataset attributes are also
identified and elucidated in this phase.
3. Construction – In this phase, refinement of the prototype and delivery takes
place. It includes the actual use of powerful automated tools to transform
processes and data models into the final working product. All the required
modifications and enhancements are to be done in this phase.
4. Cutover – All the interfaces between the independent modules developed by separate teams have to be tested properly. The use of powerful automated tools and reusable subparts makes testing easier. This is followed by acceptance testing by the user.
The process involves building a rapid prototype, delivering it to the customer, and
taking feedback. After validation by the customer, the SRS document is developed
and the design is finalized.
When to use the RAD Model?
-Well-understood requirements
-Time-sensitive projects with tight deadlines
-Small to medium-sized projects
-Projects needing innovation and creativity

Evolutionary Process Model


The evolutionary model is based on the concept of making an initial product and then evolving it over time with iterative and incremental approaches and proper feedback. In this type of model, the product goes through several iterations, and the final product emerges from these multiple iterations. Development is carried out alongside the collection of feedback. This model has a number of advantages, such as customer involvement, taking feedback from the customer during development, and building exactly the product that the user wants. Because of the multiple iterations, the chance of errors is reduced, and reliability and efficiency increase.

Evolutionary Model

Types of Evolutionary Process Models


1. Iterative Model
2. Incremental Model
3. Spiral Model
Iterative Model
In the iterative model, we first take the initial requirements and then enhance the product over multiple iterations until the final product is ready. In every iteration, some design modifications are made and some changes to the functional requirements are added. The main idea behind this approach is to build the final product through multiple iterations, so that it ends up almost exactly as the user wants, with fewer errors and with high performance and quality.
Iterative model

Incremental Model
In the incremental model, we first build the project with basic features and then
evolve the project in every iteration, it is mainly used for large projects. The first step
is to gather the requirements and then perform analysis, design, code, and test and
this process goes the same over and over again until our final project is ready.

Incremental Model

What is Spiral Model in Software Engineering?


What is the Spiral Model?
The Spiral Model is a Software Development Life Cycle (SDLC) model that provides a systematic and iterative approach to software development. In its diagrammatic representation, it looks like a spiral with many loops. The exact number of loops of the spiral is unknown and can vary from project to project. Each loop of the spiral is called a phase of the software development process.
Some Key Points regarding the phase of a Spiral Model:

1. The exact number of phases needed to develop the product can be varied by the
project manager depending upon the project risks.
2. As the project manager dynamically determines the number of phases, the
project manager has an important role in developing a product using the spiral
model.
3. It is based on the idea of a spiral, with each iteration of the spiral representing a
complete software development cycle, from requirements gathering and
analysis to design, implementation, testing, and maintenance.

What Are the Phases of the Spiral Model?


The Spiral Model is a risk-driven model, meaning that the focus is on managing risk
through multiple iterations of the software development process. It consists of the
following phases:
1. Planning: The first phase of the Spiral Model is the planning phase, where the
scope of the project is determined and a plan is created for the next iteration of
the spiral.
2. Risk Analysis: In the risk analysis phase, the risks associated with the project are
identified and evaluated.
3. Engineering: In the engineering phase, the software is developed based on the
requirements gathered in the previous iteration.
4. Evaluation: In the evaluation phase, the software is evaluated to determine if it
meets the customer’s requirements and if it is of high quality.
5. Planning: The next iteration of the spiral begins with a new planning phase,
based on the results of the evaluation.
The Spiral Model is often used for complex and large software development projects,
as it allows for a more flexible and adaptable approach to software development. It
is also well-suited to projects with significant uncertainty or high levels of risk.
The Radius of the spiral at any point represents the expenses (cost) of the project so
far, and the angular dimension represents the progress made so far in the current
phase.

Each phase of the Spiral Model is divided into four quadrants as shown in the
above figure. The functions of these four quadrants are discussed below:
1. Objectives determination and identify alternative solutions: Requirements are
gathered from the customers and the objectives are identified, elaborated, and
analyzed at the start of every phase. Then alternative solutions possible for the
phase are proposed in this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions
are evaluated to select the best possible solution. Then the risks associated with
that solution are identified and the risks are resolved using the best possible
strategy. At the end of this quadrant, the Prototype is built for the best possible
solution.
3. Develop the next version of the Product: During the third quadrant, the
identified features are developed and verified through testing. At the end of the
third quadrant, the next version of the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers
evaluate the so-far developed version of the software. In the end, planning for
the next phase is started.

Risk Handling in Spiral Model


A risk is any adverse situation that might affect the successful completion of a
software project. The most important feature of the spiral model is handling these
unknown risks after the project has started. Such risk resolutions are easier done by
developing a prototype.
Why Spiral Model is called Meta Model?
The Spiral model is called a Meta-Model because it subsumes all the other SDLC
models. For example, a single loop spiral actually represents the Iterative Waterfall
Model.

Advantages of the Spiral Model


-Risk Handling
-Good for big projects
-Flexibility in requirements
-customer satisfaction
-Improved communication

Component Assembly Model


The component-based assembly model uses object-oriented technologies. In object-oriented technologies, the emphasis is on the creation of classes: the entities that encapsulate data and algorithms. In component-based architecture, classes (i.e., the components required to build the application) can be used as reusable components. This model uses various characteristics of the spiral model and is evolutionary by nature; hence, software development can be done using an iterative approach. In the CBD model, multiple classes can be used; these classes are basically prepackaged components. The model works in the following manner (a small code sketch follows the steps):
• Step-1: First identify all the required candidate components, i.e., classes with the
help of application data and algorithms.
• Step-2: If these candidate components are used in previous software projects
then they must be present in the library.
• Step-3: Such preexisting components can be extracted from the library and used for further development.
• Step-4: But if the required component is not present in the library then build or
create the component as per requirement.
• Step-5: Place this newly created component in the library. This makes one
iteration of the system.
• Step-6: Repeat steps 1 to 5 for creating n iterations, where n denotes the
number of iterations required to develop the complete application.
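As a rough illustration of steps 1-5 (the library structure and the component name are invented for this sketch, not part of any standard), a reuse library can be modeled as a simple mapping from component names to ready-to-use components:

```python
# Hypothetical reuse library: component name -> ready-to-use component.
library = {}

def acquire_component(name, builder):
    """Steps 2-3: reuse the component if it already exists in the library;
    Steps 4-5: otherwise build it and place it in the library."""
    if name not in library:
        library[name] = builder()
    return library[name]

class InvoiceFormatter:  # an invented example component
    def format(self, amount):
        return f"Invoice total: {amount:.2f}"

formatter = acquire_component("InvoiceFormatter", InvoiceFormatter)
print(formatter.format(99.5))  # a second call would reuse the stored component
```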

Characteristics of Component Assembly Model:


• Uses object-oriented technology.
• Components and classes encapsulate both data and algorithms.
• Components are developed to be reusable.
• Paradigm similar to spiral model, but engineering activity involves components.
• The system produced by assembling the correct components.

Formal Methods
Formal methods are techniques used in computer science and software engineering to ensure the correctness of programs and to reduce the number of errors in them. They rely on mathematics and logic to model and analyze system behavior, making systems more reliable and secure.
In this section, we discuss how formal methods are useful in the field of software engineering.
Why do we use formal methods?
-Correctness and Reliability
-Early Error Detection
-Verification and Validation
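Full formal methods rely on tools such as model checkers and theorem provers. As a lightweight taste of the idea (executable assertions are not a complete formal method, but they state the specification in checkable form), the following sketch attaches a precondition, a loop invariant, and a postcondition to a small function:

```python
def integer_sqrt(n: int) -> int:
    """Largest r such that r*r <= n, with its specification asserted."""
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        assert r * r <= n  # loop invariant: r never overshoots n
    assert r * r <= n < (r + 1) * (r + 1), "postcondition"
    return r

print(integer_sqrt(17))  # 4
```

A proof tool would establish these properties for all inputs; assertions only check them for the inputs actually run, which is why they complement rather than replace formal verification.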

Fourth Generation Techniques


The term fourth generation techniques (4GT) encompasses a broad array of software tools that have one thing in common: each enables the software engineer to specify some characteristic of software at a high level. The tool then automatically generates source code based on the developer's specification. There is little debate that the higher the level at which software can be specified to a machine, the faster a program can be built. The 4GT paradigm for software engineering focuses on the ability to specify software using specialized language forms or a graphic notation that describes the problem to be solved in terms that the customer can understand. Currently, a software development environment that supports the 4GT paradigm includes some or all of the following tools: nonprocedural languages for database query, report generation, data manipulation, screen interaction and definition, and code generation; high-level graphics capability; spreadsheet capability; and automated generation of HTML and similar languages used for web-site creation using advanced software tools. Initially, many of these tools were available only for very specific application domains, but today 4GT environments have been extended to address most software application categories. Like other paradigms, 4GT begins with a requirements gathering step. Ideally, the customer would describe requirements, and these would be directly translated into an operational prototype, but this is unworkable. The customer may be unsure of what is required, may be ambiguous in specifying facts that are known, and may be unable or unwilling to specify information in a manner that a 4GT tool can consume.
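As a toy illustration of the 4GT idea (this generator is invented for these notes and is not a real 4GT product), the sketch below takes a high-level, declarative specification of a record and automatically generates the source code of a class from it:

```python
# Declarative specification: what the entity is, not how to code it.
SPEC = {"entity": "Customer", "fields": ["name", "email", "balance"]}

def generate_class(spec):
    """Automatically generate Python source code from the specification."""
    args = ", ".join(spec["fields"])
    body = "\n".join(f"        self.{f} = {f}" for f in spec["fields"])
    return (
        f"class {spec['entity']}:\n"
        f"    def __init__(self, {args}):\n"
        f"{body}\n"
    )

source = generate_class(SPEC)
print(source)   # inspect the generated source code
exec(source)    # bring the generated class into the program
print(Customer("Ada", "ada@example.com", 10.0).name)  # Ada
```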

UNIT – 2
Analysis principles – Analysis Modelling in Software
Engineering
Analysis Model is a technical representation of the system. It acts as a link between
the system description and the design model. In Analysis Modelling, information,
behavior, and functions of the system are defined and translated into the
architecture, component, and interface level design in the design modeling.

Objectives of Analysis Modelling


• Understanding Needs: The process of analysis modelling helps in the
understanding and extraction of user needs for the software system.
• Communication: Analysis models facilitate communication between users,
clients, developers, and testers, among other stakeholders.
• Clarifying Ambiguities: Analysis models assist in resolving requirements disputes
and providing clarification on unclear areas.
• Finding the Data Requirements: Analysis modelling assists in determining the
relationships, entities, and qualities of the data that the system needs.
• Defining Behavior: Analysis modelling aids in the definition of the system’s
dynamic behavior, including workflows, processes, and inter-component
interactions.
• System Boundary Identification: It is made easier by analysis modelling, which
helps in defining the parameters of the software system and its interactions with
users, other systems, and hardware components.
Elements of Analysis Model


1. Data Dictionary:
It is a repository that consists of a description of all data objects used or
produced by the software. It stores the collection of data present in the
software. It is a very crucial element of the analysis model. It acts as a
centralized repository and also helps in modeling data objects defined during
software requirements.

2. Entity Relationship Diagram (ERD):


It depicts the relationship between data objects and is used in conducting data
modeling activities. The attributes of each object in the Entity-Relationship
Diagram can be described using Data object description. It provides the basis for
activity related to data design.

3. Data Flow Diagram (DFD):


It depicts the functions that transform data flow, and it also shows how data is
transformed when moving from input to output. It provides the additional
information that is used during the analysis of the information domain and
serves as a basis for the modeling of function. It also enables the engineer to
develop models of functional and information domains at the same time.

4. State Transition Diagram:


It shows the various modes of behavior (states) of the system and the transitions from one state to another. It also provides details of how the system behaves as a consequence of external events: it represents the behavior of a system by presenting its states and the events that cause the system to change state, and it describes what actions are taken when a particular event occurs. (A minimal code sketch of a state transition table appears after this list.)

5. Process Specification:
It stores the description of each function present in the data flow diagram. It
describes the input to a function, the algorithm that is applied for the
transformation of input, and the output that is produced. It also shows
regulations and barriers imposed on the performance characteristics that apply
to the process and layout constraints that could influence how the process will
be implemented.

6. Control Specification:
It stores additional information about the control aspects of the software. It is
used to indicate how the software behaves when an event occurs and which
processes are invoked due to the occurrence of the event. It also provides the
details of the processes which are executed to manage events.

7. Data Object Description:


It stores and provides complete knowledge about a data object present and used
in the software. It also gives us the details of attributes of the data object
present in the Entity Relationship Diagram. Hence, it incorporates all the data
objects and their attributes.
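As a minimal illustration of the state transition idea (the workflow, states, and events below are invented for this example), a state transition table can be represented directly as a mapping from (state, event) pairs to next states:

```python
# Hypothetical state transition table for a document-approval workflow:
# (current_state, event) -> next_state
TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"): "draft",
}

def next_state(state, event):
    """Apply one external event; an invalid event leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "draft"
for event in ["submit", "reject", "submit", "approve"]:
    state = next_state(state, event)
print(state)  # approved
```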
Key Principles of Analysis Modelling
1. Abstraction: Analysis modelling involves separating important system
components from unneeded specifics. While leaving out unnecessary or low-
level information, it concentrates on capturing the essential ideas, behaviors,
and relationships relevant to the system’s requirements.
2. Modularity: Analysis models ought to be able to break down a system into
smaller, more manageable parts. It is simpler to understand, assess, and alter the
system when each module or component reflects a different part of its
functionality.
3. Consistency: Internally and with other project artifacts, including requirements
documents, design specifications, and implementation code, analysis models
should be consistent. By preventing opposing or conflicting representations of
the system, consistency promotes greater stakeholder comprehension and
alignment.
4. Traceability: Analysis models ought to be able to be linked to other project
components so that interested parties may follow requirements from their
inception to their execution. Throughout the software development lifecycle,
it helps with impact analysis, change management, and requirements coverage
verification.
5. Precision: To provide an unambiguous picture of the needs and behaviors of the
system, analysis models must be accurate and exact. Accuracy lowers the
chance of miscommunication and misunderstanding among stakeholders as well
as implementation problems.
6. Separation of Concerns: Analysis modeling divides various system components
or concerns into discrete representations. For instance, behavioral modeling
aims to capture the dynamic behavior of the system, whereas data modeling
concentrates on expressing the relationships and structure of data items.

Data Modification
Data Modification refers to the process of altering data within a database system.
It is a critical aspect of software development that ensures the accuracy,
consistency, and integrity of stored data. Data modification can be performed
through various operations such as:
• INSERT: Adding new records to a table.
• UPDATE: Modifying existing records with new data.
• DELETE: Removing records from a database.
Data modification commands must be used with caution to prevent data
corruption and should be managed within transactions to maintain data integrity.
This process is fundamental for dynamic applications that rely on persistent data
storage and manipulation.
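As a brief illustration using Python's standard sqlite3 module (the table and values are invented for this example), the three operations can be wrapped in a transaction so that either all changes persist or none do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")

try:
    with conn:  # transaction: commits on success, rolls back on any error
        conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
        conn.execute("UPDATE accounts SET balance = balance - 25 WHERE id = 1")
        conn.execute("DELETE FROM accounts WHERE balance <= 0")
except sqlite3.Error as exc:
    print("modification rolled back:", exc)

print(conn.execute("SELECT id, balance FROM accounts").fetchall())  # [(1, 75.0)]
```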

Functional modelling and Information Flow


modelling
In the functional model, software transforms information, and to accomplish this it must perform at least three generic tasks: input, processing, and output. When functional models of an application are created, the software engineer emphasizes problem-specific tasks. The functional model begins with a single context-level model (i.e., the system to be built viewed as a whole). In a series of iterations, more and more functional detail is added, until all system functionality is fully represented.
Information is transformed as it flows through a computer-based system. The system takes input in various forms; hardware, software, and human elements are applied to transform it; and it produces output in various forms. The transformation(s), or function, may be a single logical comparison, a complex numerical algorithm, or the rule-inference approach of an expert system. The output may light a single LED or produce a 200-page report. In this way, we can create a flow model for any computer-based system, regardless of size and complexity.
Structured analysis began as an information flow modelling technique. A computer-based system can be modeled as an information transform function, as shown in the figure.
A rectangle represents an external entity: a system element, such as hardware, a person, or another system, that provides information to be transformed by the software or receives information the software produces. A circle represents a process, transform, or function that is applied to data and changes it in some way. An arrow represents one or more data items.

All arrows in a DFD should be labeled. A double line represents a data store. There may be an implicit procedure or sequence in the diagram, but explicit logical details are generally deferred until software design.

Structured Analysis and Structured Design (SA/SD)


Structured Analysis and Structured Design (SA/SD) is a diagrammatic notation that
is designed to help people understand the system. The basic goal of SA/SD is to
improve quality and reduce the risk of system failure. It establishes concrete
management specifications and documentation. It focuses on the solidity, pliability,
and maintainability of the system.
Structured Analysis and Structured Design (SA/SD) is a software development method
that was popular in the 1970s and 1980s. The method is based on the principle of
structured programming, which emphasizes the importance of breaking down a
software system into smaller, more manageable components.
In SA/SD, the software development process is divided into two phases: Structured
Analysis and Structured Design. During the Structured Analysis phase, the problem to
be solved is analyzed and the requirements are gathered. The Structured Design phase
involves designing the system to meet the requirements that were gathered in the
Structured Analysis phase.
Structured Analysis and Structured Design (SA/SD) is a traditional software
development methodology that was popular in the 1980s and 1990s. It involves a
series of techniques for designing and developing software systems in a structured
and systematic way. Here are some key concepts of SA/SD:
1. Functional Decomposition: SA/SD uses functional decomposition to break down
a complex system into smaller, more manageable subsystems. This technique
involves identifying the main functions of the system and breaking them down
into smaller functions that can be implemented independently.
2. Data Flow Diagrams (DFDs): SA/SD uses DFDs to model the flow of data through
the system. DFDs are graphical representations of the system that show how
data moves between the system’s various components.
3. Data Dictionary: A data dictionary is a central repository that contains
descriptions of all the data elements used in the system. It provides a clear and
consistent definition of data elements, making it easier to understand how the
system works.
4. Structured Design: SA/SD uses structured design techniques to develop the
system’s architecture and components. It involves identifying the major
components of the system, designing the interfaces between them, and
specifying the data structures and algorithms that will be used to implement the
system.
5. Modular Programming: SA/SD uses modular programming techniques to break
down the system’s code into smaller, more manageable modules. This makes it
easier to develop, test, and maintain the system.

UNIT – 3

Software Project Planning


A software project is the complete procedure of software development, from requirement gathering to testing and maintenance, carried out according to defined execution methodologies in a specified period of time to achieve the intended software product.

Need of Software Project Management


Software development is a relatively new stream in world business, and there is very little prior experience in building software products. Most software products are customized to fit the customer's requirements. Most significantly, the underlying technology changes and advances so frequently and rapidly that experience with one product may not carry over to another. All such business and environmental constraints bring risk to software development; hence, it is essential to manage software projects efficiently.

Software project planning starts before technical work begins. The main planning activities, such as estimating project size and cost, are described in the sections below.

What is Project Size Estimation?


Project size estimation is determining the scope and resources required for the
project.
1. It involves assessing the various aspects of the project to estimate the effort,
time, cost, and resources needed to complete the project.
2. Accurate project size estimation is important for effective and efficient project
planning, management, and execution.
Importance of Project Size Estimation
Here are some of the reasons why project size estimation is critical in project
management:
1. Financial Planning: Project size estimation helps in planning the financial aspects
of the project, thus helping to avoid financial shortfalls.
2. Resource Planning: It ensures the necessary resources are identified and
allocated accordingly.
3. Timeline Creation: It facilitates the development of realistic timelines and
milestones for the project.
4. Identifying Risks: It helps to identify potential risks associated with overall
project execution.
5. Detailed Planning: It helps to create a detailed plan for the project execution,
ensuring all the aspects of the project are considered.
6. Planning Quality Assurance: It helps in planning quality assurance activities and
ensuring that the project outcomes meet the required standards.

Different Methods of Project Estimation


Expert Judgment: consulting practitioners experienced with similar projects, who estimate the size based on their judgment.
Analogous Estimation: estimating the project size based on the similarities between the current project and previously completed projects.
Bottom-up Estimation: In this technique, the project is divided into smaller modules or tasks, and each task is estimated separately. The estimates are then aggregated to arrive at the overall project estimate (a small sketch follows this list).
COCOMO (Constructive Cost Model): an algorithmic estimation model, discussed in detail in the next sections.
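A minimal sketch of bottom-up estimation follows (the module names and per-module figures are invented for illustration): each module is estimated separately and the estimates are summed.

```python
# Per-module effort estimates in person-months (illustrative numbers only).
module_estimates = {
    "user interface": 3.0,
    "business logic": 5.5,
    "database layer": 2.5,
    "testing and integration": 4.0,
}

total_effort = sum(module_estimates.values())
print(f"Bottom-up project estimate: {total_effort} person-months")  # 15.0
```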

Cost Estimation
Cost estimation simply means a technique that is used to compute cost estimates. The cost estimate is the financial spend on the effort to develop and test software in software engineering. Cost estimation models are mathematical algorithms or parametric equations that are used to estimate the cost of a product or a project. Various techniques or models, also known as cost estimation models, are available, as shown below:

Cost Estimation Models


1. Empirical Estimation Technique – Empirical estimation is a technique or model
in which empirically derived formulas are used for predicting the data that are a
required and essential part of the software project planning step. These
techniques are usually based on the data that is collected previously from a
project and also based on some guesses, prior experience with the development
of similar types of projects, and assumptions. It uses the size of the software to
estimate the effort. In this technique, an educated guess of project parameters is made; hence, these models are based on common sense. However, since many of the activities involved in empirical estimation have been formalized, the technique is systematic rather than arbitrary. Examples include the Delphi technique and the expert judgment technique.
2. Heuristic Technique – Heuristic word is derived from a Greek word that means
“to discover”. The heuristic technique is a technique or model that is used for
solving problems, learning, or discovery in the practical methods which are used
for achieving immediate goals. These techniques are flexible and simple for
taking quick decisions through shortcuts and good enough calculations, most
probably when working with complex data. But the decisions that are made
using this technique are necessary to be optimal. In this technique, the
relationship among different project parameters is expressed using
mathematical equations. The popular heuristic technique is given
by Constructive Cost Model (COCOMO). This technique is also used to increase
or speed up the analysis and investment decisions.
3. Analytical Estimation Technique – Analytical estimation is a type of technique
that is used to measure work. In this technique, firstly the task is divided or
broken down into its basic component operations or elements for analyzing.
Second, if the standard time is available from some other source, then these
sources are applied to each element or component of work. Third, if there is no
such time available, then the work is estimated based on the experience of the
work. In this technique, results are derived by making certain basic assumptions
about the project. Hence, the analytical estimation technique has some scientific
basis. Halstead’s software science is based on an analytical estimation model.

COCOMO Model
What is the COCOMO Model?
The COCOMO Model is a procedural cost estimate model for software projects and is
often used as a process of reliably predicting the various parameters associated with
making a project such as size, effort, cost, time, and quality. It was proposed by Barry
Boehm in 1981 and is based on the study of 63 projects, which makes it one of the
best-documented models.
The key parameters that define the quality of any software product, which are also
an outcome of COCOMO, are primarily effort and schedule:
1. Effort: Amount of labor that will be required to complete a task. It is measured
in person-months units.
2. Schedule: This simply means the amount of time required for the completion of
the job, which is, of course, proportional to the effort put in. It is measured in
the units of time such as weeks, and months.
Types of Projects in the COCOMO Model
In the COCOMO model, software projects are categorized into three types based on
their complexity, size, and the development environment. These types are:
1. Organic: A software project is said to be an organic type if the team size
required is adequately small, the problem is well understood and has been solved
in the past and also the team members have a nominal experience regarding the
problem.
2. Semi-detached: A software project is said to be of the semi-detached type if vital characteristics such as team size, experience, and knowledge of the various programming environments lie between organic and embedded. Projects classified as semi-detached are comparatively less familiar and more difficult to develop than organic ones, and they require more experience, better guidance, and creativity. E.g., compilers or various embedded systems can be considered semi-detached types.
3. Embedded: A software project requiring the highest level of complexity,
creativity, and experience requirement falls under this category. Such software
requires a larger team size than the other two models and also the developers
need to be sufficiently experienced and creative to develop such complex
models.
Comparison of these three types of Projects in COCOMO Model

Aspect          | Organic                     | Semidetached                                  | Embedded
Project Size    | 2 to 50 KLOC                | 50 to 300 KLOC                                | 300 KLOC and above
Complexity      | Low                         | Medium                                        | High
Team Experience | Highly experienced          | Mix of experienced and inexperienced staff    | Mixed experience, includes experts
Environment     | Flexible, fewer constraints | Somewhat flexible, moderate constraints       | Highly rigorous, strict requirements
Effort Equation | E = 2.4(400)^1.05           | E = 3.0(400)^1.12                             | E = 3.6(400)^1.20
Example         | Simple payroll system       | New system interfacing with existing systems | Flight control software
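As a worked illustration of the basic COCOMO equations in the table (using Boehm's published coefficients for effort E = a(KLOC)^b and development time T = c(E)^d; the 400 KLOC input mirrors the table's example), a small calculation might look like this:

```python
# Basic COCOMO coefficients (Boehm, 1981): project type -> (a, b, c, d)
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_type):
    """Return (effort in person-months, schedule in months)."""
    a, b, c, d = COEFFS[project_type]
    effort = a * kloc ** b   # E = a * (KLOC)^b
    time = c * effort ** d   # T = c * (E)^d
    return effort, time

effort, time = basic_cocomo(400, "organic")
print(f"Effort = {effort:.0f} PM, schedule = {time:.0f} months")
# Effort = 1295 PM, schedule = 38 months
```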
Detailed Structure of COCOMO Model
Detailed COCOMO incorporates all characteristics of the intermediate version with
an assessment of the cost driver’s impact on each step of the software engineering
process. The detailed model uses different effort multipliers for each cost driver
attribute. In detailed COCOMO, the whole software is divided into different modules
and then we apply COCOMO in different modules to estimate effort and then sum the
effort.
The six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost constructive model

Phases of COCOMO Model

Different models of COCOMO have been proposed to predict the cost estimate at different levels, based on the amount of accuracy and correctness required. All of these models can be applied to a variety of projects, whose characteristics determine the values of the constants to be used in subsequent calculations. These characteristics for the different system types, following Boehm's definitions of organic, semidetached, and embedded systems, are given in the comparison table above.
Importance of the COCOMO Model
1. Cost Estimation: To help with resource planning and project budgeting,
COCOMO offers a methodical approach to software development cost
estimation.
2. Resource Management: By taking team experience, project size, and complexity
into account, the model helps with efficient resource allocation.
3. Project Planning: COCOMO assists in developing practical project plans that
include attainable objectives, due dates, and benchmarks.
4. Risk management: Early in the development process, COCOMO assists in
identifying and mitigating potential hazards by including risk elements.
5. Support for Decisions: During project planning, the model provides a
quantitative foundation for choices about scope, priorities, and resource
allocation.
6. Benchmarking: To compare and assess various software development projects
to industry standards, COCOMO offers a benchmark.
7. Resource Optimization: The model helps to maximize the use of resources,
which raises productivity and lowers costs.
Types of COCOMO Model
There are three types of COCOMO Model:
• Basic COCOMO Model
• Intermediate COCOMO Model
• Detailed COCOMO Model
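To make the arithmetic concrete, here is a minimal Python sketch of the Basic COCOMO equations using Boehm's standard Basic-model coefficients; the 400 KLOC input mirrors the effort equations in the comparison table above, and the printed figures are illustrative estimates, not project data.

# Basic COCOMO: effort E = a * (KLOC)^b person-months,
# development time D = c * (E)^d months (Boehm's standard coefficients).
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # months
    staff = effort / schedule     # average head count
    return effort, schedule, staff

for mode in COEFFICIENTS:
    e, t, s = basic_cocomo(400, mode)
    print(f"{mode}: effort={e:.0f} PM, schedule={t:.1f} months, staff={s:.0f}")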
Advantages of the COCOMO Model
1. Systematic cost estimation: Provides a systematic way to estimate the cost and
effort of a software project.
2. Helps to estimate cost and effort: This can be used to estimate the cost and
effort of a software project at different stages of the development process.
3. Helps in high-impact factors: Helps in identifying the factors that have the
greatest impact on the cost and effort of a software project.
4. Helps to evaluate the feasibility of a project: This can be used to evaluate the
feasibility of a software project by estimating the cost and effort required to
complete it.
Disadvantages of the COCOMO Model
1. Assumes project size as the main factor: Assumes that the size of the software
is the main factor that determines the cost and effort of a software project,
which may not always be the case.
2. Does not count development team-specific characteristics: Does not take into
account the specific characteristics of the development team, which can have a
significant impact on the cost and effort of a software project.
3. Imprecise cost and effort estimates: It does not provide a precise estimate of the cost and effort of a software project, as it is based on assumptions and averages.
Putnam Resource Allocation Model
The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size. Putnam makes use of the so-called Norden/Rayleigh curve to estimate project effort, schedule, and defect rate.
Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observation about productivity levels to derive the software equation:
L = Ck * K^(1/3) * td^(4/3)
The various terms of this expression are as follows:
K is the total effort expended (in PM) in product development, and L is the product size estimate in KLOC.
td corresponds to the time of system and integration testing; therefore, td can reasonably be taken as the time required for developing the product.
Ck is the state-of-technology constant and reflects the constraints that impede the progress of the program.
Typical values of Ck: Ck = 2 for a poor development environment; Ck = 8 for a good software development environment; Ck = 11 for an excellent environment (in addition to following software engineering principles, automated tools and techniques are used).
The exact value of Ck for a specific task can be computed from the historical data of the organization developing it.
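Rearranging the software equation gives a quick way to estimate effort once size, delivery time, and the technology constant are known. The following is a minimal Python sketch under the units stated above (K in PM, L in KLOC); the input numbers are purely illustrative, and the absolute result depends entirely on how Ck has been calibrated from the organization's historical data.

def putnam_effort(kloc, td, ck):
    # Solve L = Ck * K^(1/3) * td^(4/3) for the total effort K:
    # K = (L / (Ck * td^(4/3)))^3
    return (kloc / (ck * td ** (4.0 / 3.0))) ** 3

# Illustrative call only: 100 KLOC, delivery time 12, good environment (Ck = 8).
print(putnam_effort(100, 12, 8))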
Putnam proposed that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers is needed at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work becomes necessary, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.

Risk Mitigation, Monitoring, and Management


(RMMM) plan
A risk management technique is usually seen in the software Project plan. This can be
divided into Risk Mitigation, Monitoring, and Management Plan (RMMM). In this plan,
all works are done as part of risk analysis. As part of the overall project plan project
manager generally uses this RMMM plan.
In some software teams, risk is documented with the help of a Risk Information Sheet (RIS). The RIS is managed using a database system for easier handling of the information, i.e., creation, priority ordering, searching, and other analysis. After documentation of the RMMM plan and the start of the project, the risk mitigation and monitoring steps begin.
Risk Mitigation :
It is an activity used to avoid problems (Risk Avoidance).
Steps for mitigating the risks as follows.
1. Finding out the risk.
2. Removing causes that are the reason for risk creation.
3. Controlling the corresponding documents from time to time.
4. Conducting timely reviews to speed up the work.
Risk Monitoring :
It is an activity used for project tracking.
It has the following primary objectives as follows.
1. To check if predicted risks occur or not.
2. To ensure proper application of risk aversion steps defined for risk.
3. To collect data for future risk analysis.
4. To attribute which problems are caused by which risks throughout the project.
Risk Management and Planning :
It assumes that the mitigation activity failed and the risk has become a reality. This task is done by the project manager when a risk becomes a reality and causes severe problems. If the project manager effectively uses mitigation to remove risks successfully, then managing the remaining risks becomes easier. The plan documents the response that will be taken for each risk by the manager. The main output of risk management planning is the risk register, which describes and focuses on the predicted threats to a software project.
Project Schedule Tracking
Project Planning is an important activity performed by Project Managers. Project
Managers can use the tools and techniques to develop, monitor, and control project
timelines and schedules. The tracking tools can automatically produce a pictorial
representation of the project plan. These tools also instantly update time plans as
soon as new information is entered and produce automatic reports to control the
project. Scheduling tools also cover task breakdown and risk management, with greater accuracy and ease of monitoring through reports. They also provide a good GUI for communicating effectively with the stakeholders of the project.
Features of Project Scheduling Tools
• Time management: Project scheduling tools keep projects running the way they are planned. There will be proper time management and better scheduling of the tasks.
• Resource allocation: It provides the resources required for project development.
There will be proper resource allocation and it helps to make sure that proper
permissions are given to different individuals involved in the project. It helps to
monitor and control all resources in the project.
• Team collaboration: The project scheduling tool improves team collaboration
and communication. It helps to make it easy to comment and chat within the
platform without relying on external software.
• User-friendly interface: Good project scheduling tools are designed to be more
user-friendly to enable teams to complete projects in a better and more efficient
way.
Top 10 Project Scheduling Tools
1. Microsoft Project
2. Daily Activity Reporting and Tracking (DART)
3. Monday.com
4. ProjectManager.com
5. SmartTask
6. ProofHub
7. Asana
8. Wrike
9. GanttPRO
10. Zoho Projects
UNIT – 4
Types of Manual Testing
1. White Box Testing
2. Black Box Testing
3. Gray Box Testing
1. White Box Testing
White box testing techniques analyze the internal structures: the data structures used, the internal design, the code structure, and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing, or structural testing. White Box Testing is also known as transparent testing or open box testing.
White box testing is a software testing technique that involves testing the internal
structure and workings of a software application. The tester has access to the source
code and uses this knowledge to design test cases that can verify the correctness of
the software at the code level.
Advantages of White box Testing:
• Thorough Testing: White box testing is thorough as the entire code and
structures are tested.
• Code Optimization: It results in the optimization of code removing errors and
helps in removing extra lines of code.
• Early Detection of Defects: It can start at an earlier stage as it doesn’t require
any interface as in the case of black box testing.
• Integration with SDLC: White box testing can be easily started in the Software
Development Life Cycle.
• Detection of Complex Defects: Testers can identify defects that cannot be
detected through other testing techniques.
2. Black Box Testing
Black-box testing is a type of software testing in which the tester is not concerned
with the internal knowledge or implementation details of the software but rather
focuses on validating the functionality based on the provided specifications or
requirements.
Advantages of Black Box Testing:
• The tester does not need to have more functional knowledge or programming
skills to implement the Black Box Testing.
• It is efficient for implementing the tests in the larger system.
• Tests are executed from the user’s or client’s point of view.
• Test cases are easily reproducible.
• It is used to find the ambiguity and contradictions in the functional
specifications.
3. Gray Box Testing
Gray Box Testing is a software testing technique that is a combination of the Black
Box Testing technique and the White Box Testing technique.
1. In the Black Box Testing technique, the tester is unaware of the internal
structure of the item being tested and in White Box Testing the internal structure
is known to the tester.
2. The internal structure is partially known in Gray Box Testing.
3. This includes access to internal data structures and algorithms to design the test
cases.
Advantages of Gray Box Testing:
1. Clarity of goals: Users and developers have clear goals while doing testing.
2. Done from a user perspective: Gray box testing is mostly done from the user
perspective.
3. High programming skills not required: Testers are not required to have high
programming skills for this testing.
4. Non-intrusive: Gray box testing is non-intrusive.
5. Improved product quality: Overall quality of the product is improved.
Types of Black Box Testing
1. Functional Testing
2. Non-Functional Testing
1. Functional Testing
Functional Testing is a type of Software Testing in which the system is tested against
the functional requirements and specifications. Functional testing ensures that the
requirements or specifications are properly satisfied by the application. This type of
testing is particularly concerned with the result of processing. It focuses on the
simulation of actual system usage but does not develop any system structure
assumptions.
Benefits of Functional Testing
• Bug-free product: Functional testing ensures the delivery of a bug-free and high-
quality product.
• Customer satisfaction: It ensures that all requirements are met and ensures that
the customer is satisfied.
• Testing focused on specifications: Functional testing is focused on
specifications as per customer usage.
• Proper working of application: This ensures that the application works as
expected and ensures proper working of all the functionality of the application.
• Improves quality of the product: Functional testing ensures the security and
safety of the product and improves the quality of the product.
2. Non-Functional Testing
Non-functional Testing is a type of Software Testing that is performed to verify the
non-functional requirements of the application. It verifies whether the behavior of
the system is as per the requirement or not. It tests all the aspects that are not tested
in functional testing. Non-functional testing is a software testing technique that
checks the non-functional attributes of the system. Non-functional testing is defined
as a type of software testing to check non-functional aspects of a software
application. It is designed to test the readiness of a system as per nonfunctional
parameters which are never addressed by functional testing. Non-functional testing is
as important as functional testing.
Benefits of Non-functional Testing
• Improved performance: Non-functional testing checks the performance of the
system and determines the performance bottlenecks that can affect the
performance.
• Less time-consuming: Non-functional testing is overall less time-consuming than the other testing processes.
• Improves user experience: Non-functional testing like usability testing checks how easily usable and user-friendly the software is for the users, thus focusing on improving the overall user experience for the application.
• More secure product: Non-functional testing specifically includes security testing, which checks the security bottlenecks of the application and how secure the application is against attacks from internal and external sources.
Types of Functional Testing
1. Unit Testing
2. Integration Testing
3. System Testing
1. Unit Testing
Unit testing is a method of testing individual units or components of a software
application. It is typically done by developers and is used to ensure that the individual
units of the software are working as intended. Unit tests are usually automated and
are designed to test specific parts of the code, such as a particular function or
method. Unit testing is done at the lowest level of the software development
process, where individual units of code are tested in isolation.
Advantages of Unit Testing:
Some of the advantages of Unit Testing are listed below.
• It helps to identify bugs early in the development process before they become
more difficult and expensive to fix.
• It helps to ensure that changes to the code do not introduce new bugs.
• It makes the code more modular and easier to understand and maintain.
• It helps to improve the overall quality and reliability of the software.
Note: Some popular frameworks and tools that are used for unit testing
include JUnit, NUnit, and xUnit.
• It’s important to keep in mind that Unit Testing is only one aspect of software
testing and it should be used in combination with other types of testing such as
integration testing, functional testing, and acceptance testing to ensure that the
software meets the needs of its users.
• It focuses on the smallest unit of software design. In this, we test an individual
unit or group of interrelated units. It is often done by the programmer by using
sample input and observing its corresponding outputs.
Example:
1. In a program we are checking if the loop, method, or function is working fine.
2. Misunderstood or incorrect arithmetic precedence.
3. Incorrect initialization.
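To make this concrete, here is a minimal sketch of a unit test written with Python's built-in unittest framework (the same idea as the JUnit/NUnit/xUnit tools mentioned above); the divide function is a hypothetical unit under test, not from any particular project.

import unittest

def divide(a, b):
    # Hypothetical unit under test: division with explicit error handling.
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_case(self):
        # Sample input with its expected output, as described above.
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_raises(self):
        # Boundary case: invalid input must fail loudly, not silently.
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()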
2. Integration Testing
Integration testing is a method of testing how different units or components of a
software application interact with each other. It is used to identify and resolve any
issues that may arise when different units of the software are combined. Integration
testing is typically done after unit testing and before functional testing and is used to
verify that the different units of the software work together as intended.
Different Ways of Performing Integration Testing:
Different ways of Integration Testing are discussed below.
• Top-down integration testing: It starts with the highest-level modules and progressively integrates them with lower-level modules.
• Bottom-up integration testing: It starts with the lowest-level modules and
integrates them with higher-level modules.
• Big-Bang integration testing: It combines all the modules and integrates them all
at once.
• Incremental integration testing: It integrates the modules in small groups, testing
each group as it is added.
Advantages of Integrating Testing
• It helps to identify and resolve issues that may arise when different units of the
software are combined.
• It helps to ensure that the different units of the software work together as
intended.
• It helps to improve the overall reliability and stability of the software.
• It’s important to keep in mind that Integration testing is essential for complex
systems where different components are integrated.
• As with unit testing, integration testing is only one aspect of software testing and
it should be used in combination with other types of testing such as unit testing,
functional testing, and acceptance testing to ensure that the software meets the
needs of its users.
The objective is to take unit-tested components and build a program structure that
has been dictated by design. Integration testing is testing in which a group of
components is combined to produce output.
Integration testing is of four types: (i) Top-down (ii) Bottom-up (iii) Sandwich (iv)
Big-Bang
Example:
1. Black Box testing: It is used for validation. In this, we ignore internal working
mechanisms and focus on “what is the output?”
2. White box testing: It is used for verification. In this, we focus on internal
mechanisms i.e. how the output is achieved.
3. System Testing
System testing is a type of software testing that evaluates the overall functionality
and performance of a complete and fully integrated software solution. It tests if the
system meets the specified requirements and if it is suitable for delivery to the end-
users. This type of testing is performed after the integration testing and before the
acceptance testing.
System Testing is a type of software testing that is performed on a completely integrated system to evaluate the compliance of the system with the corresponding requirements. In system testing, components that have passed integration testing are taken as input. The goal is to detect any irregularity between the units that are integrated.
Advantages of System Testing:
• The testers do not require knowledge of the internal programming to carry out this testing.
• It will test the entire product or software so that we will easily detect the errors
or defects that cannot be identified during the unit testing and integration
testing.
• The testing environment is similar to that of the real-time production or business
environment.
• It checks the entire functionality of the system with different test scripts and
also it covers the technical and business requirements of clients.
• After this testing, almost all the possible bugs or errors will have been covered, and hence the development team can confidently go ahead with acceptance testing.
Types of Integration Testing
1. Incremental Testing
2. Non-Incremental Testing
1. Incremental Testing
Like development, testing is also a phase of the SDLC (Software Development Life Cycle). Different tests are performed at different stages of the development cycle. Incremental testing is one of the approaches commonly used during the integration testing phase, which is performed after unit testing. Several stubs and drivers are used to test the modules one after another, which helps in discovering errors and defects in specific modules.
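As a small illustration of that idea, here is a hypothetical Python sketch: the upper-level order module is integrated and tested while the real payment module is still unfinished, so a stub stands in for it. All names here are invented for the example.

# A stub replaces the unfinished lower-level payment module with a
# canned, predictable answer so the upper layer can be tested now.
def payment_stub(amount):
    return {"status": "approved", "amount": amount}

# Upper-level module under integration; the collaborator is injectable.
def place_order(amount, pay=payment_stub):
    receipt = pay(amount)
    return receipt["status"] == "approved"

# Driver code exercising the integrated upper layer against the stub.
assert place_order(100.0) is True
print("order flow integrates cleanly with the payment interface")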
Advantages of Incremental Testing
• Each module has its specific significance. Each one gets a role to play during the
testing as they are incremented individually.
• Defects are detected in smaller modules, rather than having to locate errors in large files and then edit and re-correct them.
• It’s more flexible and cost-efficient as per requirements and scopes.
• The customer gets the chance to respond to each build.
There are 2 Types of Incremental Testing
1. Top-down Integration Testing
2. Bottom-up Integration Testing
1. Top-down Integration Testing
Top-down testing is a type of incremental integration testing approach in which
testing is done by integrating or joining two or more modules by moving down from
top to bottom through the control flow of the architecture structure. In these, high-
level modules are tested first, and then low-level modules are tested. Then, finally,
integration is done to ensure that the system is working properly. Stubs and drivers
are used to carry out this testing. This technique is used to simulate the behavior of lower-level modules that are not yet integrated.
Advantages of Top-Down Integration Testing
1. There is no need to write drivers.
2. Interface errors are identified at an early stage, and fault localization is also easier.
3. Important high-level modules are tested well, while unimportant low-level utilities receive comparatively less testing.
4. Representation of test cases is easier and simpler once Input-Output functions
are added.
2. Bottom-up Integration Testing
Bottom-up Testing is a type of incremental integration testing approach in which
testing is done by integrating or joining two or more modules by moving upward from
bottom to top through the control flow of the architecture structure. In these, low-
level modules are tested first, and then high-level modules are tested. This approach is likened to inductive reasoning and is often used as a synonym for synthesis. Bottom-up testing is considered user-friendly and contributes to overall software development efficiency, with high success rates and long-lasting results.
Advantages of Bottom-up Integration Testing
• It is easy and simple to create and develop test conditions.
• It is also easy to observe test results.
• It is not necessary to know about the details of the structural design.
• Low-level utilities are also tested well and are also compatible with the object-
oriented structure.
Types of Non-functional Testing
1. Performance Testing
2. Usability Testing
3. Compatibility Testing
1. Performance Testing
Performance Testing is a type of software testing that ensures software applications
perform properly under their expected workload. It is a testing technique carried out
to determine system performance in terms of sensitivity, reactivity, and stability under
a particular workload.
Performance testing is a type of software testing that focuses on evaluating the
performance and scalability of a system or application. The goal of performance
testing is to identify bottlenecks, measure system performance under various loads
and conditions, and ensure that the system can handle the expected number of users
or transactions.
Advantages of Performance Testing
• Performance testing ensures the speed, load capability, accuracy, and other
performances of the system.
• It identifies, monitors, and resolves the issues if anything occurs.
• It ensures the great optimization of the software and also allows many users to
use it at the same time.
• It ensures client satisfaction as well as end-customer satisfaction.
• Identifying bottlenecks: Performance testing helps identify bottlenecks in the
system such as slow database queries, insufficient memory, or network
congestion. This helps developers optimize the system and ensure that it can
handle the expected number of users or transactions.
2. Usability Testing
You design a product (say a refrigerator), and when it becomes completely ready, you need potential customers to test it to check whether it works. To understand whether the machine is ready to come onto the market, potential customers test the machines. Likewise, the best example of usability testing is when software undergoes various testing processes performed by potential users before launching into the market. It is a part of the software development lifecycle (SDLC).
Advantages and Disadvantages of Usability Testing
Usability testing is preferred for evaluating a product or service by testing it with representative users. In usability testing, the development and design teams identify issues before they are coded, so issues are resolved earlier. During a usability test, you can:
• Learn whether participants are able to complete the specific task completely.
• Identify how long it takes to complete the specific task.
Usability testing also:
• Gives excellent features and functionalities to the product.
• Improves user satisfaction and fulfills requirements based on user feedback.
• Makes the product more efficient and effective.
3. Compatibility Testing
Compatibility testing is software testing that comes under the non-functional testing category; it is performed on an application to check its compatibility (running capability) on different platforms and environments. This testing is done only when the application becomes stable. Simply put, the compatibility test aims to check the developed software application's functionality on various software and hardware platforms, networks, browsers, etc. Compatibility testing is very important from the product production and implementation point of view, as it is performed to avoid future issues regarding compatibility.
Advantages of Compatibility Testing
• It ensures complete customer satisfaction.
• It provides service across multiple platforms.
• Identifying bugs during the development process.
There are 4 Types of Performance Testing
1. Load Testing
2. Stress Testing
3. Scalability Testing
4. Stability Testing
1. Load Testing
Load testing determines the behavior of the application when multiple users use it at
the same time. It is the response of the system measured under varying load
conditions.
1. Load testing is carried out for normal and extreme load conditions.
2. Load testing is a type of performance testing that simulates a real-world load on
a system or application to see how it performs under stress.
3. The goal of load testing is to identify bottlenecks and determine the maximum
number of users or transactions the system can handle.
4. It is an important aspect of software testing as it helps ensure that the system
can handle the expected usage levels and identify any potential issues before the
system is deployed to production.
Advantages of Load Testing:
Load testing has several advantages that make it an important aspect of software
testing:
1. Identifying bottlenecks: Load testing helps identify bottlenecks in the system
such as slow database queries, insufficient memory, or network congestion. This
helps developers optimize the system and ensure that it can handle the expected
number of users or transactions.
2. Improved scalability: By identifying the system’s maximum capacity, load testing
helps ensure that the system can handle an increasing number of users or
transactions over time. This is particularly important for web-based systems and
applications that are expected to handle a high volume of traffic.
3. Improved reliability: Load testing helps identify any potential issues that may
occur under heavy load conditions, such as increased error rates or slow
response times. This helps ensure that the system is reliable and stable when it is
deployed to production.
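As an illustration only, the sketch below simulates many concurrent "users" with a Python thread pool and reports response times; the request function is a hypothetical stand-in, and a real load test would drive a deployed system with a dedicated tool such as JMeter or Locust.

import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    # Hypothetical stand-in for a network call to the system under test.
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

# Simulate 500 requests issued by 50 concurrent users.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(fake_request, range(500)))

avg_ms = sum(latencies) / len(latencies) * 1000
print(f"avg latency: {avg_ms:.1f} ms, max: {max(latencies) * 1000:.1f} ms")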
2. Stress Testing
In stress testing, we give unfavorable conditions to the system and check how it performs in those conditions.
Example:
1. Test cases that require maximum memory or other resources are executed.
2. Test cases that may cause thrashing in a virtual operating system.
3. Test cases that may cause excessive disk requirements.
Stress testing is closely related to performance testing, which is designed to test the run-time performance of software within the context of an integrated system. Performance testing is used to test the speed and effectiveness of the program and is also called load testing; in it, we check what the performance of the system is under a given load.
Example:
Checking several processor cycles.
3. Scalability Testing
Scalability Testing is a type of non-functional testing in which the performance of a
software application, system, network, or process is tested in terms of its capability
to scale up or scale down the number of user request load or other such performance
attributes. It can be carried out at a hardware, software or database level. Scalability
Testing is defined as the ability of a network, system, application, product or a
process to perform the function correctly when changes are made in the size or
volume of the system to meet a growing need. It ensures that a software product can
manage the scheduled increase in user traffic, data volume, transaction frequency, and many other things. It tests the system's, process's, or database's ability
to meet a growing need.
Advantages of Scalability Testing
• It provides more accessibility to the product.
• It detects issues with web page loading and other performance issues.
• It finds and fixes the issues earlier in the product which saves a lot of time.
• It ensures the end-user experience under the specific load. It provides customer
satisfaction.
• It helps in effective tool utilization tracking.
4. Stability Testing
Stability testing is a type of software testing that checks the quality and behavior of the software under different environmental parameters. It is defined as the ability of the product to continue to function over time without failure.
It is a Non-functional Testing technique that focuses on stressing the software
component to the maximum. Stability testing is done to check the efficiency of a
developed product beyond normal operational capacity, which is known as the break point. It has higher significance in error handling, software reliability, robustness, and
scalability of a product under heavy load rather than checking the system behavior
under normal circumstances.
Stability testing assesses stability problems. This testing is mainly intended to check
whether the application will crash at any point in time or not.
Advantages of Stability Testing
1. It gives the limit of the data that a system can handle practically.
2. It provides confidence in the performance of the system.
3. It determines the stability and robustness of the system under load.
4. Stability testing leads to a better end-user experience.
Other Types of Testing
1. Smoke Testing
Smoke testing is done to make sure that the software under testing is ready or stable for further testing.
It is called a smoke test because the initial pass is done to check that the system does not "catch fire or smoke" when first switched on.
Example:
If the project has 2 modules, then before going to module 2, make sure that module 1 works properly.
Advantages of Smoke Testing
1. Smoke testing is easy to perform.
2. It helps in identifying defects in the early stages.
3. It improves the quality of the system.
4. Smoke testing reduces the risk of failure.
5. Smoke testing makes progress easier to assess.
2. Sanity Testing
It is a subset of regression testing. Sanity testing is performed to ensure that the code changes that were made are working properly. Sanity testing acts as a checkpoint to decide whether testing of the build can proceed or not. The focus of the team during the sanity testing process is to validate the functionality of the application and not to perform detailed testing. Sanity testing is generally performed on a build where production deployment is required immediately, such as for a critical bug fix.
Advantages of Sanity Testing
• Sanity testing helps to quickly identify defects in the core functionality.
• It can be carried out in less time as no documentation is required for sanity
testing.
• If defects are found during sanity testing, the build is rejected, which helps save the time needed to execute regression tests.
• This testing technique is not so expensive when compared to another type of
testing.
• It helps to identify the dependent missing objects.
3. Regression Testing
Regression testing is the process of testing the modified parts of the code, and the parts that might be affected by the modifications, to ensure that no new errors have been introduced into the software after the modifications were made. Regression means the return of something, and in the software field, it refers to the return of a bug.
Advantages of Regression Testing
• It ensures that no new bugs have been introduced after adding new
functionalities to the system.
• Most of the test cases used in regression testing are selected from the existing test suite, and their expected outputs are already known. Hence, regression testing can be easily automated using automated tools.
• It helps to maintain the quality of the source code.
4. Acceptance Testing
Acceptance testing is done by the customers to check whether the delivered product performs the desired tasks or not, as stated in the requirements. Object-oriented testing approaches can be used for discussing test plans and for executing the projects.
Advantages of Acceptance Testing
1. This testing helps the project team to know the further requirements of the users
directly as it involves the users for testing.
2. Automated test execution.
3. It brings confidence and satisfaction to the clients as they are directly involved in
the testing process.
4. It is easier for the user to describe their requirement.
5. It covers only the Black-Box testing process and hence the entire functionality of
the product will be tested.
What is White Box Testing?
White box testing is a software testing technique that involves testing the internal
structure and workings of a software application. The tester has access to the source
code and uses this knowledge to design test cases that can verify the correctness of
the software at the code level.
White box testing is also known as structural testing or code-based testing, and it is
used to test the software’s internal logic, flow, and structure. The tester creates test
cases to examine the code paths and logic flows to ensure they meet the specified
requirements.
Types Of White Box Testing
White box testing can be done for different purposes. The three main types are:
1. Unit Testing
2. Integration Testing
3. Regression Testing
Unit Testing
• Checks if each part or function of the application works correctly.
• Ensures the application meets design requirements during development.
Integration Testing
• Examines how different parts of the application work together.
• Done after unit testing to make sure components work well both alone and
together.
Regression Testing
• Verifies that changes or updates don’t break existing functionality.
• Ensures the application still passes all existing tests after updates.
White Box Testing Techniques
One of the main benefits of white box testing is that it allows for testing every part of
an application. To achieve complete code coverage, white box testing uses the
following techniques:
1. Statement Coverage
In this technique, the aim is to traverse all statements at least once. Hence, each line
of code is tested. In the case of a flowchart, every node must be traversed at least
once. Since all lines of code are covered, it helps in pointing out faulty code.
(Figure: Statement Coverage Example)
2. Branch Coverage
In this technique, test cases are designed so that each branch from all decision points
is traversed at least once. In a flowchart, all edges must be traversed at least once.
(Figure: 4 test cases are required so that all branches of all decision points are covered, i.e., all edges of the flowchart are traversed.)
3. Condition Coverage
In this technique, all individual conditions must be covered as shown in the following
example:
• READ X, Y
• IF(X == 0 || Y == 0)
• PRINT ‘0’
• #TC1 – X = 0, Y = 55
• #TC2 – X = 5, Y = 0
4. Multiple Condition Coverage
In this technique, all the possible combinations of the possible outcomes of
conditions are tested at least once. Let’s consider the following example:
• READ X, Y
• IF(X == 0 || Y == 0)
• PRINT ‘0’
• #TC1: X = 0, Y = 0
• #TC2: X = 0, Y = 5
• #TC3: X = 55, Y = 0
• #TC4: X = 55, Y = 5
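The two pseudo-code fragments above can be rendered as a small runnable check. The following Python sketch encodes the same decision (X == 0 || Y == 0) and replays the listed test cases for condition coverage and multiple condition coverage.

def prints_zero(x, y):
    # The decision from the examples above: X == 0 || Y == 0
    return x == 0 or y == 0

# Condition coverage: each atomic condition takes both outcomes.
assert prints_zero(0, 55) is True    # TC1: X == 0 true, Y == 0 false
assert prints_zero(5, 0) is True     # TC2: X == 0 false, Y == 0 true

# Multiple condition coverage: all four outcome combinations.
cases = [(0, 0, True), (0, 5, True), (55, 0, True), (55, 5, False)]
for x, y, expected in cases:
    assert prints_zero(x, y) == expected
print("all listed coverage cases behave as expected")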
5. Basis Path Testing
In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths
so that the minimal number of test cases can be designed for each independent
path. Steps:
• Make the corresponding control flow graph
• Calculate the cyclomatic complexity
• Find the independent paths
• Design test cases corresponding to each independent path
• V(G) = P + 1, where P is the number of predicate nodes in the flow graph
• V(G) = E – N + 2, where E is the number of edges and N is the total number of
nodes
• V(G) = Number of non-overlapping regions in the graph
• #P1: 1 – 2 – 4 – 7 – 8
• #P2: 1 – 2 – 3 – 5 – 7 – 8
• #P3: 1 – 2 – 3 – 6 – 7 – 8
• #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
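To illustrate the arithmetic (the original flowchart is not reproduced here, so the counts are assumed for the example): all three formulas should agree on V(G) = 4 for the four paths listed above. A graph with 3 predicate nodes gives V(G) = 3 + 1 = 4; equivalently, a graph with 10 edges and 8 nodes gives V(G) = 10 - 8 + 2 = 4; and such a graph has 4 non-overlapping regions. Four independent paths therefore require a minimum of four test cases, one per path.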
6. Loop Testing
Loops are widely used and are fundamental to many algorithms; hence, their testing is very important. Errors often occur at the beginnings and ends of loops.
• Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Make only one pass through the loop
3. Make 2 passes
4. Make m passes, where m < n
5. Make n-1, n, and n+1 passes
• Nested loops: For nested loops, all the loops are set to their minimum count,
and we start from the innermost loop. Simple loop tests are conducted for the
innermost loop and this is worked outwards till all the loops have been tested.
• Concatenated loops: Independent loops, one after another. Simple loop tests
are applied for each. If they’re not independent, treat them like nesting.
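A tiny Python sketch of the simple-loop checklist: it generates the boundary pass counts for a loop of size n and runs a trivial (hypothetical) loop under test at each of them.

def boundary_pass_counts(n, m=None):
    # Pass counts from the checklist: 0, 1, 2, m (< n), n-1, n, n+1.
    m = m if m is not None else n // 2
    return [0, 1, 2, m, n - 1, n, n + 1]

def loop_under_test(items):
    # Hypothetical loop under test: sums a list.
    total = 0
    for x in items:
        total += x
    return total

for k in boundary_pass_counts(10):
    # Drive the loop through exactly k iterations and check the result.
    assert loop_under_test([1] * k) == k
print("loop behaves correctly at all boundary pass counts")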
Black Box vs White Box vs Gray Box Testing
Here is a simple comparison of Black Box, White Box, and Gray Box testing,
highlighting key aspects:
Aspect                     | Black Box Testing                                            | White Box Testing                                                               | Gray Box Testing
Knowledge of Internal Code | Not required                                                 | Required                                                                        | Partially required
Other Names                | Functional testing, data-driven testing, closed box testing | Structural testing, clear box testing, code-based testing, transparent testing | Translucent testing
Approach                   | Trial and error, based on external functionality            | Verification of internal coding, system boundaries, and data domains           | Combination of both black box and white box approaches
Test Case Input Size       | Largest                                                      | Smaller compared to Black Box                                                   | Smaller than both Black Box and White Box
Finding Hidden Errors      | Difficult                                                    | Easier due to internal code access                                              | Challenging; may be found at user level
Algorithm Testing          | Not suitable                                                 | Well-suited and recommended                                                     | Not suitable
Time Consumption           | Depends on functional specifications                        | High due to complex code analysis                                               | Moderate; faster than White Box
Process of White Box Testing
1. Input: Requirements, Functional specifications, design documents, source code.
2. Processing: Performing risk analysis to guide through the entire process.
3. Proper test planning: Designing test cases to cover the entire code; execute and repeat until error-free software is reached, communicating the results along the way.
4. Output: Preparing the final report of the entire testing process.
White box testing is performed in 2 steps:
1. The tester should understand the code well.
2. The tester should write code for test cases and execute them.
Tools required for White box testing:
• PyUnit
• Sqlmap
• Nmap
• Parasoft Jtest
• Nunit
• VeraUnit
• CppUnit
• Bugzilla
• Fiddler
• JSUnit.net
• OpenGrok
• Wireshark
• HP Fortify
• CSUnit
Features of White box Testing
• Code coverage analysis
• Access to the source code
• Knowledge of programming languages
• Identifying logical errors
• Optimization of code
Advantages of White Box Testing
1. Thorough Testing: White box testing is thorough as the entire code and
structures are tested.
2. Code Optimization: It results in the optimization of code removing errors and
helps in removing extra lines of code.
3. Early Detection of Defects: It can start at an earlier stage as it doesn’t require
any interface as in the case of black box testing.
4. Integration with SDLC: White box testing can be easily started early in the Software Development Life Cycle.
5. Detection of Complex Defects: Testers can identify defects that cannot be
detected through other testing techniques.
6. Comprehensive Test Cases: Testers can create more comprehensive and
effective test cases that cover all code paths.
7. Coding Standards: Testers can ensure that the code meets coding standards and is optimized for performance.
Disadvantages of White Box Testing
1. Programming Knowledge and Source Code Access: Testers need to have
programming knowledge and access to the source code to perform tests.
2. Overemphasis on Internal Workings: Testers may focus too much on the
internal workings of the software and may miss external issues.
3. Bias in Testing: Testers may have a biased view of the software since they are
familiar with its internal workings.
4. Test Case Overhead: Redesigning code and rewriting code needs test cases to be
written again.
5. Dependency on Tester Expertise: Testers are required to have in-depth
knowledge of the code and programming language as opposed to black-box
testing.
6. Inability to Detect Missing Functionalities: Missing functionalities cannot be
detected as the code that exists is tested.
7. Increased Production Errors: High chances of errors in production.
Verification and Validation
Verification and Validation is the process of investigating whether a software system
satisfies specifications and standards and fulfills the required purpose. Barry
Boehm described verification and validation as the following:
Verification: Are we building the product right?
Validation: Are we building the right product?
Verification
Verification is the process of checking that software achieves its goal without any
bugs. It is the process to ensure whether the product that is developed is right or not.
It verifies whether the developed product fulfills the requirements that we have.
Verification is simply known as Static Testing.
Static Testing
Verification testing is known as static testing, and it can be simply termed as checking whether we are building the product right, i.e., whether the evolving work products conform to their specifications, without executing the code. Here are some of the activities that are involved in verification.
• Inspections
• Reviews
• Walkthroughs
• Desk-checking
Validation
Validation is the process of checking whether the software product is up to the mark, or in other words, whether the product meets the high-level requirements. It is the process of checking the validity of the product, i.e., it checks that what we are developing is the right product; it is a comparison of the actual and expected products. Validation is simply known as Dynamic Testing.
Dynamic Testing
Validation testing is known as dynamic testing, in which we examine whether we have developed the right product or not, i.e., whether it meets the business needs of the client. Here are some of the activities that are involved in validation.
1. Black Box Testing
2. White Box Testing
3. Unit Testing
4. Integration Testing
Note: Verification is followed by Validation.
Characteristics of Good Software
1. Operational
This category covers the factors that decide how the software performs in operations. It can be measured on:
• Budget
• Usability
• Efficiency
• Correctness
• Functionality
• Dependability
• Security
• Safety
2. Transitional
When the software is moved from one platform to another, the factors deciding the software quality are:
• Portability
• Interoperability
• Reusability
• Adaptability
3. Maintenance
This category includes all the factors that describe how well the software can maintain itself in an ever-changing environment:
• Modularity
• Maintainability
• Flexibility
• Scalability
UNIT – 5
UML-Relationship
Relationships depict a connection between several things, such as structural, behavioral, or grouping things, in the Unified Modeling Language. Since a relationship is termed a link, it demonstrates how things are interrelated at the time of system execution. There are four types of relationships: dependency, association, generalization, and realization.
Static diagrams in UML
Static diagrams describe the state of a system from a variety of perspectives. A static diagram describes what a piece of the system is; a dynamic diagram describes what a portion of the system is doing. There are seven types of static (structure) diagrams: class, object, component, composite structure, package, deployment, and profile diagrams.
Class Diagram | Unified Modeling Language (UML)
Class diagrams are a type of UML (Unified Modeling Language)
diagram used in software engineering to visually represent the
structure and relationships of classes in a system. UML is a
standardized modeling language that helps in designing and
documenting software systems. They are an integral part of the
software development process, helping in both the design and
documentation phases.
What are class Diagrams?
Class diagrams are a type of UML (Unified Modeling Language) diagram used in software engineering to visually represent the structure and relationships of classes within a system, i.e., they are used to construct and visualize object-oriented systems.
In these diagrams, classes are depicted as boxes, each
containing three compartments for the class name, attributes,
and methods. Lines connecting classes illustrate associations,
showing relationships such as one-to-one or one-to-many.
Class diagrams provide a high-level overview of a system’s
design, helping to communicate and document the structure of
the software. They are a fundamental tool in object-oriented
design and play a crucial role in the software development
lifecycle.
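As a plain-text illustration of those three compartments and an association, here is a hypothetical pair of classes; the names, attributes, operations, and multiplicities are invented for the example.

+-------------------+    1      places      *   +-------------------+
|     Customer      |---------------------------|       Order       |
+-------------------+                           +-------------------+
| - name: String    |                           | - total: Money    |
| - email: String   |                           | - date: Date      |
+-------------------+                           +-------------------+
| + register()      |                           | + addItem()       |
| + placeOrder()    |                           | + checkout()      |
+-------------------+                           +-------------------+

Here one Customer is associated with many Orders (a one-to-many association), and each box shows the class name, its attributes, and its methods, matching the compartments described above.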