Software Engineering and UML
What is software?
Software is like the brain of an electronic device: a set of instructions that allows the
user to control the device.
There is a wide variety of software on the market, such as programs installed on
computers, mobile applications, or even the control system of a robot vacuum cleaner.
Without software, these devices are nothing more than electronic shells; after all, without an
operating system, a cell phone cannot even make a simple call.
Likewise, on a computer without an operating system such as Windows, Linux, or macOS, it
is impossible to perform any task.
In most cases, software is installed in device storage, be it an HDD, an SSD, or a memory
card, among others.
Software can be acquired by downloading it from the developer's page, from stores such as
Google Play and the App Store, or on physical media sold in shops.
(iii) Practitioner's Myths:
Myth 1:
Practitioners believe that their work is done once the program is written and working.
Fact:
• In fact, 60-80% of all effort is expended after the software is first
delivered to the customer, during the maintenance phase. The job is far from
over at the first release.
Myth 2:
Until the program is "running", there is no way of assessing its quality.
Fact:
• Systematic technical reviews of project work products are an effective software
quality verification method. These reviews act as quality filters and are often
more effective than testing at finding certain classes of errors.
Myth 3:
The only deliverable work product of a successful project is the working program.
Fact:
• A working system is not enough; the right documentation (manuals and guides) is
also required to provide guidance and software support.
Myth 4:
Software engineering will make us create voluminous, unnecessary documentation and
will invariably slow us down.
Fact:
• Software engineering is not about creating documents. It is about creating a
quality product. Better quality leads to reduced rework, and reduced rework
results in faster delivery times.
Sequential Model
What is the SDLC Waterfall Model?
The waterfall model is a software development model used in the context of large,
complex projects, typically in the field of information technology. It is characterized by
a structured, sequential approach to project management and software development.
The waterfall model is useful in situations where the project requirements are well-
defined and the project goals are clear. It is often used for large-scale projects with
long timelines, where there is little room for error and the project stakeholders need
to have a high level of confidence in the outcome.
RAD Model
What is the RAD (Rapid Application Development) Model?
The critical feature of this model is the use of powerful development tools and
techniques. A software project can be implemented using this model if the project
can be broken down into small modules wherein each module can be assigned
independently to separate teams. These modules can finally be combined to form the
final product. Development of each module involves the same basic steps as in the
waterfall model, i.e. analyzing, designing, coding, and then testing. Another striking
feature of this model is its short duration: the time frame for delivery (time-box) is
generally 60-90 days.
Multiple teams work on developing the software system in parallel using the RAD
model.
The use of powerful developer tools such as JAVA, C++, Visual BASIC, XML, etc. is also
an integral part of the projects. This model consists of 4 basic phases:
1. Requirements Planning – This involves the use of various requirements
elicitation techniques such as brainstorming, task analysis, form analysis, user
scenarios, FAST (Facilitated Application Specification Technique), etc. It also
produces the overall structured plan describing the critical data, the methods to
obtain it, and how it is processed to form the final refined model.
2. User Description – This phase consists of taking user feedback and building the
prototype using developer tools. In other words, it includes re-examination and
validation of the data collected in the first phase. The dataset attributes are also
identified and elucidated in this phase.
3. Construction – In this phase, refinement of the prototype and delivery takes
place. It includes the actual use of powerful automated tools to transform
processes and data models into the final working product. All the required
modifications and enhancements are to be done in this phase.
4. Cutover – All the interfaces between the independent modules developed by
separate teams have to be tested properly. The use of powerful automated
tools and subparts makes testing easier. This is followed by acceptance testing
by the user.
The process involves building a rapid prototype, delivering it to the customer, and
taking feedback. After validation by the customer, the SRS document is developed
and the design is finalized.
When to use the RAD Model?
• Well-understood Requirements
• Time-sensitive projects: tight deadlines
• Small to medium-sized projects
• Innovation and creativity
Evolutionary Model
Incremental Model
In the incremental model, we first build the project with basic features and then
evolve it in every iteration; the model is mainly used for large projects. The first step
is to gather requirements; we then perform analysis, design, coding, and testing, and
this process repeats over and over until the final product is ready.
Spiral Model
1. The exact number of phases needed to develop the product can be varied by the
project manager depending upon the project risks.
2. As the project manager dynamically determines the number of phases, the
project manager has an important role in developing a product using the spiral
model.
3. It is based on the idea of a spiral, with each iteration of the spiral representing a
complete software development cycle, from requirements gathering and
analysis to design, implementation, testing, and maintenance.
Each phase of the Spiral Model is divided into four quadrants. The functions of
these four quadrants are discussed below:
1. Objectives determination and identify alternative solutions: Requirements are
gathered from the customers and the objectives are identified, elaborated, and
analyzed at the start of every phase. Then alternative solutions possible for the
phase are proposed in this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions
are evaluated to select the best possible solution. Then the risks associated with
that solution are identified and the risks are resolved using the best possible
strategy. At the end of this quadrant, the Prototype is built for the best possible
solution.
3. Develop the next version of the Product: During the third quadrant, the
identified features are developed and verified through testing. At the end of the
third quadrant, the next version of the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers
evaluate the so-far developed version of the software. In the end, planning for
the next phase is started.
Formal Methods
Formal methods are techniques we use in computer science and software
engineering to ensure the correctness of our programs and reduction in our programs'
errors. They rely on math and logic to model and analyze system behavior, making
systems more reliable and secure.
In this Answer, we will be discussing how formal methods are useful in the field of
software engineering.
Why do we use formal methods?
• Correctness and Reliability
• Early Error Detection
• Verification and Validation
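As a toy illustration only (real formal methods use model checkers or proof assistants
such as TLA+, Coq, or Z3; the function below is a hypothetical example), a specification
can be stated as pre- and postconditions attached to code:

    # A hypothetical integer-division function with an explicit contract.
    # Formal verification would prove these properties for all inputs;
    # assertions only check them for the inputs actually executed.
    def int_divide(a: int, b: int) -> int:
        assert b > 0, "precondition: divisor must be positive"
        q, r = a // b, a % b
        assert a == q * b + r and 0 <= r < b, "postcondition: Euclidean division"
        return q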
UNIT – 2
Analysis principles – Analysis Modelling in Software
Engineering
Analysis Model is a technical representation of the system. It acts as a link between
the system description and the design model. In Analysis Modelling, information,
behavior, and functions of the system are defined and translated into the
architecture, component, and interface level design in the design modeling.
1. Data Dictionary:
It is a repository that consists of a description of all data objects used or
produced by the software. It stores the collection of data present in the
software. It is a very crucial element of the analysis model. It acts as a
centralized repository and also helps in modeling data objects defined during
software requirements.
2. Process Specification:
It stores the description of each function present in the data flow diagram. It
describes the input to a function, the algorithm that is applied for the
transformation of input, and the output that is produced. It also shows
regulations and barriers imposed on the performance characteristics that apply
to the process and layout constraints that could influence how the process will
be implemented.
3. Control Specification:
It stores additional information about the control aspects of the software. It is
used to indicate how the software behaves when an event occurs and which
processes are invoked due to the occurrence of the event. It also provides the
details of the processes which are executed to manage events.
Data Modification
Data Modification refers to the process of altering data within a database system.
It is a critical aspect of software development that ensures the accuracy,
consistency, and integrity of stored data. Data modification can be performed
through various operations such as:
• INSERT: Adding new records to a table.
• UPDATE: Modifying existing records with new data.
• DELETE: Removing records from a database.
Data modification commands must be used with caution to prevent data
corruption and should be managed within transactions to maintain data integrity.
This process is fundamental for dynamic applications that rely on persistent data
storage and manipulation.
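As a minimal sketch of these operations run inside a transaction (using Python's
built-in sqlite3 module; the accounts table, its schema, and the values are
illustrative only):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")

    try:
        # Using the connection as a context manager makes the block transactional:
        # it commits on success and rolls back if any statement raises an error.
        with conn:
            conn.execute("INSERT INTO accounts (name, balance) VALUES (?, ?)", ("Alice", 100.0))
            conn.execute("UPDATE accounts SET balance = balance - 25 WHERE name = ?", ("Alice",))
            conn.execute("DELETE FROM accounts WHERE balance <= 0")
    except sqlite3.Error as exc:
        print("Transaction rolled back:", exc)

    print(conn.execute("SELECT name, balance FROM accounts").fetchall())

Because all three statements sit in one transaction, a failure in any of them leaves the
table unchanged, which is exactly the integrity guarantee described above.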
All arrows should be labeled in a DFD. A double line is used to represent a data store.
There may be an implicit procedure or sequence in the diagram, but explicit logical
details are generally deferred until software design.
UNIT – 3
Software project planning starts before technical work begins. The various planning
activities include the following:
Cost Estimation
Cost estimation is the technique used to predict the cost of developing and testing
software: the financial spend on the effort required to build the product. Cost
estimation models are mathematical algorithms or parametric equations that are
used to estimate the cost of a product or a project. Various such techniques, also
known as Cost Estimation Models, are available.
COCOMO Model
What is the COCOMO Model?
The COCOMO Model (Constructive Cost Model) is an algorithmic cost estimation model
for software projects, often used to reliably predict the various parameters associated
with a project such as size, effort, cost, time, and quality. It was proposed by Barry
Boehm in 1981 and is based on a study of 63 projects, which makes it one of the
best-documented models.
The key parameters that define the quality of any software product, which are also
an outcome of COCOMO, are primarily effort and schedule:
1. Effort: Amount of labor that will be required to complete a task. It is measured
in person-months units.
2. Schedule: This simply means the amount of time required for the completion of
the job, which is, of course, proportional to the effort put in. It is measured in
the units of time such as weeks, and months.
Types of Projects in the COCOMO Model
In the COCOMO model, software projects are categorized into three types based on
their complexity, size, and the development environment. These types are:
1. Organic: A software project is said to be an organic type if the team size
required is adequately small, the problem is well understood and has been solved
in the past and also the team members have a nominal experience regarding the
problem.
2. Semi-detached: A software project is said to be of the semi-detached type if vital
characteristics such as team size, experience, and knowledge of the various
programming environments lie between organic and embedded. Projects
classified as semi-detached are comparatively less familiar and more difficult to
develop than organic ones, and require more experience, better guidance, and
creativity. E.g., compilers or various embedded systems can be considered
semi-detached types.
3. Embedded: A software project requiring the highest level of complexity,
creativity, and experience requirement falls under this category. Such software
requires a larger team size than the other two models and also the developers
need to be sufficiently experienced and creative to develop such complex
models.
Comparison of these three types of Projects in COCOMO Model
Different models of COCOMO have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of
these models can be applied to a variety of projects, whose characteristics determine
the values of the constants used in the subsequent calculations, following Boehm's
definitions of organic, semi-detached, and embedded systems described above.
Importance of the COCOMO Model
1. Cost Estimation: To help with resource planning and project budgeting,
COCOMO offers a methodical approach to software development cost
estimation.
2. Resource Management: By taking team experience, project size, and complexity
into account, the model helps with efficient resource allocation.
3. Project Planning: COCOMO assists in developing practical project plans that
include attainable objectives, due dates, and benchmarks.
4. Risk management: Early in the development process, COCOMO assists in
identifying and mitigating potential hazards by including risk elements.
5. Support for Decisions: During project planning, the model provides a
quantitative foundation for choices about scope, priorities, and resource
allocation.
6. Benchmarking: To compare and assess various software development projects
to industry standards, COCOMO offers a benchmark.
7. Resource Optimization: The model helps to maximize the use of resources,
which raises productivity and lowers costs.
Types of COCOMO Model
There are three types of COCOMO Model:
• Basic COCOMO Model
• Intermediate COCOMO Model
• Detailed COCOMO Model
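For reference, the Basic COCOMO equations estimate effort as E = a × (KLOC)^b
person-months and development time as D = c × (E)^d months, where the constants a, b,
c, d depend on the project type. A minimal sketch using Boehm's published Basic
COCOMO constants:

    # Basic COCOMO constants (a, b, c, d) per project type, as published by Boehm.
    COEFFS = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc: float, project_type: str):
        a, b, c, d = COEFFS[project_type]
        effort = a * kloc ** b        # effort in person-months
        schedule = c * effort ** d    # development time in months
        return effort, schedule

    # Example: a 32 KLOC organic project.
    effort, months = basic_cocomo(32, "organic")
    print(f"Effort: {effort:.1f} person-months, Schedule: {months:.1f} months")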
Putnam noticed that software staffing profiles follow the well-known Rayleigh distribution.
He used this observation about productivity levels to derive the software equation:
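In its commonly cited form, the software equation is

    L = C_k × K^(1/3) × t_d^(4/3)

where L is the product size (in source lines of code), C_k is a technology constant
reflecting the development environment, K is the total life-cycle effort in
person-years, and t_d is the development time in years.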
Unit Testing
• Checks if each part or function of the application works correctly.
• Ensures the application meets design requirements during development.
Integration Testing
• Examines how different parts of the application work together.
• Done after unit testing to make sure components work well both alone and
together.
Regression Testing
• Verifies that changes or updates don’t break existing functionality.
• Ensures the application still passes all existing tests after updates.
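As a minimal illustration of the unit testing described above (a hypothetical add
function, checked with Python's built-in unittest module):

    import unittest

    def add(a, b):
        # Hypothetical function under test.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()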
2. Branch Coverage
In this technique, test cases are designed so that each branch from all decision points
is traversed at least once. In a flowchart, all edges must be traversed at least once.
In the example flowchart (not reproduced here), 4 test cases are required so that all branches of all decisions, i.e., all edges of the flowchart, are covered.
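As a small illustrative sketch (a hypothetical function, not the flowchart from the
original figure), a single decision point needs at least two test cases, one per branch:

    def classify(x):
        if x >= 0:                   # decision point
            return "non-negative"    # true branch
        return "negative"            # false branch

    # Two test cases traverse both branches of the decision:
    assert classify(5) == "non-negative"   # exercises the true branch
    assert classify(-1) == "negative"      # exercises the false branch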
3. Condition Coverage
In this technique, test cases are designed so that each individual condition evaluates
to both true and false at least once, as in the following example:

    READ X, Y
    IF (X == 0 || Y == 0)
        PRINT '0'

Test cases:
• TC1: X = 0, Y = 55 (X == 0 is true, Y == 0 is false)
• TC2: X = 5, Y = 0 (X == 0 is false, Y == 0 is true)
4. Multiple Condition Coverage
In this technique, all possible combinations of the outcomes of the individual
conditions are tested at least once. Consider the same example:

    READ X, Y
    IF (X == 0 || Y == 0)
        PRINT '0'

Test cases:
• TC1: X = 0, Y = 0 (true, true)
• TC2: X = 0, Y = 5 (true, false)
• TC3: X = 55, Y = 0 (false, true)
• TC4: X = 55, Y = 5 (false, false)
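A runnable translation of this pseudocode (in Python, enumerating all four outcome
combinations):

    def check(x, y):
        if x == 0 or y == 0:
            return "0"
        return ""

    # The four combinations of (x == 0, y == 0): TT, TF, FT, FF.
    for x, y in [(0, 0), (0, 5), (55, 0), (55, 5)]:
        print(f"x={x}, y={y} -> {check(x, y)!r}")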
5. Basis Path Testing
In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths
so that the minimal number of test cases can be designed for each independent
path. Steps:
• Make the corresponding control flow graph
• Calculate the cyclomatic complexity
• Find the independent paths
• Design test cases corresponding to each independent path
• V(G) = P + 1, where P is the number of predicate nodes in the flow graph
• V(G) = E – N + 2, where E is the number of edges and N is the total number of
nodes
• V(G) = Number of non-overlapping regions in the graph
For the example flow graph (figure not reproduced here), the four independent paths are:
• P1: 1 – 2 – 4 – 7 – 8
• P2: 1 – 2 – 3 – 5 – 7 – 8
• P3: 1 – 2 – 3 – 6 – 7 – 8
• P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
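As a small worked example (hypothetical code, unrelated to the flow graph above): the
function below has two predicate nodes, so V(G) = P + 1 = 3, and three test cases, one
per independent path, suffice:

    def grade(score):
        if score < 0:        # predicate node 1
            return "invalid"
        if score >= 50:      # predicate node 2
            return "pass"
        return "fail"

    # V(G) = 2 + 1 = 3 independent paths, one test case each:
    assert grade(-5) == "invalid"   # predicate 1 true
    assert grade(70) == "pass"      # predicate 1 false, predicate 2 true
    assert grade(30) == "fail"      # predicate 1 false, predicate 2 false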
6. Loop Testing
Loops are widely used and are fundamental to many algorithms; hence, testing them is
very important. Errors often occur at the beginnings and ends of loops.
• Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1, n, and n+1 passes (see the sketch after this list)
• Nested loops: For nested loops, all the loops are set to their minimum count,
and we start from the innermost loop. Simple loop tests are conducted for the
innermost loop and this is worked outwards till all the loops have been tested.
• Concatenated loops: Independent loops, one after another. Simple loop tests
are applied for each. If they’re not independent, treat them like nesting.
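A minimal sketch of the simple-loop test values listed above (for a hypothetical
function whose loop runs up to n = 10 times):

    def sum_first(values, n):
        # Hypothetical function under test: sums at most the first n items.
        total = 0
        for i in range(min(n, len(values))):
            total += values[i]
        return total

    data = list(range(1, 11))          # the loop bound n is 10 here
    # Test values: skip the loop, 1 pass, 2 passes, m < n, and n-1 / n / n+1.
    for passes in (0, 1, 2, 5, 9, 10, 11):
        print(passes, "->", sum_first(data, passes))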
Black Box vs White Box vs Gray Box Testing
Here is a simple comparison of Black Box, White Box, and Gray Box testing,
highlighting key aspects:
• Knowledge of Internal Code:
  Black Box: not required. White Box: required. Gray Box: partially required.
• Other Names:
  Black Box: functional testing, data-driven testing, closed-box testing. White Box:
  structural testing, clear-box testing, code-based testing, transparent testing.
  Gray Box: translucent testing.
• Approach:
  Black Box: trial and error, based on external functionality. White Box: verification
  of internal coding, system boundaries, and data domains. Gray Box: combination of
  both black-box and white-box approaches.
• Test Case Input Size:
  Black Box: largest. White Box: smaller compared to Black Box. Gray Box: smaller than
  both Black Box and White Box.
• Finding Hidden Errors:
  Black Box: difficult. White Box: easier due to internal code access. Gray Box:
  challenging; errors may be found at user level.
• Algorithm Testing:
  Black Box: not suitable. White Box: well-suited and recommended. Gray Box: not suitable.
• Time Consumption:
  Black Box: depends on functional specifications. White Box: high, due to complex code
  analysis. Gray Box: moderate; faster than White Box.
Verification
Verification is the process of checking that the software achieves its goal without any
bugs: it ensures that the product is being built right. It verifies whether the
developed product fulfills the requirements we have specified. Verification is simply
known as Static Testing.
Static Testing
Verification testing is known as Static Testing. It can be simply described as checking
whether we are building the product right, i.e. whether the software conforms to its
specified requirements, without executing the code. Here are some of the activities
involved in verification:
• Inspections
• Reviews
• Walkthroughs
• Desk-checking
Validation
Validation is the process of checking whether the software product is up to the mark,
in other words, whether it meets the high-level requirements. It checks that what we
are developing is the right product, comparing the actual product against the
expected one. Validation is simply known as Dynamic Testing.
Dynamic Testing
Validation testing is known as Dynamic Testing, in which we examine whether we
have developed the right product, one that meets the business needs of the
client. Here are some of the activities involved in validation:
1. Black Box Testing
2. White Box Testing
3. Unit Testing
4. Integration Testing
Note: Verification is followed by Validation.
1. Operational
This category covers the factors that decide how the software performs in
operation. It can be measured in terms of:
• Budget
• Usability
• Efficiency
• Correctness
• Functionality
• Dependability
• Security
• Safety
2. Transitional
When the software is moved from one platform to another, the
factors that decide the software quality are:
• Portability
• Interoperability
• Reusability
• Adaptability
3. Maintenance
This category includes all the factors that describe how well the
software can maintain itself in an ever-changing environment:
• Modularity
• Maintainability
• Flexibility
• Scalability
UNIT – 5
UML-Relationship
Relationships depict a connection between several things, such as
structural, behavioral, or grouping things, in the Unified Modeling
Language. Since a relationship is termed a link, it demonstrates how things
are interrelated to each other at the time of system execution. UML
defines four types of relationships: dependency,
association, generalization, and realization.
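As an illustrative sketch only (hypothetical classes in Python; in UML, realization is
usually drawn between an interface and its implementor), the four relationship types
map roughly onto code as follows:

    from abc import ABC, abstractmethod

    class Engine:
        def start(self):
            print("engine started")

    class Drivable(ABC):               # plays the role of an interface
        @abstractmethod
        def drive(self): ...

    class Vehicle:
        pass

    class Car(Vehicle, Drivable):      # generalization: Car is a Vehicle
        def __init__(self, engine: Engine):
            self.engine = engine       # association: Car has an Engine

        def drive(self):               # realization: Car implements Drivable
            self.engine.start()

    def wash(car: Car):                # dependency: wash uses Car transiently
        print("washing a car")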
Static diagram in UML
Static diagrams describe the state of a system from a
variety of perspectives: a static diagram describes what
a piece of the system is, whereas a dynamic diagram describes
what a portion of the system is doing. There are seven
types of static diagrams: Class, Component, Object, Composite
Structure, Deployment, Package, and Profile.