SE Notes
Layered Technology
Software engineering is a layered technology; to develop software, we move from one layer to the
next. All the layers are connected, and each layer demands the fulfillment of the previous layer.
• Planning: It basically means drawing a map (a plan) to reduce the complexity of development.
• Modeling: In this process, a model is created according to the client's requirements for better understanding.
• Construction: It includes the coding and testing of the solution.
• Deployment: It includes the delivery of the software to the client for evaluation and feedback.
3. Method: During software development, methods provide the answers to all “how-to-do” questions. They
cover all the tasks, including communication, requirements analysis, design modeling, program
construction, testing, and support.
4. Tools: Software engineering tools provide automated or semi-automated support for the process and the
methods. Tools can be integrated so that information created by one tool can be used by another.
Umbrella Activities
Umbrella activities are activities that take place throughout the software development process to improve
project management and tracking.
Software Project Tracking and Control: This activity involves assessing project progress and taking
corrective action to maintain the schedule, ensuring the project stays on track by comparing actual progress
against the plan.
Risk Management: Analyzing potential risks that could impact project outcomes or quality, and taking
measures to mitigate these risks.
Software Quality Assurance: Conducting activities to maintain software quality and ensure the product
meets specified standards.
Formal Technical Reviews: Evaluating engineering work products at each stage of the process to identify
and rectify errors before they progress to the next phase.
Software Configuration Management: Managing the process of configuration when changes occur in the
software, ensuring proper version control and tracking.
Work Product Preparation and Production: Performing activities to create various artifacts such as
models, documents, logs, forms, and lists needed throughout the development process.
Reusability Management: Defining criteria for work product reuse and ensuring that reusable
components are backed up and archived.
Measurement: Defining and collecting process, project, and product metrics to assist the software team in
delivering the required software efficiently and effectively.
Five levels to the CMM development process:
1. Initial. At the initial level, processes are disorganized, ad hoc and even chaotic. Success likely
depends on individual efforts and is not considered to be repeatable. This is because processes are
not sufficiently defined and documented to enable them to be replicated.
2. Repeatable. At the repeatable level, requisite processes are established, defined and documented.
As a result, basic project management techniques are established, and successes in key process
areas are able to be repeated.
3. Defined. At the defined level, an organization develops its own standard software development
process. These defined processes enable greater attention to documentation, standardization and
integration.
4. Managed. At the managed level, an organization monitors and controls its own processes through
data collection and analysis.
5. Optimizing. At the optimizing level, processes are constantly improved through monitoring
feedback from processes and introducing innovative processes and functionality.
Principles of CMM:
1. People's capability is crucial for organizational success.
2. People's capabilities should align with business objectives.
3. Organizations should invest in improving people's capabilities.
4. Management is responsible for enhancing people's capabilities.
5. Improvement in people's capabilities should be a structured process.
6. Organizations should provide opportunities for improvement.
7. Continuous improvement is essential to adapt to evolving technologies and practices.
Importance
1. Optimization of Resources: CMM helps organizations make efficient use of resources such as
money, labor, and time by identifying and eliminating unproductive practices.
2. Comparing and Evaluating: It provides a formal framework for benchmarking and self-
evaluation, allowing organizations to assess their maturity levels, strengths, weaknesses, and
compare their performance against industry best practices.
3. Management of Quality: CMM emphasizes quality management, enabling businesses to apply
best practices for quality assurance and control, thereby improving the quality of their products
and services.
4. Enhancement of Process: CMM offers a systematic approach to evaluate and improve operations,
providing a roadmap for gradual process improvement, which enhances productivity and
efficiency.
5. Increased Output: By simplifying and optimizing processes, CMM aims to boost productivity
without compromising quality, leading to increased output and efficiency as organizations progress
through its levels.
Disadvantages
1. Mission Displacement: In some cases, the focus on achieving higher maturity levels may displace
the true mission of improving processes and overall software quality.
2. Early Implementation Requirement: CMM is most effective when implemented early in the
software development process.
3. Lack of Formal Theoretical Basis: It lacks a formal theoretical basis and relies heavily on the
experience of knowledgeable individuals.
4. Difficulty in Measuring Improvement: It may not accurately measure process improvement as it
relies on self-assessment and may not capture all aspects of the development process.
5. Focus on Documentation Over Outcomes: It may prioritize documentation and adherence to
procedures over actual outcomes such as software quality and customer satisfaction.
6. Not Suitable for All Organizations: It may not be suitable for all organizations, particularly those
with smaller teams or less structured development processes.
7. Lack of Agility: It may not be agile enough to respond quickly to changing business needs or
technological advancements, limiting its usefulness in dynamic environments.
CMM vs. CMMI
Levels of CMMI
There are 5 performance levels of the CMMI Model.
Level 1: Initial: Processes are often ad hoc and unpredictable. There is little or no formal process in place.
Level 2: Managed: Basic project management processes are established. Projects are planned, monitored,
and controlled.
Level 3: Defined: Organizational processes are well-defined and documented. Standardized processes are
used across the organization.
Level 4: Quantitatively Managed: Processes are measured and controlled using statistical and
quantitative techniques. Process performance is quantitatively understood and managed.
Level 5: Optimizing: Continuous process improvement is a key focus. Processes are continuously
improved based on quantitative feedback.
SE Challenges
The problem of producing software to satisfy user needs drives the approaches used in SE.
Problems associated with engineering large-scale industrial-strength software are:
1. scale,
2. productivity,
3. quality,
4. consistency,
5. rate of change
1) Scale
• SE must deal with the problem of scale: industrial-strength software problems tend to be large
• SE methods must be scalable
2) Productivity
• An engineering project is driven by cost and schedule
• Cost: In SE, cost is mainly manpower cost; hence, it is measured in person months.
The person-months cost is converted to money in order to get the monetary value
of the software. The cost of industrial-strength software is normally very high.
• Schedule: This determines the duration of software development, expressed in
months or weeks. It is very important in a business context. The development
duration of industrial-strength software is normally long.
• Productivity (P) captures both cost and schedule
▪ If P is higher, cost is lower
▪ If P is higher, the time taken can be shorter
• Approaches used by SE must deliver high Productivity
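To make the cost–schedule link concrete, here is a minimal sketch (not from the notes; the size, productivity, rate, and team-size figures are made up) showing how a higher productivity P lowers both cost and duration for the same job:

```python
# Illustrative sketch only -- the size, productivity, and rate figures are
# hypothetical, chosen to show how productivity links cost and schedule.

def estimate_effort_and_cost(size_kloc, productivity_kloc_per_pm,
                             cost_per_person_month, team_size):
    """Return (effort in person-months, cost, duration in months)."""
    effort_pm = size_kloc / productivity_kloc_per_pm       # person-months
    cost = effort_pm * cost_per_person_month               # monetary cost
    duration_months = effort_pm / team_size                # calendar time
    return effort_pm, cost, duration_months

# Higher productivity => lower cost and shorter schedule for the same size.
for productivity in (0.5, 1.0, 2.0):                       # KLOC per person-month
    effort, cost, months = estimate_effort_and_cost(
        size_kloc=20, productivity_kloc_per_pm=productivity,
        cost_per_person_month=8000, team_size=4)
    print(f"P={productivity} KLOC/pm -> effort={effort:.0f} pm, "
          f"cost=${cost:,.0f}, duration={months:.1f} months")
```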
3) Quality
• Software quality: the totality of features and characteristics of a software product
that bear on its ability to satisfy stated or implied needs. Developing high-quality
software is a basic goal.
• The approaches used by SE should produce high-quality software
5) Rate of Change
• Software must change to support the changing business needs
• SE practices must accommodate change
▪ Methods that disallow change, even if high Q and P, are of little value
1. Functionality
It is the capability of the software product to provide functions which meet stated and implied
needs when the software is used under specified conditions.
• Suitability: It is the capability of the software product to provide an appropriate set of
functions for specified tasks and user objectives.
• Accuracy: It is the capability of the software product to provide the right or agreed
results or effects with the needed degree of precision.
• Interoperability It is the capability of the software product to interact with one or
more specified systems.
• Security: It is the capability of the software product to protect information and data so
that unauthorized persons or systems cannot read or modify them and authorized
persons or systems are not denied access to them.
• Functionality compliance: It is the capability of the software product to adhere to
standards, conventions or regulations in laws and similar prescriptions relating to
functionality.
2. Reliability
It is the capability of the software product to maintain a specified level of performance when used
under specified conditions.
• Maturity It is the capability of the software product to avoid failures as a result of faults
in the software.
• Fault tolerance It is the capability of the software product to maintain a specified level of
performance in cases of software faults or of infringement of its specified interface.
• Recoverability It is the capability of the software product to re-establish a specified level of
performance and recover the data directly affected in the case of a failure.
• Reliability compliance It is the capability of the software product to adhere to standards,
conventions or regulations relating to reliability
3. Usability
It is the capability of the software product to be understood, learned, used and attractive to the user,
when used under specified conditions.
• Understandability It is the capability of the software product to enable the user to
understand whether the software is suitable, and how it can be used for particular tasks and
conditions of use.
• Learnability The capability of the software product to enable the user to learn its
application
• Operability It is the capability of the software product to enable the user to operate and
control it
• Attractiveness The capability of the software product to be attractive to the user
• Usability compliance It is the capability of the software product to adhere to standards,
conventions, style guides or regulations relating to usability.
4. Efficiency
It is the capability of the software product to provide appropriate performance, relative to the
number of resources used, under stated conditions.
• Time behavior It is the capability of the software product to provide appropriate response
and processing times and throughput rates when performing its function, under stated
conditions
• Resource utilization It is the capability of the software product to use appropriate amounts
and types of resources when the software performs its function under stated conditions.
• Efficiency compliance It is the capability of the software product to adhere to standards and
conventions relating to efficiency.
5. Maintainability
It is the capability of the software product to be modified. Modifications may include corrections,
improvements or adaptation of the software to changes in environment, and in requirements and
functional specifications.
• Analyzability It is the capability of the software product to be diagnosed for deficiencies or
causes of failures in the software, or for the parts to be modified to be identified
• Changeability It is the capability of the software product to enable a specified modification
to be implemented
• Stability It is the capability of the software product to avoid unexpected
effects from modifications of the software.
• Testability It is the capability of the software product to enable modified software to be
validated
• Maintainability compliance It is the capability of the software product to adhere to
standards or conventions relating to maintainability.
6. Portability
It is the capability of the software product to be transferred from one environment to another.
• Adaptability It is the capability of the software product to be adapted for different
specified environments without applying actions or means other than those provided for
this purpose for the software considered.
• Installability It is the capability of the software product to be installed in a specified
environment.
• Co-existence It is the capability of the software product to co-exist with other independent
software in a common environment sharing common resources.
• Replaceability It is the capability of the software product to be used in place of another
specified software product for the same purpose in the same environment.
• Portability compliance It is the capability of the software product to adhere to standards or
conventions relating to portability.
Prescriptive Process Models
These process models include:
• Traditional process models
• Specialized process models
• The unified process
From the generic process framework, modeling represents analysis and design.
Prescriptive Process Model
• Defines a distinct set of activities, actions, tasks, milestones, and work products that are
required to engineer high-quality software
• The activities may be linear, incremental, or evolutionary
Disadvantages of the Waterfall Model
• Doesn't support iteration, so changes can cause confusion
• Difficult for customers to state all requirements explicitly and up front
• Requires customer patience because a working version of the program doesn't occur until
the final phase
• Problems can be somewhat alleviated in the model through the addition of feedback loops
(see the diagram below)
Waterfall Model with Feedback (Diagram)
Incremental Model (Diagram)
Prototyping Model (Description)
• Follows an evolutionary and iterative approach
• Used when requirements are not well understood
• Serves as a mechanism for identifying software requirements
• Focuses on those aspects of the software that are visible to the customer/user
• Feedback is used to refine the prototype
Disadvantages
• The customer sees a "working version" of the software, wants to stop all development and
then buy the prototype after a "few fixes" are made
• Developers often make implementation compromises to get the software running quickly
(e.g., language choice, user interface, operating system choice, inefficient algorithms)
• Lesson learned
– Define the rules up front on the final disposition of the prototype before it is built
– In most circumstances, plan to discard the prototype and engineer the actual production
software with a goal toward quality
Spiral Model
• Outer spirals take on a classical waterfall approach after requirements have been defined,
but permit iterative growth of the software
• Operates as a risk-driven model…a go/no-go decision occurs after each complete spiral in
order to react to risk determinations
• Requires considerable expertise in risk assessment
• Serves as a realistic model for large-scale software development
Phases of the Unified Process
Inception Phase
• Encompasses both customer communication and planning activities of the generic process
• Business requirements for the software are identified
• A rough architecture for the system is proposed
• A plan is created for an incremental, iterative development
• Fundamental business requirements are described through preliminary use cases
▪ A use case describes a sequence of actions that are performed by a user
Elaboration Phase
• Encompasses both the planning and modeling activities of the generic process
• Refines and expands the preliminary use cases
• Expands the architectural representation to include five views
• Use-case model
• Analysis model
• Design model
• Implementation model
• Deployment model
• Often results in an executable architectural baseline that represents a first cut executable
system
• The baseline demonstrates the viability of the architecture but does not provide all
features and functions required to use the system
Construction Phase
• Encompasses the construction activity of the generic process
• Uses the architectural model from the elaboration phase as input
• Develops or acquires the software components that make each use-case operational
• Analysis and design models from the previous phase are completed to reflect the final
version of the increment
• Use cases are used to derive a set of acceptance tests that are executed prior to the next
phase
Transition Phase
• Encompasses the last part of the construction activity and the first part of the deployment
activity of the generic process
• Software is given to end users for beta testing and user feedback reports on defects and
necessary changes
• The software teams create necessary support documentation (user manuals, troubleshooting
guides, installation procedures)
• At the conclusion of this phase, the software increment becomes a usable software release
Production Phase
• Encompasses the last part of the deployment activity of the generic process
• On-going use of the software is monitored
• Support for the operating environment (infrastructure) is provided
• Defect reports and requests for changes are submitted and evaluated
Management process
▪ The people
▪ Deals with the cultivation of motivated, highly skilled people
▪ Consists of the stakeholders, the team leaders, and the software team
▪ The product
▪ Product objectives and scope should be established before a project can be planned
▪ The process
▪ The software process provides the framework from which a comprehensive plan for software
development can be established
▪ The project
▪ Planning and controlling a software project is done for one primary reason…it is the only
known way to manage complexity
The People: The Software Team
• Seven project factors to consider when structuring a software development team
▪ The difficulty of the problem to be solved
▪ The size of the resultant program(s) in source lines of code
▪ The time that the team will stay together
▪ The degree to which the problem can be modularized
▪ The required quality and reliability of the system to be built
▪ The rigidity of the delivery date
▪ The degree of sociability (communication) required for the project
• Four organizational paradigms for software development teams
▪ Closed paradigm – traditional hierarchy of authority; works well when producing
software similar to past efforts; members are less likely to be innovative
▪ Random paradigm – depends on individual initiative of team members; works well for
projects requiring innovation or technological breakthrough; members may struggle when
orderly performance is required
▪ Open paradigm – hybrid of the closed and random paradigm; works well for solving
complex problems; requires collaboration, communication, and consensus among
members
▪ Synchronous paradigm – organizes team members based on the natural pieces of the
problem; members have little communication outside of their subgroups
• Five factors that cause team toxicity (i.e., a toxic team environment)
▪ A frenzied work atmosphere
▪ High frustration that causes friction among team members
▪ A fragmented or poorly coordinated software process
▪ An unclear definition of roles on the software team
▪ Continuous and repeated exposure to failure
• How to avoid these problems
▪ Give the team access to all information required to do the job
▪ Do not modify major goals and objectives, once they are defined, unless absolutely
necessary
▪ Give the team as much responsibility for decision making as possible
▪ Let the team recommend its own process model
▪ Let the team establish its own mechanisms for accountability (i.e., reviews)
▪ Establish team-based techniques for feedback and problem solving
The Product
• The scope of the software development must be established and bounded
▪ Context – How does the software to be built fit into a larger system, product, or business
context, and what constraints are imposed as a result of the context?
▪ Information objectives – What customer-visible data objects are produced as output from
the software? What data objects are required for input?
▪ Function and performance – What functions does the software perform to transform input
data into output? Are there any special performance characteristics to be addressed?
• Software project scope must be unambiguous and understandable at both the managerial and
technical levels
• Problem decomposition
▪ Also referred to as partitioning or problem elaboration
▪ Sits at the core of software requirements analysis
• Two major areas of problem decomposition
▪ The functionality that must be delivered
▪ The process that will be used to deliver it
The Process
• The project manager must decide which process model is most appropriate based on
▪ The customers who have requested the product and the people who will do the work
▪ The characteristics of the product itself
▪ The project environment in which the software team works
• Once a process model is selected, a preliminary project plan is established based on the process
framework activities
• Process decomposition then begins
• The result is a complete plan reflecting the work tasks required to populate the framework
activities
• Project planning begins as a melding of the product and the process based on the various
framework activities
The Project: Signs that it is in Jeopardy
• Software people don't understand their customer's needs
• The product scope is poorly defined
• Changes are managed poorly
• The chosen technology changes
• Business needs change (or are poorly defined)
• Deadlines are unrealistic
• Users are resistant
• Sponsorship is lost (or was never properly obtained)
• The project team lacks people with appropriate skills
• Managers (and practitioners) avoid best practices and lessons learned
Project Planning
• Software project planning encompasses five major activities
▪ Estimation, scheduling, risk analysis, quality management planning, and change
management planning
• Estimation determines how much money, effort, resources, and time it will take to build a specific
system or product
• The software team first estimates
▪ The work to be done
▪ The resources required
▪ The time that will elapse from start to finish
• Then they establish a project schedule that
▪ Defines tasks and milestones
▪ Identifies who is responsible for conducting each task
▪ Specifies the inter-task dependencies
Observations on Estimation
• Planning requires technical managers and the software team to make an initial commitment
• Process and project metrics can provide a historical perspective and valuable input for generation
of quantitative estimates
• Past experience can aid greatly
• Estimation carries inherent risk, and this risk leads to uncertainty
• The availability of historical information has a strong influence on estimation risk
• When software metrics are available from past projects
▪ Estimates can be made with greater assurance
▪ Schedules can be established to avoid past difficulties
▪ Overall risk is reduced
• Estimation risk is measured by the degree of uncertainty in the quantitative estimates for cost,
schedule, and resources
• Nevertheless, a project manager should not become obsessive about estimation
• Plans should be iterative and allow adjustments as time passes and more becomes certain
Software Scope
• Software scope describes
▪ The functions and features that are to be delivered to end users
▪ The data that are input to and output from the system
▪ The "content" that is presented to users as a consequence of using the software
▪ The performance, constraints, interfaces, and reliability that bound the system
• Scope can be defined using two techniques
▪ A narrative description of software scope is developed after communication with all
stakeholders
▪ A set of use cases is developed by end users
• After the scope has been identified, two questions are asked
▪ Can we build software to meet this scope?
▪ Is the project feasible?
• Software engineers too often rush (or are pushed) past these questions
• Later they become mired in a project that is doomed from the onset
Feasibility
• After the scope is resolved, feasibility is addressed
• Software feasibility has four dimensions
▪ Technology – Is the project technically feasible? Is it within the state of the art? Can defects
be reduced to a level matching the application's needs?
▪ Finance – Is it financially feasible? Can development be completed at a cost that the
software organization, its client, or the market can afford?
▪ Time – Will the project's time-to-market beat the competition?
▪ Resources – Does the software organization have the resources needed to succeed in doing
the project?
Project Resources
Resource Estimation
• Three major categories of software engineering resources are:
▪ People
▪ Development environment
▪ Reusable software components
Categories of Resources
Human Resources
• Planners need to determine the number of people and the kinds of skills needed to complete the project
• They need to specify the organizational position and job specialty for each person
• Small projects of a few person-months may only need one individual
• Large projects spanning many person-months or years require the location of the person to be
specified also
• The number of people required can be determined only after an estimate of the development effort is made
Reusable Software Resources
• Off-the-shelf components
▪ Components are from a third party or were developed for a previous project
▪ Ready to use; fully validated and documented; virtually no risk
• Full-experience components
▪ Components are similar to the software that needs to be built
▪ Software team has full experience in the application area of these components
▪ Modification of components will incur relatively low risk
• Partial-experience components
▪ Components are related somehow to the software that needs to be built but will require
substantial modification
▪ Software team has only limited experience in the application area of these components
▪ Modifications that are required have a fair degree of risk
• New components
▪ Components must be built from scratch by the software team specifically for the needs of
the current project
▪ Software team has no practical experience in the application area
▪ Software development of components has a high degree of risk
Project Estimation Approaches
• Decomposition techniques
▪ These take a "divide and conquer" approach
▪ Cost and effort estimation are performed in a stepwise fashion by breaking down a project
into major functions and related software engineering activities
• Empirical estimation models
▪ Offer a potentially valuable estimation approach if the historical data used to seed the
estimate is good
Problem-Based Estimation
1) Start with a bounded statement of scope
2) Decompose the software into problem functions that can each be estimated individually
3) Compute an LOC or FP value for each function
4) Derive cost or effort estimates by applying the LOC or FP values to your baseline productivity
metrics (e.g., LOC/person-month or FP/person-month)
5) Combine function estimates to produce an overall estimate for the entire project
6) In general, the LOC/pm and FP/pm metrics should be computed by project domain
▪ Important factors are team size, application area, and complexity
7) LOC and FP estimation differ in the level of detail required for decomposition with each value
▪ For LOC, decomposition of functions is essential and should go into considerable detail
(the more detail, the more accurate the estimate)
▪ For FP, decomposition occurs for the five information domain characteristics (external
inputs, external outputs, external inquiries, internal logical files, and external interface
files) and the 14 adjustment factors
8) For both approaches, the planner uses lessons learned to estimate an optimistic, most likely, and
pessimistic size value for each function or count (for each information domain value)
9) Then the expected size value S is computed as a weighted average:
S = (Sopt + 4 x Sm + Spess) / 6
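A small sketch of problem-based (LOC) estimation using the expected-size formula above; the function list, size estimates, productivity baseline, and labor rate are all hypothetical:

```python
# Hypothetical function-level LOC estimates: (optimistic, most likely, pessimistic).
functions = {
    "user interface":      (1800, 2400, 2650),
    "database management": (4600, 6900, 8600),
    "report generation":   (1200, 1600, 2100),
}

def expected_size(opt, likely, pess):
    # S = (Sopt + 4*Sm + Spess) / 6  (weighted average)
    return (opt + 4 * likely + pess) / 6

total_loc = sum(expected_size(*est) for est in functions.values())

# Baseline productivity and labor rate are assumptions for illustration.
productivity_loc_per_pm = 620          # LOC per person-month (historical baseline)
cost_per_pm = 8000                     # cost per person-month

effort_pm = total_loc / productivity_loc_per_pm
print(f"Estimated size:   {total_loc:,.0f} LOC")
print(f"Estimated effort: {effort_pm:.1f} person-months")
print(f"Estimated cost:   ${effort_pm * cost_per_pm:,.0f}")
```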
Process-Based Estimation
1) Identify the set of functions that the software needs to perform as obtained from the project
scope
2) Identify the series of framework activities that need to be performed for each function
3) Estimate the effort (in person months) that will be required to accomplish each software process
activity for each function
4) Apply average labor rates (i.e., cost/unit effort) to the effort estimated for each process activity
5) Compute the total cost and effort for each function and each framework
6) Compare the resulting values to those obtained by way of the LOC and FP estimates
• If both sets of estimates agree, then your numbers are highly reliable
• Otherwise, conduct further investigation and analysis concerning the function
and activity breakdown
Reconciling Estimates
• The results gathered from the various estimation techniques must be reconciled to produce a
single estimate of effort, project duration, and cost
• If widely divergent estimates occur, investigate the following causes
• The scope of the project is not adequately understood or has been misinterpreted by the
planner
• Productivity data used for problem-based estimation techniques is inappropriate for the
application, obsolete (i.e., outdated for the current organization), or has been misapplied
• The planner must determine the cause of divergence and then reconcile the estimates
Introduction
• Estimation models for computer software use empirically derived formulas to predict effort as a
function of LOC or FP
• Resultant values computed for LOC or FP are entered into an estimation model
• The empirical data for these models are derived from a limited sample of projects
▪ Consequently, the models should be calibrated to reflect local software development
conditions
COCOMO
• Stands for COnstructive COst MOdel
• Introduced by Barry Boehm in 1981 in his book “Software Engineering Economics”
• Became one of the well-known and widely-used estimation models in the industry
• It has evolved into a more comprehensive estimation model called COCOMO II
• COCOMO II is actually a hierarchy of three estimation models
• As with all estimation models, it requires sizing information and accepts it in three forms: object
points, function points, and lines of source code
COCOMO Models
• Application composition model - Used during the early stages of software engineering when
the following are important
▪ Prototyping of user interfaces
▪ Consideration of software and system interaction
▪ Assessment of performance
▪ Evaluation of technology maturity
• Early design stage model – Used once requirements have been stabilized and basic software
architecture has been established
• Post-architecture stage model – Used during the construction of the software
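The notes do not reproduce the COCOMO equations, so as a hedged illustration the sketch below uses the original basic COCOMO (1981) effort and duration equations rather than the COCOMO II submodels listed above; the 32-KLOC size is a made-up input:

```python
# Sketch of the original (1981) basic COCOMO equations, not COCOMO II:
#   Effort   E = a * (KLOC)^b   (person-months)
#   Duration D = c * (E)^d      (months)
# Coefficients by project class (Boehm, 1981):
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class="organic"):
    a, b, c, d = COEFFS[project_class]
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # months
    return effort, duration

# Example: a 32-KLOC organic project (size figure is hypothetical).
effort, duration = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} person-months, Duration: {duration:.1f} months, "
      f"Avg staff: {effort / duration:.1f} people")
```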
Make/Buy Decision
• It is often more cost effective to acquire rather than develop software
• Managers have many acquisition options
▪ Software may be purchased (or licensed) off the shelf
▪ “Full-experience” or “partial-experience” software components may be acquired and
integrated to meet specific needs
▪ Software may be custom built by an outside contractor to meet the purchaser’s
specifications
• The make/buy decision can be made based on the following conditions
▪ Will the software product be available sooner than internally developed software?
▪ Will the cost of acquisition plus the cost of customization be less than the cost of
developing the software internally?
▪ Will the cost of outside support (e.g., a maintenance contract) be less than the cost of
internal support?
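One common way to work the make/buy decision is to compare the expected cost of each option; the sketch below is illustrative only, and the options, probabilities, and costs are invented:

```python
# Minimal sketch of comparing acquisition options by expected cost.
# Each option lists (probability, estimated cost) outcomes; figures are hypothetical.
options = {
    "build in-house":        [(0.30, 380_000), (0.70, 450_000)],   # simple vs. difficult
    "buy + customize":       [(0.40, 210_000), (0.60, 310_000)],   # minor vs. major changes
    "contract to 3rd party": [(0.60, 350_000), (0.40, 500_000)],   # without/with changes
}

def expected_cost(outcomes):
    return sum(p * cost for p, cost in outcomes)

for name, outcomes in options.items():
    print(f"{name:22s} expected cost = ${expected_cost(outcomes):,.0f}")

best = min(options, key=lambda name: expected_cost(options[name]))
print("Lowest expected cost:", best)
```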
Project Scheduling
- Introduction
- Project scheduling
- Task network
- Timeline chart
- Earned value analysis
Introduction
Eight Reasons for Late Software Delivery
• An unrealistic deadline established by someone outside the software engineering group and
forced on managers and practitioners within the group
• Changing customer requirements that are not reflected in schedule changes
• An honest underestimate of the amount of effort and /or the number of resources that will be
required to do the job
• Predictable and/or unpredictable risks that were not considered when the project commenced
• Technical difficulties that could not have been foreseen in advance
• Human difficulties that could not have been foreseen in advance
• Miscommunication among project staff that results in delays
• A failure by project management to recognize that the project is falling behind schedule and a
lack of action to correct the problem
Dealing with an Unrealistic Deadline
• Meet with the customer and (using the detailed estimate) explain why the imposed deadline is
unrealistic
▪ Be certain to note that all estimates are based on performance on past projects
▪ Also be certain to indicate the percent improvement that would be required to achieve
the deadline as it currently exists
• Offer the incremental development strategy as an alternative and present some options
▪ Increase the budget and bring on additional resources to try to finish sooner
▪ Remove many of the software functions and capabilities that were requested
▪ Dispense with reality and wish the project complete using the prescribed schedule;
then point out that project history and your estimates show that this is unrealistic and
will result in a disaster
Scheduling Principles
• Compartmentalization
▪ The project must be compartmentalized into a number of manageable activities, actions, and tasks
• Interdependency
▪ The interdependency of each compartmentalized activity, action, or task must be
determined
▪ Some tasks must occur in sequence while others can occur in parallel
▪ Some actions or activities cannot commence until the work product produced by another is
available
• Time allocation
▪ Each task to be scheduled must be allocated some number of work units
▪ In addition, each task must be assigned a start date and a completion date that are a
function of the interdependencies
▪ Start and stop dates are also established based on whether work will be conducted on a
full-time or part-time basis
• Effort validation
▪ Every project has a defined number of people on the team
▪ As time allocation occurs, the project manager must ensure that no more than the allocated
number of people have been scheduled at any given time
• Defined responsibilities
▪ Every task that is scheduled should be assigned to a specific team member
• Defined outcomes
▪ Every task that is scheduled should have a defined outcome for software projects such as a
work product or part of a work product
▪ Work products are often combined in deliverables
• Defined milestones
▪ Every task or group of tasks should be associated with a project milestone
▪ A milestone is accomplished when one or more work products has been reviewed for
quality and has been approved
Adding People to a Late Project
▪ New people must first be taught the work; the people who teach them are the same people
who were earlier doing the work
▪ During teaching, no work is being accomplished
▪ Lines of communication (and the inherent delays) increase for each new person added
Task Network
• Points out inter-task dependencies to help the manager ensure continuous progress toward project
completion
• The critical path
▪ A single path leading from start to finish in a task network
▪ It contains the sequence of tasks that must be completed on schedule if the project as a
whole is to be completed on schedule
▪ It also determines the minimum duration of the project
Timeline Chart
Mechanics of a Timeline Chart
• Also called a Gantt chart; invented by Henry Gantt, industrial engineer, 1917
• All project tasks are listed in the far left column
• The next few columns may list the following for each task: projected start date, projected stop
date, projected duration, actual start date, actual stop date, actual duration, task interdependencies
(i.e., predecessors)
• To the far right are columns representing dates on a calendar
• The length of a horizontal bar on the calendar indicates the duration of the task
• When multiple bars occur at the same time interval on the calendar, this implies task concurrency
• A diamond in the calendar area of a specific task indicates that the task is a milestone; a milestone
has a time duration of zero
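A toy sketch that prints a text-only timeline chart from a hypothetical task list; it is only meant to show how bar length encodes duration and how overlapping bars show concurrency:

```python
# Toy text rendering of a timeline (Gantt) chart; tasks and dates are made up.
# Each task: (name, start_week, duration_weeks).
tasks = [
    ("Requirements", 1, 3),
    ("Design",       3, 4),
    ("Coding",       6, 5),
    ("Testing",      9, 4),
]

total_weeks = max(start + dur - 1 for _, start, dur in tasks)
header = "Task          " + "".join(f"{w:>3}" for w in range(1, total_weeks + 1))
print(header)
for name, start, dur in tasks:
    row = ""
    for week in range(1, total_weeks + 1):
        row += "  #" if start <= week < start + dur else "  ."
    print(f"{name:<14}{row}")
# Overlapping '#' bars in the same week indicate concurrent tasks.
```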
Methods for Tracking the Schedule
• Qualitative approaches
▪ Conduct periodic project status meetings in which each team member reports progress
and problems
▪ Evaluate the results of all reviews conducted throughout the software engineering process
▪ Determine whether formal project milestones (i.e., diamonds) have been accomplished by
the scheduled date
▪ Compare actual start date to planned start date for each project task listed in the timeline
chart
▪ Meet informally with the software engineering team to obtain their subjective assessment
of progress to date and problems on the horizon
• Quantitative approach
▪ Use earned value analysis to assess progress quantitatively
• Because the object-oriented process is an iterative process, each of these milestones may be
revisited as different increments are delivered to the customer
• CV = BCWP – ACWP
- The cost variance is an absolute indication of cost savings (against planned costs) or shortfall at a
particular stage of a project
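The surviving notes show only the cost variance, so the sketch below adds the other standard earned-value indicators; the acronym expansions (BCWS = budgeted cost of work scheduled, BCWP = budgeted cost of work performed, ACWP = actual cost of work performed) and the SPI/SV/CPI formulas are the conventional earned-value definitions, and the numbers are made up:

```python
# Sketch of standard earned-value indicators (values are hypothetical).
bcws = 120.0   # budgeted cost of work scheduled (person-days planned by now)
bcwp = 105.0   # budgeted cost of work performed (earned value)
acwp = 130.0   # actual cost of work performed (person-days actually spent)

spi = bcwp / bcws          # schedule performance index (< 1.0 means behind schedule)
sv  = bcwp - bcws          # schedule variance
cpi = bcwp / acwp          # cost performance index (< 1.0 means over budget)
cv  = bcwp - acwp          # cost variance (as in the notes above)

print(f"SPI = {spi:.2f}, SV = {sv:+.1f}, CPI = {cpi:.2f}, CV = {cv:+.1f}")
```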
Risk Management
- Introduction
- Risk identification
- Risk projection (estimation)
- Risk mitigation, monitoring, and management
• Known risks
▪ Those risks that can be uncovered after careful evaluation of the project plan, the business and
technical environment in which the project is being developed, and other reliable information
sources (e.g., an unrealistic delivery date)
• Predictable risks
▪ Those risks that are extrapolated from past project experience (e.g., past turnover)
• Unpredictable risks
▪ Those risks that can and do occur, but are extremely difficult to identify in advance
– A list containing a set of risk components and drivers along with their probability of occurrence
Risk Projection (Estimation)
Background
• Risk projection (or estimation) attempts to rate each risk in two ways
▪ The probability that the risk is real
▪ The consequence of the problems associated with the risk, should it occur
• The project planner, managers, and technical staff perform four risk projection steps
• The intent of these steps is to consider risks in a manner that leads to prioritization
• By prioritizing risks, the software team can allocate limited resources where they will have the
most impact
Assessing Risk Impact
• Three factors affect the consequences that are likely if a risk does occur
– Its nature – This indicates the problems that are likely if the risk occurs
– Its scope – This combines the severity of the risk (how serious is it) with its overall
distribution (how much of the project will be affected)
– Its timing – This considers when and for how long the impact will be felt
• The overall risk exposure formula is RE = P x C
– P = the probability of occurrence for a risk
– C = the cost to the project should the risk actually occur
• Example
– P = 80% probability that 18 of 60 software components will have to be developed
– C = Total cost of developing 18 components is $25,000
– RE = .80 x $25,000 = $20,000
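A small sketch that tabulates RE = P x C over a hypothetical risk table (the first entry reuses the example above):

```python
# Sketch of tabulating risk exposure RE = P x C for a small, hypothetical risk table.
risks = [
    # (description, probability, cost if it occurs)
    ("Reusable components must be built from scratch", 0.80, 25_000),
    ("Key staff turnover during construction",         0.35, 40_000),
    ("Customer changes core requirements",             0.50, 18_000),
]

# Print risks ordered by exposure so the team sees the biggest threats first.
for desc, p, c in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"RE = ${p * c:>9,.0f}  (P={p:.0%}, C=${c:,})  {desc}")

print("Total risk exposure:", f"${sum(p * c for _, p, c in risks):,.0f}")
```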
Background
• An effective strategy for dealing with risk must consider three issues
(Note: these are not mutually exclusive)
▪ Risk mitigation (i.e., avoidance)
▪ Risk monitoring
▪ Risk management and contingency planning
• Risk mitigation (avoidance) is the primary strategy and is achieved through a plan
▪ Example: Risk of high staff turnover
• During risk monitoring, the project manager monitors factors that may provide an indication of
whether a risk is becoming more or less likely
• Risk management and contingency planning assume that mitigation efforts have failed and that
the risk has become a reality
• RMMM steps incur additional project cost
– Large projects may have identified 30 – 40 risks
• Risk is not limited to the software project itself
– Risks can occur after the software has been delivered to the user
• Software safety and hazard analysis
– These are software quality assurance activities that focus on the identification and
assessment of potential hazards that may affect software negatively and cause an entire
system to fail
– If hazards can be identified early in the software process, software design features can be
specified that will either eliminate or control potential hazards
Quality Management
- Quality concepts
- Software quality assurance
- Software reviews
- Statistical software quality assurance
- Software reliability, availability, and safety
- SQA plan
Quality Concepts
What is Quality Management
• Also called software quality assurance (SQA)
• Serves as an umbrella activity that is applied throughout the software process
• Involves doing the software development correctly versus doing it over again
• Reduces the amount of rework, which results in lower costs and improved time to market
• Encompasses
– A software quality assurance process
– Specific quality assurance and quality control tasks (including formal technical reviews
and a multi-tiered testing strategy)
– Effective software engineering practices (methods and tools)
– Control of all software work products and the changes made to them
– A procedure to ensure compliance with software development standards
– Measurement and reporting mechanisms
Quality Defined
• Defined as a characteristic or attribute of something
• Refers to measurable characteristics that we can compare to known standards
• In software it involves such measures as cyclomatic complexity, cohesion, coupling, function
points, and source lines of code
• Includes variation control
– A software development organization should strive to minimize the variation between the
predicted and the actual values for cost, schedule, and resources
– They should make sure their testing program covers a known percentage of the software
from one release to another
– One goal is to ensure that the variance in the number of bugs is also minimized from one
release to another
• Two kinds of quality are sought out
– Quality of design
• The characteristic that designers specify for an item
• This encompasses requirements, specifications, and the design of the system
– Quality of conformance (i.e., implementation)
• The degree to which the design specifications are followed during manufacturing
• This focuses on how well the implementation follows the design and how well the resulting
system meets its requirements
• Quality also can be looked at in terms of user satisfaction
Quality Control
• Involves a series of inspections, reviews, and tests used throughout the software process
• Ensures that each work product meets the requirements placed on it
• Includes a feedback loop to the process that created the work product
– This is essential in minimizing the errors produced
• Combines measurement and feedback in order to adjust the process when product specifications
are not met
• Requires all work products to have defined, measurable specifications to which practitioners may
compare to the output of each process
Software Quality Defined
• Definition: conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected of all
professionally developed software
• This definition emphasizes three points
– Software requirements are the foundation from which quality is measured; lack of
conformance to requirements is lack of quality
– Specified standards define a set of development criteria that guide the manner in which
software is engineered; if the criteria are not followed, lack of quality will almost surely
result
– A set of implicit requirements often goes unmentioned; if software fails to meet implicit
requirements, software quality is suspect
• Software quality is no longer the sole responsibility of the programmer
– It extends to software engineers, project managers, customers, salespeople, and the SQA
group
– Software engineers apply solid technical methods and measures, conduct formal technical
reviews, and perform well-planned software testing
SQA Activities
• Prepares an SQA plan for a project
• Participates in the development of the project's software process description
• Reviews software engineering activities to verify compliance with the defined software process
• Audits designated software work products to verify compliance with those defined as part of the
software process
• Ensures that deviations in software work and work products are documented and handled
according to a documented procedure
• Records any noncompliance and reports to senior management
• Coordinates the control and management of change
• Helps to collect and analyze software metrics
Software Reviews
Purpose of Reviews
• Serve as a filter for the software process
• Are applied at various points during the software process
• Uncover errors that can then be removed
• Purify the software analysis, design, coding, and testing activities
• Catch large classes of errors that escape the originator more than other practitioners
• Include the formal technical review (also called a walkthrough or inspection)
– Acts as the most effective SQA filter
– Conducted by software engineers for software engineers
– Effectively uncovers errors and improves software quality
– Has been shown to be up to 75% effective in uncovering design flaws (which constitute
50-65% of all errors in software)
• Require the software engineers to expend time and effort, and the organization to cover the costs
The FTR Review Meeting
– One of the reviewers also serves as the recorder for all issues and decisions concerning
the product
– After a brief introduction by the review leader, the producer proceeds to "walk through"
the work product while reviewers ask questions and raise issues
– The recorder notes any valid problems or errors that are discovered; no time or effort is
spent in this meeting to solve any of these problems or errors
• Activities at the conclusion of the meeting
– All attendees must decide whether to
• Accept the product without further modification
• Reject the product due to severe errors (After these errors are corrected, another review will then
occur)
• Accept the product provisionally (Minor errors need to be corrected but no additional review is
required)
– All attendees then complete a sign-off in which they indicate that they took part in the
review and that they concur with the findings
• Activities following the meeting
– The recorder produces a list of review issues that
• Identifies problem areas within the product
• Serves as an action item checklist to guide the producer in making corrections
– The recorder includes the list in an FTR summary report
• This one to two-page report describes what was reviewed, who reviewed it, and what were the
findings and conclusions
– The review leader follows up on the findings to ensure that the producer makes the
requested corrections
FTR Guidelines
1) Review the product, not the producer
2) Set an agenda and maintain it
3) Limit debate and rebuttal; conduct in-depth discussions off-line
4) Enunciate problem areas, but don't attempt to solve the problem noted
5) Take written notes; utilize a wall board to capture comments
6) Limit the number of participants and insist upon advance preparation
7) Develop a checklist for each product in order to structure and focus the review
8) Allocate resources and schedule time for FTRs
9) Conduct meaningful training for all reviewers
10) Review your earlier reviews to improve the overall review process
A Sample of Possible Causes for Defects
• Incomplete or erroneous specifications
• Misinterpretation of customer communication
• Intentional deviation from specifications
• Violation of programming standards
• Errors in data representation
• Inconsistent component interface
• Errors in design logic
• Incomplete or erroneous testing
• Inaccurate or incomplete documentation
• Errors in programming language translation of design
• Ambiguous or inconsistent human/computer interface
Six Sigma
• Popularized by Motorola in the 1980s
• Is the most widely used strategy for statistical quality assurance
• Uses data and statistical analysis to measure and improve a company's operational performance
• Identifies and eliminates defects in manufacturing and service-related processes
• "Six Sigma" refers to six standard deviations (3.4 defects per million occurrences)
• The methodology defines three core steps:
– Define customer requirements, deliverables, and project goals via well-defined methods
of customer communication
– Measure the existing process and its output to determine current quality performance
(collect defect metrics)
– Analyze defect metrics and determine the vital few causes (the 20%)
• Two additional steps are added for existing processes (and can be done in parallel)
– Improve the process by eliminating the root causes of defects
– Control the process to ensure that future work does not reintroduce the causes of defects
• All of these steps need to be performed so that you can manage the process effectively
• You cannot effectively manage and improve a process until you first do these steps (in this
order): define, measure, and analyze
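As a side illustration, the sketch below computes defects per million opportunities (DPMO), the metric behind the "3.4 defects per million" figure; the inspection counts are invented:

```python
# Sketch of computing defects per million opportunities (DPMO).
# The inspection figures below are hypothetical.
defects_found = 9
units_inspected = 250
opportunities_per_unit = 15    # places in each unit where a defect could occur

dpmo = defects_found / (units_inspected * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:,.0f} (Six Sigma target: 3.4)")
```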
Software Reliability, Availability, and Safety
Reliability and Availability
• Software failure
– Defined: Nonconformance to software requirements
– Given a set of valid requirements, all software failures can be traced to design or
implementation problems (i.e., nothing wears out like it does in hardware)
• Software reliability
– Defined: The probability of failure-free operation of a software application in a specified
environment for a specified time
– Estimated using historical and development data
– A simple measure is MTBF = MTTF + MTTR (uptime + downtime)
– Example:
• MTBF = 68 days + 3 days = 71 days
• Failures per 100 days = (1/71) * 100 ≈ 1.4
• Software availability
– Defined: The probability that a software application is operating according to
requirements at a given point in time
– Availability = [MTTF / (MTTF + MTTR)] * 100%
– Example:
▪ Avail. = [68 days / (68 days + 3 days)] * 100% ≈ 96%
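A minimal sketch that recomputes the MTBF, failure rate, and availability figures from the example above:

```python
# Sketch recomputing the reliability and availability figures from the notes.
mttf_days = 68.0    # mean time to failure
mttr_days = 3.0     # mean time to repair

mtbf = mttf_days + mttr_days                                # 71 days
failures_per_100_days = (1 / mtbf) * 100                    # ~1.4
availability = mttf_days / (mttf_days + mttr_days) * 100    # ~96%

print(f"MTBF = {mtbf:.0f} days")
print(f"Failures per 100 days = {failures_per_100_days:.1f}")
print(f"Availability = {availability:.0f}%")
```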
Software Safety
• Focuses on identification and assessment of potential hazards to software operation
• It differs from software reliability
– Software reliability uses statistical analysis to determine the likelihood that a software
failure will occur; however, the failure may not necessarily result in a hazard or mishap
– Software safety examines the ways in which failures result in conditions that can lead to a
hazard or mishap; it identifies faults that may lead to failures
• Software failures are evaluated in the context of an entire computer-based system and its
environment through the process of fault tree analysis or hazard analysis
SQA Plan
Purpose and Layout
• Provides a road map for instituting software quality assurance within an organization
• Developed by the SQA group to serve as a template for SQA activities that are instituted for each
software project in an organization
• Structured as follows:
– The purpose and scope of the plan
– A description of all software engineering work products that fall within the purview of
SQA
– All applicable standards and practices that are applied during the software process
– SQA actions and tasks (including reviews and audits) and their placement throughout the
software process
– The tools and methods that support SQA actions and tasks
– Methods for assembling, safeguarding, and maintaining all SQA-related records
– Organizational roles and responsibilities relative to product quality
Change Management
- Introduction
- SCM repository
- The SCM process
Introduction
What is Change Management
• Also called software configuration management (SCM)
• It is an umbrella activity that is applied throughout the software process
• Its goal is to maximize productivity by minimizing mistakes caused by confusion when
coordinating software development
• SCM identifies, organizes, and controls modifications to the software being built by a software
development team
• SCM activities are formulated to identify change, control change, ensure that change is being
properly implemented, and report changes to others who may have an interest
• SCM is initiated when the project begins and terminates when the software is taken out of
operation
• View of SCM from various roles
• Project manager -> an auditing mechanism
• SCM manager -> a controlling, tracking, and policy making mechanism
• Software engineer -> a changing, building, and access control mechanism
• Customer -> a quality assurance and product identification mechanism
Software Configuration
• The output from the software process makes up the software configuration, which includes:
– Computer programs (both source code files and executable files)
– Work products that describe the computer programs (documents targeted at both
technical practitioners and users)
– Data (contained within the programs themselves or in external files)
• The major danger to a software configuration is change
– First Law of System Engineering: "No matter where you are in the system life cycle, the
system will change, and the desire to change it will persist throughout the life cycle"
Baseline
• An SCM concept that helps practitioners to control change without seriously impeding justifiable
change
• IEEE Definition: A specification or product that has been formally reviewed and agreed upon,
and that thereafter serves as the basis for further development, and that can be changed only
through formal change control procedures
• It is a milestone in the development of software and is marked by the delivery of one or more
computer software configuration items (CSCIs) that have been approved as a consequence of a
formal technical review
• A CSCI may be such work products as a document (as listed in MIL-STD-498), a test suite, or a
software component
Baselining Process
1) A series of software engineering tasks produces a CSCI
2) The CSCI is reviewed and possibly approved
3) The approved CSCI is given a new version number and placed in a project database (i.e., software
repository)
4) A copy of the CSCI is taken from the project database and examined/modified by a software
engineer
5) The baselining of the modified CSCI goes back to Step #2
• Information sharing
– Shares information among developers and tools, manages and controls multi-user access
• Tool integration
– Establishes a data model that can be accessed by many software engineering tools,
controls access to the data
• Data integration
– Allows various SCM tasks to be performed on one or more CSCIs
• Methodology enforcement
– Defines an entity-relationship model for the repository that implies a specific process
model for software engineering
• Document standardization
– Defines objects in the repository to guarantee a standard approach for creation of
software engineering documents
SCM Questions
• How does a software team identify the discrete elements of a software configuration?
• How does an organization manage the many existing versions of a program (and its
documentation) in a manner that will enable change to be accommodated efficiently?
• How does an organization control changes before and after software is released to a customer?
• Who has responsibility for approving and ranking changes?
• How can we ensure that changes have been made properly?
• What mechanism is used to apprise others of changes that are made?
SCM Tasks
Identification Task
• Identification separately names each CSCI and then organizes it in the SCM repository using an
object-oriented approach
• Objects start out as basic objects and are then grouped into aggregate objects
• Each object has a set of distinct features that identify it
– A name that is unambiguous to all other objects
– A description that contains the CSCI type, a project identifier, and change and/or version
information
– List of resources needed by the object
– The object realization (i.e., the document, the file, the model, etc.)
Change Control Task
• Change control is a procedural activity that ensures quality and consistency as changes are made
to a configuration object
• A change request is submitted to a configuration control authority, which is usually a change
control board (CCB)
– The request is evaluated for technical merit, potential side effects, overall impact on other
configuration objects and system functions, and projected cost in terms of money, time,
and resources
• An engineering change order (ECO) is issued for each approved change request
– Describes the change to be made, the constraints to follow, and the criteria for review and
audit
• The baselined CSCI is obtained from the SCM repository
– Access control governs which software engineers have the authority to access and modify
a particular configuration object
– Synchronization control helps to ensure that parallel changes performed by two different
people don't overwrite one another
Configuration Audit Task
• A configuration audit ensures that
– SCM procedures for noting the change, recording it, and reporting it have been followed
– All related CSCIs have been properly updated
– The correct CSCIs (by version) have been incorporated into a specific build
– All documentation is up-to-date and consistent with the version that has been built
Introduction
Uses of Measurement
• Can be applied to the software process with the intent of improving it on a continuous basis
• Can be used throughout a software project to assist in estimation, quality control, productivity
assessment, and project control
• Can be used to help assess the quality of software work products and to assist in tactical decision
making as a project proceeds
Reasons to Measure
• To characterize in order to
– Gain an understanding of processes, products, resources, and environments
– Establish baselines for comparisons with future assessments
• To evaluate in order to
– Determine status with respect to plans
• To predict in order to
– Gain understanding of relationships among processes and products
– Build models of these relationships
• To improve in order to
– Identify roadblocks, root causes, inefficiencies, and other opportunities for improving
product quality and process performance
Advantages of code inspection
● Improves overall product quality.
● Discovers bugs/defects in the software code.
● Highlights opportunities for process improvement.
● Finds and removes defects efficiently and quickly.
● Helps the team learn from previous defects.
3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
○ Meaningful and understandable variables help anyone understand the reason for using
them.
○ Local variables should be named in camel case starting with a lowercase letter
(e.g. localData), whereas global variable names should start with a capital letter (e.g.
GlobalData). Constant names should be formed using capital letters only (e.g.
CONSDATA).
○ It is better to avoid the use of digits in variable names.
○ Function names should be written in camel case, starting with a lowercase letter.
○ The name of a function must clearly and briefly describe its purpose.
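As a small illustration of the naming conventions above, the following Java fragment (illustrative only; the identifiers are made up) uses a camel-case local variable and function name, a global (class-level) variable starting with a capital letter, and a constant in capital letters only:

public class NamingExample {
    // Global variable: starts with a capital letter
    static int GlobalCounter = 0;

    // Constant: capital letters only
    static final int MAXRETRIES = 3;

    // Function name in camel case, starting with a lowercase letter,
    // describing clearly and briefly why the function exists
    static double computeInvoiceTotal(double unitPrice, int quantity) {
        // Local variable in camel case, starting with a lowercase letter
        double invoiceTotal = unitPrice * quantity;
        return invoiceTotal;
    }
}

(Some language communities, including Java's, use slightly different conventions for constants and class-level fields; the fragment simply follows the rules stated in these notes.)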
4. Indentation:
Proper indentation is very important to increase the readability of the code. To make the code
readable, programmers should use white space properly. Some of the spacing conventions are
given below:
○ There must be a space after giving a comma between two function arguments.
○ Each nested block should be indented appropriately and spaced.
○ Proper indentation should be present at the beginning and the end of each block in the
program.
○ All braces should start from a new line, and the code following the end of braces also
starts from a new line.
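The following short Java fragment (illustrative only) applies the spacing conventions above: a space after the comma between arguments, each nested block indented one level deeper, and braces starting on a new line with the code after a closing brace also starting on a new line:

public class IndentationExample
{
    static int findMax(int first, int second)
    {
        // nested block indented relative to the method body
        int max = first;
        if (second > first)
        {
            max = second;
        }
        return max;
    }
}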
● Coding guidelines help detect errors in the early phases, which reduces the extra cost incurred
by the software project.
● If coding guidelines are followed properly, the code becomes more readable and
understandable, which reduces its complexity.
● They reduce the hidden cost of developing the software.
Incremental code development is a software development approach that emphasizes building and
improving software systems gradually over time through iterative cycles of planning, development,
testing, and deployment. This methodology stands in contrast to traditional "big bang" development
approaches, where entire systems are developed and deployed at once.
1. Requirement analysis: In the first phase of the incremental model, product analysis experts
identify the requirements, and the requirement analysis team works to understand the system's functional
requirements. This phase plays a crucial role in developing software under the incremental model.
2. Design & Development: In this phase of the incremental model of the SDLC, the design of the system
functionality and the development method are completed. Whenever the software adds new
functionality, the incremental model revisits the design and development phase.
3. Testing: In the incremental model, the testing phase checks the performance of each existing function
and additional functionality. In the testing phase, various methods are used to test the behaviour of each
task.
4. Implementation: The implementation phase covers the final coding of what was designed in the
design and development phase, verified by the functionality checks of the testing phase. After each
completion of this phase, the working product is enhanced and upgraded until it becomes the final
system product.
i) Iterative Approach
The development process is broken down into smaller iterations or increments.
v) Risk Management
Here, the risks are mitigated by addressing high-priority and high-risk features early in the development
process.
Benefits of Incremental Development
a) Faster Time to Market - Delivering usable functionality in smaller increments allows for
quicker deployment and feedback gathering.
b) Adaptability - Flexibility to accommodate changing requirements and priorities throughout the
development process.
c) Reduced Risk - Early detection and mitigation of defects and issues through continuous testing
and validation.
d) Improved Stakeholder Satisfaction - Regular delivery of functional increments fosters
stakeholder engagement and satisfaction.
e) Enhanced Quality - Incremental development encourages continuous improvement and
refinement of code and design.
Key Practices
User Stories or Features: Break down requirements into manageable user stories or features that can be
implemented incrementally.
Iterations or Sprints: Organize development into time-boxed iterations or sprints, typically ranging
from one to four weeks.
Continuous Integration and Deployment: Automate the process of integrating and deploying code
changes frequently to ensure stability and reliability.
Feedback Loops: Establish mechanisms for gathering feedback from users and stakeholders at each
increment to inform subsequent iterations.
Incremental Testing: Conduct testing activities continuously throughout the development process to
identify and address defects early.
Challenges of Incremental Development
a) Scope Creep - Difficulty in managing evolving requirements and scope changes over multiple
iterations.
b) Integration Complexity - Ensuring seamless integration of new increments with existing codebase
and dependencies.
c) Dependency Management - Coordinating dependencies between different increments and teams
is challenging.
d) Technical Debt - Risk of accumulating technical debt if proper refactoring and maintenance
practices are not followed.
e) Resource Allocation - Balancing resources and priorities across multiple increments and projects
is usually difficult.
Best Practices for Incremental Development
a) Prioritize Features - Focus on implementing high-priority and high-value features early in
development.
b) Modular Design: Design software systems with modularity in mind to facilitate incremental
development and maintainability.
c) Automated Testing: Invest in automated testing frameworks to ensure the quality and stability of
incremental releases.
d) Continuous Integration/Deployment: Implement CI/CD pipelines to automate code changes'
integration, testing, and deployment.
e) Collaboration and Communication: Foster collaboration and communication among team
members and stakeholders to ensure alignment and transparency.
Supporting Tools
Version Control Systems: Facilitate collaboration and manage code changes across iterations; they
include tools such as Git.
Issue Tracking Systems: Track and prioritize user stories, tasks, and defects across iterations.
Continuous Integration/Deployment Tools: Automate the build, test, and deployment processes.
Collaboration Platforms: Facilitate communication and collaboration among team members.
Continuous Improvement
Retrospectives: Conduct regular retrospectives at the end of each iteration to reflect on what went well,
what didn't, and areas for improvement.
Feedback Analysis: Analyze user and stakeholder feedback to identify enhancements and refinements
opportunities.
Refactoring and Technical Debt Management: Allocate time for refactoring and addressing technical
debt to maintain code quality and scalability.
Knowledge Sharing: Encourage knowledge sharing and learning within the team to continuously
improve development practices and skills.
In conclusion, Incremental code development offers a pragmatic and flexible approach to software
development, allowing teams to deliver value incrementally while managing risks and uncertainties
effectively. By embracing the principles, best practices, and tools associated with incremental
development, organizations can adapt to changing requirements, deliver high-quality software, and
maintain a competitive edge in today's dynamic market landscape.
1. Version Control System (VCS):
a. Use a version control system such as Git to track changes to your codebase.
b. Create branches for new features or bug fixes to isolate changes and prevent interference
with the main codebase.
c. Regularly commit changes with descriptive commit messages to maintain a clear history
of modifications.
2. Code Reviews:
a. Implement code review processes where team members review each other's code before
merging it into the main branch.
b. Conduct thorough reviews to ensure code quality, adhere to coding standards, and
identify potential issues.
3. Automated Testing:
a. Develop and maintain a comprehensive suite of automated tests, including unit,
integration, and end-to-end tests.
b. Run automated tests regularly, especially before merging code changes, to catch bugs and
regressions early.
4. Continuous Integration/Continuous Deployment (CI/CD):
a. Set up CI/CD pipelines to automate build, test, and deployment processes.
b. Use tools like Jenkins, GitLab CI/CD, or GitHub Actions to streamline development
workflows and ensure consistent code delivery.
5. Refactoring and Code Cleanup:
a. Regularly refactor code to improve its structure, readability, and maintainability.
b. Remove obsolete code, fix code smells, and apply best practices to keep the codebase
clean and efficient.
6. Documentation:
a. Maintain comprehensive documentation for your codebase, including API
documentation, architecture diagrams, and coding guidelines.
b. Document code changes, dependencies, and configuration settings to facilitate
understanding and collaboration.
7. Versioning and Release Management:
a. Follow semantic versioning principles to assign meaningful version numbers to releases
based on the significance of changes (e.g., major, minor, patch — for instance, moving
from 2.4.1 to 2.5.0 for a backward-compatible feature, or to 3.0.0 for a breaking change).
b. Plan and coordinate releases to ensure smooth deployment and minimal disruption for
users.
8. Monitoring and Feedback:
a. Monitor application performance, error logs, and user feedback to identify areas for
improvement and prioritize future development efforts.
b. Use metrics and analytics to assess the impact of code changes and make data-driven
decisions.
Unit Testing
• Unit testing is a software testing method where individual units or components of a software application
are tested in isolation to ensure they function correctly.
• A unit is typically the smallest testable part of an application, such as a function, method, or class.
What are unit testing best practices?
• Use a Unit Test Framework: Employ automated testing frameworks like Jest to streamline the unit
testing process and ensure project consistency.
• Assert Once: Each unit test should have only one true or false outcome. Make sure that there is only one
assert statement within your test.
• Implement Unit Testing from the Start: Make unit testing a standard practice from the beginning of your
projects. Even if time constraints initially lead to skipping unit tests, establishing this practice early on
makes it easier to follow consistently in future projects.
• Automate Unit Testing: Integrate unit testing into your development workflow by automating tests to run
before pushing changes or deploying updates. This ensures thorough testing throughout the development
lifecycle.
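The note above names Jest (a JavaScript framework); as a language-neutral illustration, here is a minimal JUnit 5-style unit test in Java that tests one small unit in isolation and follows the "assert once" practice. The Calculator class is a hypothetical unit under test:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical unit under test
class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorTest {
    @Test
    void addReturnsSumOfTwoNumbers() {
        Calculator calculator = new Calculator();
        // A single assertion per test, as recommended above
        assertEquals(5, calculator.add(2, 3));
    }
}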
• Security testing checks the software against known vulnerabilities and threats. This includes analysis of
the threat surface, including third-party entry points to the software.
Coding metrics
Coding metrics are quantitative measures that aim to assess the quality, complexity, performance, and
other attributes of software code. These metrics provide insights that can help developers and teams to
improve code quality, maintainability, and efficiency. Several key coding metrics commonly used
include:
1). Lines of Code (LOC): Measures a software program's total number of lines. While easy to calculate, it
doesn’t always correlate well with code complexity or quality.
2). Cyclomatic Complexity: Measures the complexity of a program by calculating the number of linearly
independent paths through a program's source code. It helps identify overly complex methods that may
need simplification or refactoring (a short illustration follows this list).
3). Halstead Complexity Measures: These involve several metrics (like Halstead Length, Volume,
Difficulty, and Effort) calculated based on the number of operators and operands in the code. They aim to
measure the potential difficulty in understanding and maintaining the code.
4). Code Churn: Measures the amount of code changes over time, indicating the stability and maturity of
the codebase. Frequent changes can suggest instability or continuous improvement.
5) Technical Debt: Technical Debt is not a direct metric but an important concept, indicating the cost of
rework caused by choosing an easy (quick and dirty) solution now instead of using a better approach that
would take longer.
6). Test Coverage: Measures the percentage of code executed by automated tests, indicating the extent to
which the codebase is tested. High test coverage can suggest a lower likelihood of undetected bugs.
7). Maintainability Index: A composite measure that combines lines of code, cyclomatic complexity, and
Halstead volume to assess how easy it is to maintain the code. Higher scores indicate easier maintenance.
8). Dependency Measures: Assess the degree of interdependence between modules or components. High
dependency can make the code more complex and harder to maintain.
9). Code Duplication: Measures the amount of code duplicated across the codebase. Reducing duplication
can improve maintainability and reduce the likelihood of bugs.
10). Function Points: A measure of the functionality provided by the software, independent of the
language used to implement it. It’s useful for comparing productivity and efficiency across different
projects or languages.
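To make the cyclomatic complexity metric from item 2 concrete, consider this small, purely illustrative Java method. It contains three decision points (two if statements and one loop condition), so its cyclomatic complexity is 3 + 1 = 4, meaning at least four test cases are needed to cover all linearly independent paths:

public class OrderMetricsExample {
    // Illustrative only: cyclomatic complexity = 3 decision points + 1 = 4
    static String classifyOrder(double amount, boolean isMember, int items) {
        String label = "standard";
        if (amount > 1000) {                 // decision point 1
            label = "large";
        }
        if (isMember) {                      // decision point 2
            label = label + "-member";
        }
        for (int i = 0; i < items; i++) {    // decision point 3 (loop condition)
            label = label + "*";
        }
        return label;
    }
}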
By tracking these and other relevant metrics, development teams can gain valuable insights into their
codebase, enabling them to make informed decisions about improvements, optimizations, and refactorings
necessary to ensure the delivery of high-quality software.
Testing Concepts:
1. Test Coverage: This metric measures the extent to which the source code of a program has been
tested. It helps in identifying areas of the code that have not been exercised during testing.
2. Defect Density: Defect density is a metric that indicates the number of defects identified in a
specific component or software system. It is calculated by dividing the number of defects by the size of the
component.
3. Regression Testing: Regression testing ensures that new code changes do not adversely affect
existing functionality. It involves re-running tests to detect any unexpected side effects.
Testing Metrics:
1. Defect Density: As mentioned earlier, defect density is a key metric that helps in measuring the
quality of the software by identifying the number of defects per unit size of the software.
2. Test Case Effectiveness: This metric evaluates the efficiency of test cases in detecting defects. It
measures the percentage of defects found by a test case out of the total defects present.
3. Test Execution Time: Test execution time measures how long it takes to execute a set of test cases.
It is an important metric for assessing the efficiency of the testing process.
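As a small made-up illustration of these metrics: if a 20 KLOC component contains 50 reported defects, its defect density is 50 / 20 = 2.5 defects per KLOC; and if a particular test case detects 8 of the 40 defects known to be present, its test case effectiveness is 8 / 40 = 20%.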
Testing is a crucial part of the software development process that helps ensure the quality and reliability of
the final product.
Some key testing concepts include:
1. Test Case: A set of conditions or variables under which a tester will determine whether a system under
test satisfies requirements or works correctly.
2. Test Plan: A document describing the scope, approach, resources, and schedule of intended testing
activities.
3. Test Strategy: An outline that describes the testing approach to achieve testing objectives.
4. Types of Testing: Different types of testing such as unit testing, integration testing, system testing,
acceptance testing, etc., each serving a specific purpose in the testing process.
5. Bug: Any variance between actual and expected results.
6. Regression Testing: Testing existing software applications to make sure that a change or addition hasn't
broken any existing functionality.
Metrics are used to measure various aspects of the testing process and provide insights into the quality of
the software being tested. Some common testing metrics include:
1. Defect Density: The number of defects identified in a component or system divided by the size of the
component or system.
2. Test Coverage: The extent to which testing covers all specified requirements.
3. Defect Removal Efficiency (DRE): The percentage of defects removed by a phase of development
relative to the total defects discovered.
4. Test Execution Productivity: The number of test cases executed per unit time.
6. Test Efficiency: The percentage of test cases executed successfully without any defect.
Types of Testing
1. Unit Testing: In unit testing, individual units or components of the software are tested in isolation.
It involves testing small pieces of code to ensure they work correctly. Unit tests are typically automated
and are run frequently during the development process.
2. Integration Testing: Integration testing focuses on testing how different components/modules
work together when integrated. It helps identify issues related to the interaction between modules, such as
data flow, communication, and interfaces.
3. System Testing: System testing is conducted on a complete, integrated system to evaluate its
compliance with specified requirements. It verifies that the system meets functional and non-functional
requirements and is ready for deployment.
4. Acceptance Testing: Acceptance testing, or User Acceptance Testing (UAT), is performed by end-
users to validate whether the system meets their requirements and is ready for production use. It ensures
that the software meets business needs and functions as expected.
5. Regression Testing: Regression testing is carried out to ensure that new code changes do not
introduce defects or negatively impact existing functionality. It involves retesting previously working
features to ensure they still work as intended after modifications.
6. Performance Testing: Performance testing assesses how a system performs under various
conditions, such as load, stress, and scalability. It helps identify performance bottlenecks, response times,
and resource utilization to ensure the system meets performance requirements.
7. Security Testing: Security testing is performed to identify vulnerabilities in the software that could
be exploited by attackers. It includes testing for authentication, authorization, data protection, and other
security features to ensure the software is secure and data is protected.
8. Usability Testing: Usability testing evaluates how user-friendly and intuitive the software is for
end users. It involves observing users interacting with the system to identify usability issues, such as
navigation difficulties, confusing interfaces, and accessibility barriers.
Black box and white box testing are two common software testing techniques used to identify defects in software applications.
Black box testing is a software testing method where the internal structure, code, and logic of the
application are not known to the tester. Testers focus on the functionality of the software without
considering its internal workings. Test cases are designed based on requirements and specifications, and
the tester evaluates the output against the expected results. The goal of black box testing is to ensure that
the software behaves as expected from the end user's perspective.
Types of Black Box Testing:
• Functional Testing: Focuses on testing the functionality of the software without knowing
its internal code structure.
• Non-Functional Testing: Tests aspects like performance, usability, reliability, etc., without
delving into the internal code.
Techniques used in Black Box Testing:
• Equivalence Partitioning: Divides input data into partitions of equivalent data from which
test cases can be derived (see the short example after this list).
• Boundary Value Analysis: Tests boundaries of equivalence partitions, ensuring that inputs
at boundaries are handled correctly.
• Decision Table Testing: Tests combinations of different inputs to determine outcomes
based on decision rules.
• State Transition Testing: Tests the behavior of the system when it changes from one state
to another.
• Use Case Testing: Focuses on testing scenarios that represent typical user interactions with
the software.
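As a short example of the first two techniques (using an assumed requirement, not one from these notes): suppose an input field accepts ages from 18 to 60. Equivalence partitioning gives three partitions — below 18 (invalid), 18 to 60 (valid), and above 60 (invalid) — so one representative value from each (say 10, 35, and 70) is tested. Boundary value analysis then adds tests at and just beyond the edges of the valid partition: 17, 18, 60, and 61.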
Advantages of Black Box Testing:
• Testers do not need knowledge of the internal code or programming expertise.
• Tests are designed from the user's perspective, based on requirements and specifications.
• The tester is independent of the developer, which reduces bias toward the implementation.
White box testing, also known as clear box testing, glass box testing, or structural testing, is a
software testing method where the internal structure, code, and logic of the application are known
to the tester. Testers design test cases based on the internal workings of the software, such as code
paths, branches, and conditions. The goal of white box testing is to ensure that all code paths are
tested and that the software functions correctly according to its design and implementation.
Types of White Box Testing:
• Statement Coverage: Ensures each statement in the code is executed at least once during
testing.
• Branch Coverage: Ensures that every branch of the code is executed at least once during
testing.
• Path Coverage: Tests every possible path from start to end within the code (a short illustration follows this list).
• Code Walkthroughs and Inspections: Involves peer reviews of the code to identify
potential issues and defects.
• Code Reviews: Formal evaluation of the code by team members to ensure adherence to
coding standards and identify potential defects.
• Static Analysis: Automated analysis of the code without executing it to identify issues
such as syntax errors, security vulnerabilities, etc.
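To illustrate the difference between the coverage criteria above, consider this small, purely illustrative Java method:

public class CoverageExample {
    static int absoluteValue(int x) {
        int result = x;
        if (x < 0) {
            result = -x;
        }
        return result;
    }
}

A single test with x = -5 executes every statement (full statement coverage) but exercises only the "true" branch of the if; adding a test with x = 5 covers the "false" branch as well (full branch coverage). Because this method has only two paths, the same two tests also give full path coverage; in methods with several decisions, the number of paths grows much faster than the number of branches, which is why path coverage is the most demanding criterion.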
Advantages of White Box Testing:
• Provides thorough coverage of code paths, ensuring that all lines of code are tested.
• Helps in identifying and fixing issues related to code structure, logic errors, and
performance bottlenecks.
• Facilitates early detection of defects, reducing the cost of fixing errors in later stages of
development.
Challenges of White Box Testing:
• Requires in-depth knowledge of the code, making it difficult for testers without
programming expertise.
• Testing every possible code path may be time-consuming and resource-intensive.
• Risk of bias as testers may unintentionally overlook certain code paths or conditions.
TOOLS IN SOFTWARE ENGINEERING
There are a number of CASE tools available to simplify various stages of the Software Development
Life Cycle, such as Analysis tools, Design tools, Project Management tools, Database
Management tools, and Documentation tools, to name a few.
Use of CASE tools accelerates project development, helps produce the desired result, and helps
uncover flaws before moving ahead to the next stage of software development.
1. Upper Case Tools – Upper CASE tools are used in planning, analysis and design stages of
SDLC.
2. Lower Case Tools – Lower CASE tools are used in implementation, testing and maintenance.
3. Integrated Case Tools – Integrated CASE tools are helpful in all the stages of SDLC, from
Requirement gathering to Testing and documentation.
CASE tools can be grouped together if they have similar functionality, support similar process
activities, and are capable of integration with other tools.
Diagram tools
These tools are used to represent system components, data and control flow among various
software components and system structure in a graphical form. For example, Flow Chart Maker
tool for creating state-of-the-art flowcharts.
Project Management Tools
These tools are used for project planning, cost and effort estimation, project scheduling and
resource planning. Managers have to ensure that project execution strictly complies with every step
of the software project management plan. Project management tools help in storing and sharing project
information in real-time throughout the organization. For example, Creative Pro Office.
Documentation Tools
Documentation tools generate documents for technical users and end users. Technical users are
mostly in-house professionals of the development team who refer to system manual, reference
manual, training manual, installation manuals, etc. The end-user documents describe the
functioning and how-to of the system, such as the user manual.
Analysis Tools
These tools help to gather requirements and automatically check for any inconsistencies or
inaccuracies in the diagrams, data redundancies, or erroneous omissions.
Design Tools
These tools help software designers to design the block structure of the software, which may
further be broken down into smaller modules using refinement techniques. These tools provide
details of each module and the interconnections among modules. For example, Animated Software
Design.
Programming Tools
These tools consist of programming environments like IDEs (Integrated Development
Environments), built-in module libraries, and simulation tools. These tools provide comprehensive
aid in building the software product and include features for simulation and testing. For example,
Cscope to search code in C, and Eclipse.
Prototyping Tools
A software prototype is a simulated version of the intended software product. A prototype provides
the initial look and feel of the product and simulates a few aspects of the actual product.
Prototyping CASE tools essentially come with graphical libraries. They can create hardware
independent user interfaces and design. These tools help us to build rapid prototypes based on
existing information. In addition, they provide simulation of software prototype. For example,
Serena prototype composer, Mockup Builder.
Web Development Tools
These tools assist in designing web pages with all allied elements like forms, text, scripts, graphics,
and so on. Web tools also provide a live preview of what is being developed and how it will look
after completion. For example, Fontello.
Maintenance Tools
Software maintenance includes modifications to the software product after it is delivered.
Automatic logging and error reporting techniques, automatic error ticket generation, and root
cause analysis are a few CASE tool capabilities that help software organizations in the maintenance
phase of the SDLC. For example, Bugzilla for defect tracking.
1. StarUML
Features:
⚫ It lets you create Object, Use case, Deployment, Sequence, Collaboration, Activity, and
Profile diagrams.
⚫ It is UML 2.x standard compliant.
⚫ It offers multiplatform support (MacOS, Windows, and Linux).
2. Umbrello
Umbrello is a Unified Modeling Language (UML) tool based on KDE technology. It supports
both reverse engineering and code generation for C++ and Java.
Features:
⚫ It implements both structural and behavioral diagrams.
⚫ It can import C++ code and export to a wider range of languages.
3. UML Designer
The UML Designer tool helps in modifying and visualizing UML 2.5 models. It allows you to
create all of the UML diagrams.
Features:
⚫ It provides transparency to work on DSL as well as UML models.
⚫ With the UML designer tool, the user can reuse the provided presentations.
⚫ It implements Component, Class, and Composite structure diagrams.
4. Altova
Features:
⚫ It provides a dedicated toolbar for an individual diagram.
⚫ It offers unlimited undo/redo, which encourages experimenting with new ideas.
⚫ In UML diagrams, you can easily add a hyperlink to any element.
⚫ It also provides intuitive color-coding, icons, a customizable alignment grid, and cascading
styles for colors, fonts, and line sizes.
5. Umple
Umple is an object-oriented programming and modeling language that textually supports state diagrams and
class diagrams. It works with Java, C++, and PHP, resulting in shorter, more readable
code.
Features:
⚫ It includes Singleton pattern, keys, immutability, mixins, and aspect-oriented code injection,
which makes UML more understandable to the users.
⚫ It enforces referential integrity by supporting UML multiplicity.
OCL (Object Constraint Language) statements have four parts:
1. A context that defines the limited situation in which the statement is valid
2. A property that represents some characteristics of the context (e.g., if the context is a class, a
property might be an attribute)
3. An operation (e.g., arithmetic, set-oriented) that manipulates or qualifies a property, and
4. Keywords (e.g., if, then, else, and, or, not, implies) that are used to specify conditional
expressions.
OCL allows developers to define rules and constraints that objects must follow, enhancing the
precision and correctness of a software model.
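For instance (a standard textbook-style example, not taken from these notes), the OCL invariant

context Account inv: self.balance >= 0

uses Account as the context, balance as a property, >= as an operation, and inv as a keyword, and states that every Account object must always have a non-negative balance.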
TLA+(Temporal Logic of Actions)
TLA+ is a formal specification language used for designing, modeling, documenting, and
verifying programs, especially concurrent and distributed systems. TLA+ has been
described as exhaustively-testable pseudocode, and its use is likened to drawing blueprints for
software systems.
For design and documentation, TLA+ fulfills the same purpose as informal technical
specifications. However, TLA+ specifications are written in a formal language of logic and
mathematics, and the precision of specifications written in this language is intended to uncover
design flaws before system implementation is underway.
Since TLA+ specifications are written in a formal language, they are amenable to finite model
checking. The model checker finds all possible system behaviours up to some number of
execution steps, and examines them for violations of desired invariance properties such as safety
and liveness. TLA+ specifications use basic set theory to define safety (bad things won’t happen)
and temporal logic to define liveness (good things eventually happen).
TLA+ is also used to write machine-checked proofs of correctness both for algorithms and
mathematical theorems. The proofs are written in a declarative, hierarchical style independent of
any single theorem prover backend. Both formal and informal structured mathematical proofs can
be written in TLA+.
Integrated Development Environment (IDE)
Developers use numerous tools throughout software code creation, building and testing.
Development tools often include text editors, code libraries, compilers and test platforms. Without
an IDE, a developer must select, deploy, integrate and manage all of these tools separately. An
IDE brings many of those development-related tools together as a single framework, application
or service. The integrated toolset is designed to simplify software development and can identify
and minimize coding mistakes and typos.
⚫ An IDE can also contain features such as programmable editors, object and data modeling,
unit testing, a source code library and build automation tools.
⚫ An IDE’s toolbar looks much like a word processor’s toolbar. The toolbar facilitates color-
based organization, source-code formatting, error diagnostics and reporting, and intelligent
code completion.
An IDE can support model-driven development (MDD). A developer working with an IDE starts
with a model, which the IDE translates into suitable code.
⚫ Saves time when deciding what tools to use for various tasks, configuring the tools and
learning how to use them.
⚫ IDEs are also designed with all their tools under one user interface. An IDE can standardize
the development process by organizing the necessary features for software development in
the UI.
1. General-Purpose IDEs:
These IDEs support multiple programming languages and offer a wide range of features such as
code editing, debugging, version control integration, and project management.
Examples: Eclipse, IntelliJ IDEA, NetBeans, Visual Studio.
2. Language-Specific IDEs:
These IDEs are tailored for specific programming languages or frameworks, providing
specialized tools and features optimized for development in that language.
Examples: PyCharm (Python), Android Studio (Android development),
These IDEs are specialized for creating games and interactive multimedia applications, offering
tools for graphics, physics, audio, and game logic development.
Examples: Unity (3D game development), Unreal Engine (3D game development),
7. Cloud-Based IDEs:
These IDEs run entirely in the cloud, allowing developers to access and work on projects from
any device with an internet connection.
Examples: AWS Cloud9, Google Cloud Shell, Eclipse Che.
Each type of IDE offers a unique set of features and integrations tailored to the specific needs of
developers working in different domains and technologies.
THE END