Software Engineering
CT 52
SECTION 5
CERTIFIED
INFORMATION COMMUNICATION
TECHNOLOGISTS
(CICT)
SOFTWARE ENGINEERING
STUDY TEXT
LEARNING OUTCOMES
CONTENT
5. Software quality
- Quality control and assurance
- Software quality factors and metrics
- Formal technical reviews
- Verification and validation
- Cost of quality
6. Software coding
- Coding styles and characteristics
- Coding in high-level languages
- Coding standards
- User interface
9. Conversion strategies
- Conversion planning
- Parallel running
- Direct cut over
- Pilot study
- Phased approach
Software Evolution
The process of developing a software product using software engineering principles and
methods is referred to as software evolution. This includes the initial development of the
software and its maintenance and updates, till the desired software product is developed,
which satisfies the expected requirements.
Evolution starts from the requirement gathering process, after which developers create a
prototype of the intended software and show it to the users to get their feedback at the
early stage of software product development. The users suggest changes, on the basis of
which several consecutive updates and maintenance releases follow. This process keeps
changing the original software, till the desired software is accomplished.
Even after the user has the desired software in hand, advancing technology and changing
requirements force the software product to change accordingly. Re-creating the
software from scratch and going one-on-one with the requirements is not feasible. The only
feasible and economical solution is to update the existing software so that it matches the
latest requirements.
Software Evolution Laws
Lehman has given laws for software evolution. He divided the software into three
different categories:
S-type (static-type) - This is software which works strictly according to
defined specifications and solutions. The solution and the method to achieve it
are both understood before coding begins. S-type software is least
subject to change and hence is the simplest of all. For example, a calculator
program for mathematical computation.
P-type (practical-type) - This is software with a collection of procedures,
defined by exactly what the procedures can do. In this software, the specifications
can be described but the solution is not immediately obvious. For example, gaming
software.
E-type (embedded-type) - This software works closely with the requirements of the
real-world environment. This software has a high degree of evolution, as there are
various changes in laws, taxes etc. in real-world situations. For example,
online trading software.
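As a rough illustration of an S-type program (a sketch of the calculator example above, not part of the study text), the specification and its solution are both fully understood before coding, so the program rarely needs to evolve:

def calculate(a: float, b: float, op: str) -> float:
    # Evaluate a fully specified arithmetic operation on two numbers.
    operations = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,   # caller must ensure b is not zero
    }
    return operations[op](a, b)

print(calculate(6, 7, "*"))   # 42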
Software Paradigms
Software paradigms refer to the methods and steps which are taken while designing the
software. Many methods have been proposed and are in use today, but we need to see
where these paradigms stand within software engineering. They can be combined into
various categories, though each of them is contained within another:
Programming Paradigm
This paradigm is closely related to the programming aspect of software development. It
includes:
Coding
Testing
Integration
Operational
This tells us how well software works in operations. It can be measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety
Transitional
This aspect is important when the software is moved from one platform to another:
Portability
Interoperability
Reusability
Adaptability
Maintenance
This aspect describes how well the software is able to maintain itself in the
ever-changing environment:
Modularity
Maintainability
1) Project identification
2) Feasibility study
3) System analysis
4) System design
5) System development
6) System testing
7) System implementation
8) System maintenance
9) System documentation
1) Project identification: - in this phase the analyst focuses on the basic objectives and
identifies the need for the corresponding software. In this phase the analyst sets up
meetings with the client to discuss the desired software.
2) Feasibility study: - the feasibility of making the particular software for the client is
assessed from three views:
Technical feasibility
Financial feasibility
Social feasibility
3) System analysis: - analysis defines how and what type of software is to be made for
the client. It is a pen-and-paper based exercise through which the analyst focuses on the
desired goals.
4) System design: - in this phase the analyst draws the corresponding diagrams related
to the particular software. The design takes the form of flow charts,
data flow diagrams and entity relationship diagrams (ERD).
5) System development: - development takes the form of coding, error checking
and debugging of the particular software. This phase deals with the developer's
activities in making successful software.
6) System testing: - testing checks whether what the analyst and developer have done is
correct and error free with respect to the desired software. In software engineering there
are testing techniques with which we can check whether the project is error free.
The generic activities common to all software processes are:
• Specification
• Design
• Validation
• Evolution
Generic software process models include:
a) The waterfall model - Separate and distinct phases of specification and development.
b) Evolutionary development - Specification and development are interleaved.
c) Formal systems development - A mathematical system model is formally
transformed to an implementation.
d) Reuse-based development - The system is assembled from existing components
Linear/waterfall model
This model assumes that everything is carried out and takes place perfectly as planned in
the previous stage, and that there is no need to think about past issues that may arise in the
next phase. This model does not work smoothly if there are issues left over from the
previous phase.
This model is best suited when developers have already designed and developed similar
software in the past and are aware of all its domains. The drawbacks of the waterfall
model are that it cannot easily accommodate changing requirements, and that working
software is not available until late in the life cycle.
Rapid prototyping
Rapid Prototyping (RP) can be defined as a group of techniques used to quickly fabricate
a scale model of a part or assembly using three-dimensional computer aided design
(CAD) data. What is commonly considered to be the first RP technique,
Stereolithography, was developed by 3D Systems of Valencia, CA, USA. The company
was founded in 1986, and since then, a number of different RP techniques have become
available.
Rapid Prototyping has also been referred to as solid free-form manufacturing, computer
automated manufacturing, and layered manufacturing. RP has obvious use as a vehicle
for visualization. In addition, RP models can be used for testing, such as when an airfoil
shape is put into a wind tunnel. RP models can be used to create male models for tooling,
such as silicone rubber molds and investment casts. In some cases, the RP part can be the
final part, but typically the RP material is not strong or accurate enough. When the RP
material is suitable, highly convoluted shapes (including parts nested within parts) can be
produced because of the nature of RP.
The basic methodology for all current rapid prototyping techniques can be summarized
as follows:
1. A CAD model is constructed, and then converted to STL format. The resolution
can be set to minimize stair stepping.
2. The RP machine processes the .STL file by creating sliced layers of the model.
3. The first layer of the physical model is created. The model is then lowered by the
thickness of the next layer, and the process is repeated until completion of the
model.
4. The model and any supports are removed. The surface of the model is then
finished and cleaned.
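As a small illustrative sketch of the slicing idea in step 2 (not from the study text; the model height and layer thickness below are assumed values), the RP machine effectively computes a z-height for every layer it will build:

def layer_heights(model_height_mm: float, layer_thickness_mm: float) -> list:
    # Return the z-coordinate of each sliced layer, from the build plate upwards.
    heights = []
    z = layer_thickness_mm
    while z <= model_height_mm + 1e-9:      # small tolerance for floating-point drift
        heights.append(round(z, 4))
        z += layer_thickness_mm
    return heights

print(layer_heights(1.0, 0.25))   # [0.25, 0.5, 0.75, 1.0]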
Evolutionary models
1. Exploratory programming
here, the objective of the process is to work with the customer to explore their
requirements and deliver a final system. The development starts with the better
understood components of the system. The software evolves by adding new
features as they are proposed.
2. Throwaway prototyping
here, the purpose of the evolutionary development process is to understand the customer's
requirements and thus develop a better requirements definition for the system.
The prototype concentrates on experimenting with those components of the
requirements which are poorly understood.
Advantages
This is the only method appropriate for situations where a detailed system specification
is unavailable. Effective in rapidly producing small systems, software with short life
spans, and developing sub-components of larger systems.
Disadvantages
It is difficult to measure progress and produce documentation reflecting every version of
the system as it evolves. This paradigm usually results in badly structured programs due
to continual code modification. Production of good quality software using this method
requires highly skilled and motivated programmers.
Reuse-based development involves the following stages:
• Component analysis
• Requirements modification
• System design with reuse
• Development and integration
This approach is becoming more important, but there is still limited experience with it.
Other models
Process Iteration
• Spiral development
Incremental Development
Rather than deliver the system as a single delivery, the development and delivery is
broken down into increments with each increment delivering part of the required
functionality. User requirements are prioritized and the highest priority requirements are
included in early increments. Once the development of an increment is started, the
requirements are frozen though requirements for later increments can continue to evolve.
Advantages
Advantages of the incremental development include:
The highest priority system services tend to receive the most testing.
Spiral Development
CASE
CASE Classification
Classification helps us understand the different types of CASE tools and their support for
process activities.
Software Requirement
1. A condition or capability needed by a user to solve a problem or achieve an objective.
2. A condition or a capability that must be met or possessed by a system to satisfy a contract,
standard, specification, or other formally imposed document.
Requirements engineering processes: -The processes used for RE vary widely depending on
the application domain, the people involved and the organisation developing the requirements.
However, there are a number of generic activities common to all processes.
Requirements elicitation
Requirements analysis
Requirements documentation
Requirements review
Elicitation: - It is also called requirement discovery. Requirements are identified with the help
of the customer and existing system processes, if they are available.
Requirement elicitation is the most difficult, perhaps the most critical, most error-prone and most
communication-intensive aspect of software development.
1. Interviews
• After receiving the problem statement from the customer, the first step is to arrange a meeting
with the customer.
• During the meeting or interview, both the parties would like to understand each other.
• The objective of conducting an interview is to understand the customer’s expectation from the
software
Selection of stakeholders
1. Entry level personnel
2. Middle level stakeholder
3. Managers
4. Users of the software (Most important)
2. Brainstorming Sessions
• Brainstorming is a group technique that may be used during requirement elicitation to
understand the requirement
• Every idea will be documented in a way that everyone can see it
Requirements specification
Requirements specification is a complete description of the behavior of a system to be
developed. It includes a set of use cases that describe all the interactions the users will have with
the software. Use cases are also known as functional requirements. In addition to use cases, the
SRS also contains non-functional (or supplementary) requirements. Nonfunctional requirements
are requirements which impose constraints on the design or implementation (such as
performance engineering requirements, quality standards, or design constraints)
Characteristics of good SRS document: -Some of the identified desirable qualities of the
SRS documents are the following-
• Concise- The SRS document should be concise and at the same time unambiguous,
consistent, and complete. An SRS is unambiguous if and only if every requirement stated
has one and only one interpretation.
• Structured- The SRS document should be well-structured. A well-structured document
is easy to understand and modify. In practice, the SRS document undergoes several
revisions to cope with the customer requirements.
• Black-box view- It should only specify what the system should do and refrain from
stating how to do it. This means that the SRS document should specify the external
behaviour of the system and not discuss implementation issues.
• Conceptual integrity- The SRS document should exhibit conceptual integrity so that the
reader can easily understand the contents.
• Verifiable- All requirements of the system as documented in the SRS document should
be verifiable. This means that it should be possible to determine whether or not
requirements have been met in an implementation.
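To make the "verifiable" point concrete, here is a small sketch (the requirement wording, the 2-second limit and the run_query stub are all invented for illustration) showing how a quantified requirement can be checked mechanically against an implementation:

import time

def run_query():
    # Stand-in for the real implementation under test.
    time.sleep(0.1)
    return ["result"]

def test_query_response_time():
    # Hypothetical requirement R-12: "A catalogue query shall complete within 2 seconds."
    start = time.perf_counter()
    run_query()
    elapsed = time.perf_counter() - start
    assert elapsed <= 2.0, f"R-12 violated: query took {elapsed:.2f}s"

test_query_response_time()
print("Requirement R-12 is met by this implementation.")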
System flowcharts
System flowcharts are a way of displaying how data flows in a system and how decisions are
made to control events.
To illustrate this, symbols are used. They are connected together to show what happens to data
and where it goes. The basic ones include the terminator (start/stop), process, decision,
input/output and flow line (arrow) symbols.
Note that system flow charts are very similar to data flow charts. Data flow charts do not include
decisions, they just show the path that data takes, where it is held, processed, and then output.
This system flowchart is a diagram for a 'cruise control' for a car. The cruise control keeps the
car at a steady speed that has been set by the driver.
The flowchart shows what the outcome is if the car is going too fast or too slow. The system is
designed to add fuel, or take it away and so keep the car's speed constant. The output (the car's
new speed) is then fed back into the system via the speed sensor.
Other examples of control systems that can be modelled in this way include:
aircraft control
central heating
automatic washing machines
booking systems for airlines
Types of flowchart
Sterneckert (2003) suggested that flowcharts can be modeled from the perspective of different
user groups (such as managers, system analysts and clerks) and that there are four general types:
Document flowcharts, showing controls over a document-flow through a system
Data flowcharts, showing controls over a data-flow in a system
System flowcharts, showing controls at a physical or resource level
Program flowcharts, showing the controls in a program within a system
Notice that every type of flowchart focuses on some kind of control, rather than on the particular
flow itself.
In addition, many diagram techniques exist that are similar to flowcharts but carry a different
name, such as UML activity diagrams.
Common Shapes
The following are some of the commonly used shapes used in flowcharts. Generally, flowcharts
flow from top to bottom and left to right.
A typical flowchart from older basic computer science textbooks may have the following kinds
of symbols:
Labeled connectors
Represented by an identifying label inside a circle. Labeled connectors are used in complex or
multi-sheet diagrams to substitute for arrows. For each label, the "outflow" connector must
always be unique, but there may be any number of "inflow" connectors. In this case, a junction in
control flow is implied.
Concurrency symbol
Represented by a double transverse line with any number of entry and exit arrows. These
symbols are used whenever two or more control flows must operate simultaneously. The exit
flows are activated concurrently, when all of the entry flows have reached the concurrency
symbol. A concurrency symbol with a single entry flow is a fork; one with a single exit flow is a
join.
Data-flow extensions
A number of symbols have been standardized for data flow diagrams to represent data flow,
rather than control flow. These symbols may also be used in control flowcharts (e.g. to substitute
for the parallelogram symbol).
Case tools
Computer-aided software engineering (CASE) is the domain of software tools used to design
and implement applications. CASE tools are similar to and were partly inspired by Computer
Aided Design (CAD) tools used to design hardware products. CASE tools are used to develop
software that is high-quality, defect-free, and maintainable. CASE software is often associated
with methodologies for the development of information systems together with automated tools
that can be used in the software development process.
Tools
CASE tools support specific tasks in the software development life-cycle. They can be divided
into the following categories:
1. Business and Analysis modeling. Graphical modeling tools. E.g., E/R modeling, object
modeling, etc.
2. Development. Design and construction phases of the life-cycle. Debugging environments.
E.g., GNU Debugger.
3. Verification and validation. Analyze code and specifications for correctness,
performance, etc.
4. Configuration management. Control the check-in and check-out of repository objects and
files. E.g., SCCS, CMS.
5. Metrics and measurement. Analyze code for complexity, modularity (e.g., no "go to's"),
performance, etc.
6. Project management. Manage project plans, task assignments, scheduling.
Another common way to distinguish CASE tools is the distinction between Upper CASE and
Lower CASE. Upper CASE Tools support business and analysis modeling. They support
traditional diagrammatic languages such as ER diagrams, Data flow diagram, Structure charts,
Decision Trees, Decision tables, etc. Lower CASE Tools support development activities, such as
physical design, debugging, construction, testing, component integration, maintenance, and
reverse engineering. All other activities span the entire life-cycle and apply equally to upper and
lower CASE.
Workbenches
Workbenches integrate two or more CASE tools and support specific software-process activities.
Hence they achieve:
a homogeneous and consistent interface (presentation integration)
seamless integration of tools and tool chains (control and data integration)
Environments
1. Toolkits. Loosely coupled collections of tools. These typically build on operating system
workbenches such as the Unix Programmer's Workbench or the VMS VAX set. They
typically perform integration via piping or some other basic mechanism to share data and
pass control. The strength of easy integration is also one of the drawbacks. Simple
passing of parameters via technologies such as shell scripting can't provide the kind of
sophisticated integration that a common repository database can.
2. Fourth generation. These environments are also known as 4GL standing for fourth
generation language environments due to the fact that the early environments were
designed around specific languages such as Visual Basic. They were the first
environments to provide deep integration of multiple tools. Typically these environments
were focused on specific types of applications. For example, user-interface driven
applications that did standard atomic transactions to a relational database. Examples are
Informix 4GL, and Focus.
3. Language-centered. Environments based on a single, often object-oriented, language such
as the Symbolics Lisp Genera environment or VisualWorks Smalltalk from ParcPlace. In
these environments all the operating system resources were objects in the object-oriented
language. This provides powerful debugging and graphical opportunities but the code
developed is mostly limited to the specific language. For this reason, these environments
were mostly a niche within CASE. Their use was mostly for prototyping and R&D
projects. A common core idea for these environments was the model-view-controller user
interface that facilitated keeping multiple presentations of the same design consistent
with the underlying model. The MVC architecture was adopted by the other types of
CASE environments as well as many of the applications that were built with them.
4. Integrated. These environments are an example of what most IT people tend to think of
first when they think of CASE. Environments such as IBM's AD/Cycle, Andersen
Consulting's FOUNDATION, the ICL CADES system, and DEC Cohesion. These
environments attempt to cover the complete life-cycle from analysis to maintenance and
provide an integrated database repository for storing all artifacts of the software process.
In practice, the distinction between workbenches and environments was flexible. Visual Basic
for example was a programming workbench but was also considered a 4GL environment by
many. The features that distinguished workbenches from environments were deep integration via
a shared repository or common language and some kind of methodology (integrated and process-
centered environments) or domain (4GL) specificity.
Some of the most significant risk factors for organizations adopting CASE technology include:
Functional decomposition
Functional Decomposition is the process of taking a complex process and breaking it down into
its smaller, simpler parts. For instance, think about using an ATM. You could decompose the
process into steps such as inserting your card, entering your PIN, choosing a transaction and
collecting your cash and receipt.
You can think of programming the same way. Think of the software running that ATM: a
card-reading module, a PIN-verification module, a transaction-processing module, a
cash-dispensing module and a receipt-printing module.
Each of which can be broken down further. Once you've reached the most decomposed pieces of
a subsystem, you can think about how to start coding those pieces. You then compose those
small parts into the greater whole.
The benefit of functional decomposition is that once you start coding, you are working on the
simplest components you can possibly work with for your application. Therefore developing and
testing those components becomes much easier (not to mention you are better able to architect
your code and project to fit your needs).
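A minimal sketch of this idea in code (the ATM function names and stubbed behaviour are illustrative assumptions, not a real ATM design): the top-level operation is composed from smaller parts that can be developed and tested separately.

def read_card() -> str:
    return "1234-5678"                     # stubbed card number

def validate_pin(card: str, pin: str) -> bool:
    return pin == "0000"                   # stubbed PIN check

def dispense_cash(amount: int) -> None:
    print(f"Dispensing {amount}")

def print_receipt(card: str, amount: int) -> None:
    print(f"Receipt: {amount} withdrawn from card {card}")

def withdraw(pin: str, amount: int) -> None:
    # Top-level function composed from the decomposed parts above.
    card = read_card()
    if not validate_pin(card, pin):
        print("PIN rejected")
        return
    dispense_cash(amount)
    print_receipt(card, amount)

withdraw("0000", 50)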
Overview
More generally, functional decomposition in computer science is a technique for mastering the
complexity of the function of a model. A functional model of a system is thereby replaced by a
series of functional models of subsystems.
Decomposition topics
Decomposition paradigm
Most decomposition paradigms suggest breaking down a program into parts so as to minimize
the static dependencies among those parts, and to maximize the cohesiveness of each part. Some
popular decomposition paradigms are the procedural, modules, abstract data type and object
oriented ones.
The concept of decomposition paradigm is entirely independent and different from that of model
of computation, but the two are often confused, most often in the cases of the functional model
of computation being confused with procedural decomposition, and of the actor model of
computation being confused with object oriented decomposition.
Decomposition Structure
Modules design
Terminology
The term package is sometimes used instead of module (as in Dart, Go, or Java). In other
implementations this is a distinct concept; in Python a package is a collection of modules, while
in the upcoming Java 9 a new concept of module (a collection of packages with enhanced access
control) is planned to be introduced.
Furthermore, the term "package" has other uses in software. A component is a similar concept,
but typically refers to a higher level; a component is a piece of a whole system, while a module is
a piece of an individual program. The scale of the term "module" varies significantly between
languages; in Python it is very small-scale and each file is a module, while in Java 9 it is planned
to be large-scale, where a module is a collection of packages, which are in turn collections of
files.
Module design, which is also called "low-level design", has to consider the programming
language that will be used for implementation. This will determine the kind of interfaces you can
use and a number of other subjects.
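As a small, hedged illustration of the Python terminology mentioned above (the package and file names are invented), the snippet below creates a tiny package on disk and imports one of its modules, just to show the module/package relationship:

import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "billing")              # billing/ is the package
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "invoices.py"), "w") as f:   # invoices.py is a module
    f.write("def total_due(customer_id):\n    return 100 * customer_id\n")

sys.path.insert(0, root)
invoices = importlib.import_module("billing.invoices")
print(invoices.total_due(customer_id=2))         # 200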
Structured walkthrough
A walkthrough differs from software technical reviews in its openness of structure and its
objective of familiarization. It differs from software inspection in its ability to suggest direct
alterations to the product reviewed, its lack of a direct focus on training and process
improvement, and its omission of process and product measurement.
In general, a walkthrough has one or two broad objectives: to gain feedback about the technical
quality or content of the document; and/or to familiarize the audience with the content.
A walkthrough is normally organized and directed by the author of the technical document. Any
combination of interested or technically qualified personnel (from within or outside the project)
may be included as seems appropriate.
The Author, who presents the software product in a step-by-step manner at the
walkthrough meeting, and is probably responsible for completing most action items;
The Walkthrough Leader, who conducts the walkthrough, handles administrative tasks,
and ensures orderly conduct (and who is often the Author); and
The Recorder, who notes all anomalies (potential defects), decisions, and action items
identified during the walkthrough meetings.
Structured walkthroughs are usually NOT used for technical discussions or to discuss the
solutions for the issues found. As explained, the aim is to detect error and not to correct errors.
When the walkthrough is finished, the author of the output is responsible for fixing the issues.
Benefits:
Saves time and money as defects are found and rectified very early in the lifecycle.
It notifies the project management team about the progress of the development process.
Presenter - The presenter usually develops the agenda for the walkthrough and presents
the output being reviewed.
Moderator - The moderator facilitates the walkthrough session, ensures the walkthrough
agenda is followed, and encourages all the reviewers to participate.
Scribe - The scribe is the recorder of the structured walkthrough outcomes who records
the issues identified and any other technical comments, suggestions, and unresolved
questions.
Decision tables
Decision tables are a precise yet compact way to model complex rule sets and their
corresponding actions.
Decision tables, like flowcharts and if-then-else and switch-case statements, associate conditions
with actions to perform, but in many cases do so in a more elegant way.
In the 1960s and 1970s a range of "decision table based" languages such as Filetab were popular
for business programming.
Structure
Aside from the basic four quadrant structure, decision tables vary widely in the way the
condition alternatives and action entries are represented. Some decision tables use simple
true/false values to represent the alternatives to a condition (akin to if-then-else), other tables
may use numbered alternatives (akin to switch-case), and some tables even use fuzzy logic or
probabilistic representations for condition alternatives. In a similar way, action entries can
simply represent whether an action is to be performed (check the actions to perform), or in more
advanced decision tables, the sequencing of actions to perform (number the actions to perform).
Example
The limited-entry decision table is the simplest to describe. The condition alternatives are simple
Boolean values, and the action entries are check-marks, representing which of the actions in a
given column are to be performed.
A technical support company writes a decision table to diagnose printer problems based upon
symptoms described to them over the phone from their clients.
Printer troubleshooter
  Rule                                      1  2  3  4  5  6  7  8
Conditions
  Printer does not print                    Y  Y  Y  Y  N  N  N  N
  A red light is flashing                   Y  Y  N  N  Y  Y  N  N
  Printer is unrecognized                   Y  N  Y  N  Y  N  Y  N
Actions
  Check the power cable                     -  -  X  -  -  -  -  -
  Check the printer-computer cable          X  -  X  -  -  -  -  -
  Ensure printer software is installed      X  -  X  -  X  -  X  -
  Check/replace ink                         X  X  -  -  X  X  -  -
  Check for paper jam                       -  X  -  X  -  -  -  -
Decision tables, especially when coupled with the use of a domain-specific language, allow
developers and policy experts to work from the same information, the decision tables themselves.
Tools to render nested if statements from traditional programming languages into decision tables
can also be used as a debugging tool.
Decision tables have proven to be easier to understand and review than code, and have been used
extensively and successfully to produce specifications for complex systems.
Decision tables can be, and often are, embedded within computer programs and used to 'drive'
the logic of the program. A simple example might be a lookup table containing a range of
possible input values and a function pointer to the section of code to process that input.
Multiple conditions can be coded for in similar manner to encapsulate the entire program logic in
the form of an 'executable' decision table or control table.
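The sketch below shows one way the printer troubleshooter table above could be made "executable" (a minimal illustration, assuming the rule layout shown in the reconstructed table): each combination of condition values is mapped to the actions marked for that rule.

RULES = {
    # (does not print, red light flashing, unrecognized): actions marked for that rule
    (True,  True,  True ): ["Check the printer-computer cable",
                            "Ensure printer software is installed",
                            "Check/replace ink"],
    (True,  True,  False): ["Check/replace ink", "Check for paper jam"],
    (True,  False, True ): ["Check the power cable",
                            "Check the printer-computer cable",
                            "Ensure printer software is installed"],
    (True,  False, False): ["Check for paper jam"],
    (False, True,  True ): ["Ensure printer software is installed",
                            "Check/replace ink"],
    (False, True,  False): ["Check/replace ink"],
    (False, False, True ): ["Ensure printer software is installed"],
    (False, False, False): [],
}

def diagnose(does_not_print: bool, red_light: bool, unrecognized: bool) -> list:
    # Look up the rule column matching the reported symptoms.
    return RULES[(does_not_print, red_light, unrecognized)]

print(diagnose(True, False, True))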
Implementations
A decision table provides a handy and compact way to represent complex business logic. In a
decision table, business logic is well divided into conditions, actions (decisions) and rules for
representing the various components that form the business logic.
Overview
The organization chart is a diagram showing graphically the relation of one official to another, or
others, of a company. It is also used to show the relation of one department to another, or others,
or of one function of an organization to another, or others. This chart is valuable in that it enables
one to visualize a complete organization, by means of the picture it presents.
Hierarchical
Matrix
Flat (also known as Horizontal)
There is no accepted form for making organization charts other than putting the principal
official, department or function first, or at the head of the sheet, and the others below, in the
order of their rank. The titles of officials and sometimes their names are enclosed in boxes or
circles. Lines are generally drawn from one box or circle to another to show the relation of one
official or department to the others.
History
The Scottish-American engineer Daniel McCallum (1815–1878) is credited for creating the first
organizational charts of American business around 1854. This chart was drawn by George Holt
Henshaw.
The term "organization chart" came into use in the early twentieth century. In 1914
Brintondeclared "organization charts are not nearly so widely used as they should be. As
organization charts are an excellent example of the division of a total into its components, a
number of examples are given here in the hope that the presentation of organization charts in
convenient form will lead to their more widespread use." In those years industrial engineers
promoted the use of organization charts.
In the 1920s a survey revealed that organizational charts were still not common among ordinary
business concerns, but they were beginning to find their way into administrative and business
enterprises.
Limitations
An example of a "line relationship" (or chain of command in military relationships) in this chart
would be between the general and the two colonels - the colonels are directly responsible to the
general.
An example of a "lateral relationship" in this chart would be between "Captain A", and "Captain
B" who both work on level and both report to the "Colonel B".
Various shapes such as rectangles, squares, triangles, circles can be used to indicate different
roles. Color can be used both for shape borders and connection lines to indicate differences in
authority and responsibility, and possibly formal, advisory and informal links between people. A
department or position yet to be created or currently vacant might be shown as a shape with a
dotted outline. Importance of the position may be shown both with a change in size of the shape
in addition to its vertical placement on the chart.
A structure chart (module chart, hierarchy chart) is a graphic depiction of the decomposition
of a problem. It is a tool to aid in software design. It is particularly helpful on large
problems.
A structure chart illustrates the partitioning of a problem into subproblems and shows the
hierarchical relationships among the parts. A classic "organization chart" for a company is an
example of a structure chart.
The top of the chart is a box representing the entire problem, the bottom of the chart shows a
number of boxes representing the less complicated subproblems. (Left-right position on the
chart is irrelevant.)
A structure chart is NOT a flowchart. It has nothing to do with the logical sequence of tasks. It
does NOT show the order in which tasks are performed. It does NOT illustrate an algorithm.
Each block represents some function in the system, and thus should contain a verb phrase, e.g.
"Print report heading."
There is a prominent difference between DFD and Flowchart. The flowchart depicts flow of
control in program modules. DFDs depict flow of data in the system at various levels. DFD does
not contain any control or branch elements.
Logical DFD - This type of DFD concentrates on the system process and flow of data in
the system. For example, in a banking software system, how data is moved between
different entities.
Physical DFD - This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and close to the implementation.
DFD Components
DFD can represent Source, destination, storage and flow of data using the following set of
components -
Entities - Entities are source and destination of information data. Entities are represented
by a rectangle with their respective names.
Process - Activities and actions taken on the data are represented by circles or
rounded-edged rectangles.
Data Storage - There are two variants of data storage: it can either be represented as a
rectangle with both smaller sides absent or as an open-sided rectangle with only one
side missing.
Data Flow - Movement of data is shown by pointed arrows. Data movement is shown
from the base of arrow as its source towards head of the arrow as destination.
Levels of DFD
Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which depicts the
entire information system as one diagram concealing all the underlying details. Level 0
DFDs are also known as context level DFDs.
Higher level DFDs can be transformed into more specific lower level DFDs with a deeper level of
understanding, until the desired level of specification is achieved.
Data Flow Diagrams (DFD) helps us in identifying existing business processes. It is a technique
we benefit from particularly before we go through business process re-engineering.
At its simplest, a data flow diagram looks at how data flows through a system. It concerns things
like where the data will come from and go to as well as where it will be stored. But you won't
find information about the processing timing (e.g. whether the processes happen in sequence or
in parallel).
We usually begin with drawing a context diagram, a simple representation of the whole system.
To elaborate further from that, we drill down to a level 1 diagram with additional information
about the major functions of the system. This could continue to evolve to become a level 2
diagram when further analysis is required. Progression to level 3, 4 and so on is possible but
anything beyond level 3 is not very common. Please bear in mind that the level of detail asked
for depends on your process change plan.
Diagram Notations
Now we'd like to briefly introduce to you a few diagram notations which you'll see in the tutorial
below.
External Entity
An external entity can represent a human, system or subsystem. It is where certain data comes
from or goes to. It is external to the system we study, in terms of the business process. For this
reason, people used to draw external entities on the edge of a diagram.
A process is a business activity or function where the manipulation and transformation of data
takes place. A process can be decomposed to finer level of details, for representing how data is
being processed within the process.
Data Store
A data store represents the storage of persistent data required and/or produced by the process.
Here are some examples of data stores: membership forms, database table, etc.
Data Flow
A data flow represents the flow of information, with its direction represented by an arrowhead
shown at the end(s) of the flow connector.
In this tutorial we will show you how to draw a context diagram, along with a level 1 diagram.
Note: The software we are using here is Visual Paradigm Standard Edition. You are welcome to
download a free 30-dayevaluation copy of Visual Paradigm to walk through the example below.
No registration, email address or obligation is required.
1. To create new DFD, select Diagram > New from the toolbar.
2. In the New Diagram window, select Data Flow Diagram and click Next.
3. Enter Context as diagram name and click OK to confirm.
5. Next, let's create an external entity. Place your mouse pointer over System. Press and
drag out the Resource Catalog button at the top right.
6. Release the mouse button and select Bidirectional Data Flow -> External Entity from
Resource Catalog.
8. Now we'll model the database accessed by the system. Use Resource Catalog to create a
Data Store from System, with a bidirectional data flow in between.
10. Create two more data stores, Customer and Transaction, as shown below. We have just
completed the Context diagram.
1. Instead of creating another diagram from scratch, we will decompose the System process
to form a new DFD. Right click on System and select Decompose from the popup menu.
The remaining steps in this section are about connecting the model elements in the diagram. For
example, Customer provides order information when placing an order for processing.
1. Place your mouse pointer over Customer. Drag out the Resource Catalog icon and
release your mouse button on Process Order.
4. Meanwhile the Process Order process also receives customer information from the
database in order to process the order. Use Resource Catalog to create a data flow from
Customer to Process Order.
Optional: You can label the data flow "customer information" if you like. But since this
data flow is quite self-explanatory visually, we are going to omit it here.
5. By combining the order information from Customer (external entity) and the customer
information from Customer (data store), Process Order (process) then creates a
transaction record in the database. Create a data flow from Process Order to Transaction.
6. Once a transaction is stored, the shipping process follows. Therefore, create a data flow
from Process Order (process) to Ship Good (process).
7. Ship Good needs to read the transaction information (i.e. the order) in order to pack the
right product for delivery. Create a data flow from Transaction (data store) to Ship Good
(process).
Note: If there is a lack of space, feel free to move the shapes around to make room.
9. Ship Good then updates the Inventory database to reflect the goods shipped. Create a data
flow from Ship Good (process) to Inventory (data store). Name it updated product
record.
10. Once the order arrives in the customer's hands, the Issue Receipt process begins. In it, a
receipt is prepared based on the transaction record stored in the database. So let's create a
data flow from Transaction (data store) to Issue Receipt (process).
11. Then a receipt is issued to the customer. Let's create a data flow from Issue Receipt
(process) to Customer (external entity). Name the data flow receipt.
The completed diagram above looks a bit rigid and busy. In this section we are going to make
some changes to the connectors to increase readability.
2. Move the shapes around so that the diagram looks less crowded.
According to the popular guide Unified Process, OOAD in modern software engineering is best
conducted in an iterative and incremental way. Iteration by iteration, the outputs of OOAD
activities, analysis models for OOA and design models for OOD respectively, will be refined and
evolve continuously, driven by key factors like risks and business value.
The software life cycle is typically divided up into stages going from abstract descriptions of the
problem to designs then to code and testing and finally to deployment. The earliest stages of this
process are analysis and design. The analysis phase is also often called "requirements
acquisition".
OOAD
OOAD is conducted in an iterative and incremental manner, as formulated by the Unified
Process.
The alternative to waterfall models is iterative models. This distinction was popularized by Barry
Boehm in a very influential paper on his Spiral Model for iterative software development. With
iterative models it is possible to do work in various stages of the model in parallel. So for
example it is possible—and not seen as a source of error—to work on analysis, design, and even
code all on the same day and to have issues from one stage impact issues from another. The
emphasis on iterative models is that software development is a knowledge-intensive process and
that things like analysis can't really be completely understood without understanding design
issues, that coding issues can affect design, that testing can yield information about how the code
or even the design should be modified, etc.
The object-oriented paradigm emphasizes modularity and re-usability. The goal of an object-
oriented approach is to satisfy the "open closed principle". A module is open if it supports
extension, or if the module provides standardized ways to add new behaviors or describe new
states. In the object-oriented paradigm this is often accomplished by creating a new subclass of
an existing class. A module is closed if it has a well-defined stable interface that all other
modules must use and that limits the interaction and potential errors that can be introduced into
one module by changes in another. In the object-oriented paradigm this is accomplished by
defining methods that invoke services on objects. Methods can be either public or private, i.e.,
certain behaviors that are unique to the object are not exposed to other objects. This reduces a
source of many common errors in computer programming.
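A brief sketch of the open-closed idea in code (the class names are invented for illustration, not taken from the text): the module is open because new shapes can be added by subclassing, and closed because callers depend only on the stable public method area().

import math

class Shape:
    def area(self) -> float:          # stable public interface other modules rely on
        raise NotImplementedError

class Circle(Shape):                   # extension by adding a new subclass
    def __init__(self, radius: float):
        self._radius = radius          # internal state, private by convention
    def area(self) -> float:
        return math.pi * self._radius ** 2

class Square(Shape):                   # further extension, with no change to callers
    def __init__(self, side: float):
        self._side = side
    def area(self) -> float:
        return self._side ** 2

def total_area(shapes: list) -> float:
    # Depends only on the closed interface, never on concrete subclasses.
    return sum(s.area() for s in shapes)

print(round(total_area([Circle(1.0), Square(2.0)]), 2))   # 7.14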
The software life cycle is typically divided up into stages going from abstract descriptions of the
problem to designs then to code and testing and finally to deployment. The earliest stages of this
process are analysis and design. The distinction between analysis and design is often described
as "what vs. how". In analysis developers work with users and domain experts to define what the
system is supposed to do. Implementation details are supposed to be mostly or totally (depending
on the particular method) ignored at this phase. The goal of the analysis phase is to create a
functional model of the system regardless of constraints such as appropriate technology. In
object-oriented analysis this is typically done via use cases and abstract definitions of the most
important objects. The subsequent design phase refines the analysis model and makes the needed
technology and other implementation choices. In object-oriented design the emphasis is on
describing the various objects, their data, behavior, and interactions. The design model should
have all the details required so that programmers can implement the design in code.
The purpose of any analysis activity in the software life-cycle is to create a model of the system's
functional requirements that is independent of implementation constraints.
The main difference between object-oriented analysis and other forms of analysis is that by the
object-oriented approach we organize requirements around objects, which integrate both
behaviors (processes) and states (data) modeled after real world objects that the system interacts
with. In other or traditional analysis methodologies, the two aspects: processes and data are
considered separately. For example, data may be modeled by ER diagrams, and behaviors by
flow charts or structure charts.
Common models used in OOA are use cases and object models. Use cases describe scenarios for
standard domain functions that the system must accomplish. Object models describe the names,
class relations (e.g. Circle is a subclass of Shape), operations, and properties of the main objects.
User-interface mockups or prototypes can also be created to help understanding.
Object-oriented design
Important topics during OOD also include the design of software architectures by applying
architectural patterns and design patterns with object-oriented design principles.
Object-oriented modeling
Object-oriented modeling typically divides into two aspects of work: the modeling of dynamic
behaviors like business processes and use cases, and the modeling of static structures like classes
and components. OOA and OOD are the two distinct abstract levels (i.e. the analysis level and
the design level) during OOM. The Unified Modeling Language (UML) and SysML are the two
popular international standard languages used for object-oriented modeling.
Modeling helps coding. A goal of most modern software methodologies is to first address "what"
questions and then address "how" questions, i.e. first determine the functionality the system is to
provide without consideration of implementation constraints, and then consider how to make
specific solutions to these abstract requirements, and refine them into detailed designs and codes
by constraints such as technology and budget. Object-oriented modeling enables this by
producing abstract and accessible descriptions of both system requirements and designs, i.e.
models that define their essential structures and behaviors like processes and objects, which are
important and valuable development assets with higher abstraction levels above concrete and
complex source code.
SOFTWARE QUALITY
Software Quality Control is the set of procedures used by organizations to ensure that a
software product will meet its quality goals at the best value to the customer, and to continually
improve the organization’s ability to produce software products in the future.
Definition 2
Software Quality Control is a function that checks whether a software component or supporting
artifact meets requirements, or is "fit for use". Software Quality Control is commonly referred to
as Testing.
Check that assumptions and criteria for the selection of data and the different factors
related to data are documented.
Check for transcription errors in data input and reference.
Check the integrity of database files.
Check for consistency in data.
Check that the movement of inventory data among processing steps is correct.
Check for uncertainties in data, database files etc.
Undertake review of internal documentation.
Check methodological and data changes resulting in recalculations.
Undertake completeness checks.
Compare Results to previous Results.
These specified procedures and outlined requirements lead to the idea of Verification and
Validation and software testing.
It is distinct from software quality assurance which encompasses processes and standards for
ongoing maintenance of high quality of products, e.g. software deliverables, documentation and
processes - avoiding defects - whereas software quality control is a validation of artifacts'
compliance against established criteria - finding defects.
SQA encompasses the entire software development process, which includes processes such as
requirements definition, software design, coding, source code control, code reviews, software
configuration management, testing, release management, and product integration. SQA is
organized into goals, commitments, abilities, activities, measurements, and verifications
Many people still use the term Quality Assurance (QA) and Quality Control (QC)
interchangeably but this should be discouraged.
Reliability - A set of attributes that bear on the capability of software to maintain its level of
performance under stated conditions for a stated period of time.
o Maturity
o Recoverability
Efficiency- A set of attributes that bear on the relationship between the level of performance
of the software and the amount of resources used, under stated conditions.
o Time Behavior
o Resource Behavior
Maintainability- A set of attributes that bear on the effort needed to make specified
modifications.
o Stability
o Analyzability
o Changeability
o Testability
Portability- A set of attributes that bear on the ability of software to be transferred from one
environment to another.
o Installability
o Replaceability
o Adaptability
General principles for selecting product measures and metrics are discussed in this section. The
generic measurement process activities parallel the scientific method taught in natural science
classes (formulation, collection, analysis, interpretation, feedback).
If the measurement process is too time consuming, no data will ever be collected during the
development process. Metrics should be easy to compute or developers will not take the time to
compute them.
The tricky part is that in addition to being easy to compute, the metrics need to be perceived as
being important for predicting whether product quality can be improved or not.
Measurement Principles
Measurement Process
Formulation. The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
Collection. The mechanism used to accumulate data required to derive the formulated
metrics.
Analysis. The computation of metrics and the application of mathematical tools.
Interpretation. The evaluation of metrics results in an effort to gain insight into the
quality of the representation.
Feedback. Recommendations derived from the interpretation of product metrics
transmitted to the software team.
S/W metrics will be useful only if they are characterized effectively and validated so that their
worth is proven.
Simple and computable. It should be relatively easy to learn how to derive the metric, and
its computation should not demand inordinate effort or time
Empirically and intuitively persuasive. The metric should satisfy the engineer’s intuitive
notions about the product attribute under consideration
Consistent and objective. The metric should always yield results that are unambiguous.
Consistent in its use of units and dimensions. The mathematical computation of the
metric should use measures that do not lead to bizarre combinations of units.
Programming language independent. Metrics should be based on the analysis model, the
design model, or the structure of the program itself.
An effective mechanism for quality feedback. That is, the metric should provide a
software engineer with information that can lead to a higher quality end product
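As a small, hedged example of a metric that is simple, computable and programming-language independent (the metric itself, comment density, is chosen here purely for illustration):

def comment_density(source: str, comment_marker: str = "#") -> float:
    # Comment lines divided by total non-blank lines of a source text.
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith(comment_marker))
    return comments / len(lines)

sample = """# compute total
total = 0
# add items
for x in (1, 2, 3):
    total += x
"""
print(comment_density(sample))   # 0.4 (2 comment lines out of 5)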
The function point metric (FP), first proposed by Albrecht [ALB79], can be used
effectively as a means for measuring the functionality delivered by a system.
Function points are derived using an empirical relationship based on countable (direct)
measures of software's information domain and assessments of software complexity
Information domain values are defined in the following manner:
o number of external inputs (EIs)
o number of external outputs (EOs)
o number of external inquiries (EQs)
o number of internal logical files (ILFs)
o number of external interface files (EIFs)
The counts are weighted by complexity and summed to give a count total, from which the
function point value is computed as FP = count total x [0.65 + 0.01 x sum(Fi)], where the Fi
(i = 1 to 14) are value adjustment factors.
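A worked sketch of that computation (the information domain counts and the value-adjustment total below are invented; the weights shown are the commonly quoted "average" complexity weights for Albrecht-style counting):

counts = {"EI": 10, "EO": 7, "EQ": 4, "ILF": 5, "EIF": 2}        # assumed counts
avg_weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}    # average weights

count_total = sum(counts[k] * avg_weights[k] for k in counts)

sum_fi = 46              # assumed total of the 14 value adjustment factors (each rated 0-5)
fp = count_total * (0.65 + 0.01 * sum_fi)

print(count_total)       # 155
print(round(fp, 2))      # 172.05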
Metrics for the Design Model
Design metrics for computer S/W, like all other S/W metrics, are not perfect. And yet,
design without measurement is an unacceptable alternative.
Architectural Design Metrics
Cohesion: The degree to which all operations work together to achieve a single, well-
defined purpose
Primitiveness: Applied to both operations and classes, the degree to which an operation
is atomic
Similarity: The degree to which two or more classes are similar in terms of their
structure, function, behavior, or purpose
Volatility: Measures the likelihood that a change will occur
Weighted methods per class (WMC): The number of methods and their complexity are a
reasonable indicator of the amount of effort required to implement and test a class.
Depth of the inheritance tree (DIT): The maximum length from the node to the root of the tree.
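A rough sketch of how these two metrics could be computed by simple introspection (the Shape/Circle hierarchy is invented, every method is weighted as 1, and real metric tools are considerably more careful):

class Shape:                       # small invented hierarchy for the example
    def area(self): ...
    def describe(self): ...

class Circle(Shape):
    def area(self): ...

def wmc(cls) -> int:
    # Weighted Methods per Class, with each method given weight 1.
    return sum(1 for name, value in vars(cls).items()
               if callable(value) and not name.startswith("__"))

def dit(cls) -> int:
    # Depth of Inheritance Tree: number of ancestors up to the root (object).
    return len(cls.__mro__) - 1

print(wmc(Shape), wmc(Circle))   # 2 1
print(dit(Shape), dit(Circle))   # 1 2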
A software technical review is a form of peer review in which a team of qualified personnel ...
examines the suitability of the software product for its intended use and identifies discrepancies
from specifications and standards. Technical reviews may also provide recommendations of
alternatives and examination of various alternatives.
"Software product" normally refers to some kind of technical document. This might be a
software design document or program source code, but use cases, business process definitions,
test case specifications, and a variety of other technical documentation, may also be subject to
technical review.
Technical review differs from software walkthroughs in its specific focus on the technical quality
of the product reviewed. It differs from software inspection in its ability to suggest direct
alterations to the product reviewed, and its lack of a direct focus on training and process
improvement.
The term formal technical review is sometimes used to mean a software inspection. A
'Technical Review' may also refer to an acquisition lifecycle event or Design review.
The purpose of a technical review is to arrive at a technically superior version of the work
product reviewed, whether by correction of defects or by recommendation or introduction of
alternative approaches. While the latter aspect may offer facilities that software inspection lacks,
there may be a penalty in time lost to technical discussions or disputes which may be beyond the
capacity of some participants.
IEEE 1028 recommends the inclusion of participants to fill the following roles:
Process
A formal technical review will follow a series of activities similar to that specified in clause 5 of
IEEE 1028, essentially summarized in the article on software review.
Software Validation
Validation is the process of examining whether or not the software satisfies the user
requirements. It is carried out at the end of the SDLC. If the software matches requirements for
which it was made, it is validated.
Validation ensures the product under development is as per the user requirements.
Validation answers the question: Are we developing the product which provides everything
the user needs from this software?
Validation emphasizes on user requirements.
Software Verification
Verification is the process of confirming if the software is meeting the business requirements,
and is developed adhering to the proper specifications and methodologies.
Errors: These are actual coding mistakes made by developers. In addition, a difference
between the output of the software and the desired output is considered an error.
Fault: When an error exists, a fault occurs. A fault, also known as a bug, is the result of an
error and can cause the system to fail.
Failure: Failure is the inability of the system to perform the desired task. Failure
occurs when a fault exists in the system.
Verification: “Are we building the product right?” The software should conform to its
specification.
Validation: “Are we building the right product?” The software should do what the user really
needs / wants. Types of V & V include:
Static V&V - Software inspections / reviews: where one analyzes static system representations
such as requirements, design, source code, etc.
Dynamic V&V - Software testing: where one executes an implementation of the software to
examine outputs and operational behavior.
a) Defect testing: Tests designed to discover system defects.
b) Statistical testing: Tests designed to assess system reliability and performance under
operational conditions. Makes use of an operational profile.
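A minimal sketch of dynamic V&V in the defect-testing sense (the discount rule and the test values are invented): the implementation is executed with inputs chosen around a boundary, where defects most often hide.

def discount(amount: float) -> float:
    # Apply a 10% discount to orders of 100 or more (illustrative rule).
    return amount * 0.9 if amount >= 100 else amount

# Boundary-value test cases; a defect such as writing > instead of >= would be caught here.
assert discount(99.99) == 99.99
assert discount(100) == 90.0
assert discount(150) == 135.0
print("All defect tests passed.")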
V & V Goals
Verification and validation should establish confidence that the software is “fit for purpose”.
This does NOT usually mean that the software must be completely free of defects. The level of
confidence required depends on at least three factors:
1. Software function / purpose: Safety-critical systems require a much higher level of
confidence than demonstration-of-concept prototypes.
2. User expectations: Users may tolerate shortcomings when the benefits of use are high.
3. Marketing environment: Getting a product to market early may be more important than
finding additional defects.
These involve people examining a system representation (requirements, design, source code,
etc.) with the aim of discovering anomalies and defects. They do not require execution so may be
used before system implementation. They can be more effective than testing carried out after
system implementation.
• Many different defects may be discovered in a single inspection. (In testing, one defect
may mask others so several executions may be required.)
Inspections and testing are complementary in that inspections can be used early with non-
executable entities and with source code at the module and component levels while testing can
validate dynamic behaviour and is the only effective technique at the sub-system and system
code levels. Inspections cannot directly check nonfunctional requirements such as performance,
usability, etc.
Inspection pre-conditions
• Management must accept the fact that inspection will increase costs early in the software
process.
Inspection Process
Inspection Teams
A checklist of common errors should be used to drive individual preparation. The error checklist is
programming-language dependent: the weaker the type checking performed by the compiler, the larger
the checklist. Examples: initialization, constant naming, loop termination, array bounds, etc.
The quality-focused development process referred to below (often called the Cleanroom approach) is characterized by:
• Formal specification
• Static verification using correctness arguments
• Statistical testing to certify program reliability
• NO defect testing!
• Specification team: responsible for developing and maintaining the system specification
• Development team: responsible for developing and verifying the software. The software
is NOT executed or even compiled during this process.
• Certification team: responsible for developing a set of statistical tests to
measure reliability after development.
Results in IBM and elsewhere have been very impressive with very few discovered faults in
delivered systems.
Independent assessment shows that the process is no more expensive than other approaches. It’s
however not clear how this approach can be transferred to an environment with less skilled
engineers.
Cost of quality
Definition
Cost of Quality (COQ) is a measure that quantifies the cost of control/conformance and the cost
of failure of control/non-conformance. In other words, it sums up the costs related to prevention
and detection of defects and the costs due to occurrences of defects.
Definition by ISTQB: cost of quality: The total costs incurred on quality activities and
issues and often split into prevention costs, appraisal costs, internal failure costs and
external failure costs.
Definition by QAI: Money spent beyond expected production costs (labor, materials, and
equipment) to ensure that the product the customer receives is a quality (defect free)
product. The Cost of Quality includes prevention, appraisal, and correction or repair
costs.
Explanation
FORMULA / CALCULATION
The total cost of quality is the sum of the cost of control (conformance) and the cost of failure of control (non-conformance):

Cost of Quality (COQ) = Cost of Control + Cost of Failure of Control

where

Cost of Control = Prevention Cost + Appraisal Cost

and

Cost of Failure of Control = Internal Failure Cost + External Failure Cost
Cost of quality
A very significant question is: does quality assurance add any value? That is, is it worth spending a
lot of money on quality assurance practices? In order to understand the impact of quality
assurance practices, we have to understand the cost of quality (or lack thereof) in a system.
Quality has a direct and indirect cost in the form of cost of prevention, appraisal, and failure.
If we try to prevent problems, obviously we will have to incur cost. This cost includes:
• Quality planning
• Formal technical reviews
• Test equipment
• Training
We will discuss these in more detail in the later sections.
The cost of appraisal includes activities to gain insight into the product condition. It involves in-
process and inter-process inspection and testing.
And finally, failure cost. Failure cost has two components: internal failure cost and external
failure cost. Internal failure cost requires rework, repair, and failure mode analysis. On the other
hand, external failure cost involves cost for complaint resolution, product return and
replacement, help-line support, warranty work, and law suits.
It is trivial to see that cost increases as we go from prevention to detection to internal failure to
external failure. This is demonstrated with the help of the following example:
Let us assume that a total of 7,053 hours were spent inspecting 200,000 lines of code, with the
result that 3,112 potential defects were prevented. Assuming a programmer cost of $40 per hour,
the total cost of preventing 3,112 defects was $282,120, or roughly $91 per defect.
Let us now compare these numbers to the cost of defect removal once the product has been
shipped to the customer. Suppose that there had been no inspections and the programmers had
been extra careful, so that only one defect per 1,000 lines escaped into the product shipment. That
would mean that 200 defects would still have to be fixed in the field. At an estimated cost of
$25,000 per fix, the cost would be $5 million, or approximately 18 times more expensive than the
total cost of defect prevention.
That means, quality translates to cost savings and an improved bottom line.
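The arithmetic in the example above can be laid out as a small program. The following Java sketch simply reproduces the figures quoted in the text ($40 per hour, 7,053 inspection hours, 3,112 prevented defects, 200 shipped defects at $25,000 per field fix); it is illustrative only.

// Illustrative sketch of the cost-of-quality comparison in the worked example.
public class CostOfQualityExample {
    public static void main(String[] args) {
        double inspectionHours = 7053;
        double hourlyRate = 40.0;                 // programmer cost per hour
        int defectsPrevented = 3112;

        double preventionCost = inspectionHours * hourlyRate;               // $282,120
        double costPerPreventedDefect = preventionCost / defectsPrevented;  // ~$91

        int defectsShipped = 200;                 // 1 defect per 1,000 lines in 200,000 lines
        double costPerFieldFix = 25000.0;
        double externalFailureCost = defectsShipped * costPerFieldFix;      // $5,000,000

        System.out.printf("Prevention: $%.0f (about $%.0f per defect)%n",
                preventionCost, costPerPreventedDefect);
        System.out.printf("External failure: $%.0f (about %.0f times more)%n",
                externalFailureCost, externalFailureCost / preventionCost);
    }
}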
SQA Activities
There are two different groups involved in SQA related activities:
• Software engineers who do the technical work
• SQA group who is responsible for QA planning, oversight, record keeping, analysis, and
reporting
Software engineers address quality by applying solid technical methods and measures,
conducting formal and technical reviews, and performing well planned software testing.
The SQA group assists the software team in achieving a high quality product.
The SQA group also reviews software engineering activities to verify compliance with the
defined software process. It identifies, documents, and tracks deviations from the process and
verifies that the corrections have been made. In addition, it audits designated software work
products to verify compliance with those defined as part of the software process. It reviews
selected work products; identifies, documents, and tracks deviations; verifies that corrections
have been made; and reports the results of its work to the project manager.
The basic purpose is to ensure that deviations in software work and work products are
documented and handled according to documented procedures. These deviations may
be encountered in the project plan, process description, applicable standards, or technical work
products. The group records any non-compliance and reports it to senior management;
non-compliant items are recorded and tracked until they are resolved.
Another very important role of the group is to coordinate the control and management of change
and help to collect and analyze software metrics.
SOFTWARE CODING
Coding
• Coding is undertaken once the design phase is complete and the design documents have
been successfully reviewed.
• In the coding phase every module identified and specified in the design document is
independently coded and unit tested.
• Good software development organizations normally require their programmers to adhere
to some well-defined and standard style of coding called coding standards.
• Most software development organizations formulate their own coding standards that suit
them most, and require their engineers to follow these standards rigorously.
Good software development organizations usually develop their own coding standards and
guidelines depending on what best suits their organization and the type of products they develop.
• Representative coding standards
• Representative coding guidelines
Coding Standards
Programmers spend more time reading code than writing code
• They read their own code as well as other programmers’ code.
• Readability is enhanced if some coding conventions are followed by all.
• Coding standards provide these guidelines for programmers.
• They generally cover naming conventions, file organization, and the layout of statements and
declarations.
Coding Guidelines
• Package names should be in lower case (mypackage, edu.iitk.maths)
• Type names should be nouns and start with upper case (Day, DateOfBirth, …)
• Variable names should be nouns in lower case; variables with large scope should have long names;
loop iterators should be i, j, k, …
• Constant names should be in all caps
• Method names should be verbs starting with lower case (e.g. getValue())
• The prefix "is" should be used for Boolean methods; a short illustration follows the list.
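As a minimal illustration of the guidelines above, the following Java sketch applies them to a small hypothetical class. The class, constant and method names are invented for the example and are not prescribed by any particular standard.

package mypackage;                                   // package name in lower case

public class DateOfBirth {                           // type name: capitalized noun
    public static final int MAX_YEAR = 2100;         // constant name in all caps

    private int year;                                // variable name: lower-case noun

    public DateOfBirth(int year) {
        this.year = year;
    }

    public int getValue() {                          // method: verb, lower-case start
        return year;
    }

    public boolean isLeapYear() {                    // Boolean method: "is" prefix
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static int countLeapYears(DateOfBirth[] dates) {
        int count = 0;
        for (int i = 0; i < dates.length; i++) {     // loop iterator named i
            if (dates[i].isLeapYear()) {
                count++;
            }
        }
        return count;
    }
}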
Programming style is a set of rules or guidelines used when writing the source code for a
computer program. It is often claimed that following a particular programming style helps
programmers read and understand source code that conforms to it, and helps them avoid
introducing errors. A classic work on the subject was The Elements of Programming Style,
written in the 1970s and illustrated with examples from the FORTRAN and PL/I languages
prevalent at the time.
The programming style used in a particular program may be derived from the coding
conventions of a company or other computing organization, as well as the preferences of the
author of the code. Programming styles are often designed for a specific programming language
(or language family): style considered good in C source code may not be appropriate for BASIC
source code, and so on. However, some rules are commonly applied to many languages.
Good style is a subjective matter, and is difficult to define. However, there are several elements
common to a large number of programming styles. The issues usually considered as part of
programming style include the layout of the source code, including indentation; the use of white
space around operators and keywords; the capitalization or otherwise of keywords and variable
names; the style and spelling of user-defined identifiers, such as function, procedure and variable
names; and the use and style of comments.
Code appearance
Programming styles commonly deal with the visual appearance of source code, with the goal of
readability. Software has long been available that formats source code automatically, leaving
coders to concentrate on naming, logic, and higher techniques. As a practical point, using a
computer to format source code saves time, and it is possible to then enforce company-wide
standards without debates.
Indentation
Indent styles assist in identifying control flow and blocks of code. In some programming
languages indentation is used to delimit logical blocks of code; correct indentation in these cases
is more than a matter of style. In other languages indentation and white space do not affect
function, although logical and consistent indentation makes code more readable
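The following short Java fragment is a hypothetical illustration of the point above: the indentation mirrors the nesting of the loop and the conditional, so the control flow can be read at a glance.

// Consistent indentation makes each nested block visible at a glance.
public class IndentationDemo {
    public static int countPositives(int[] values) {
        int count = 0;
        for (int value : values) {        // loop body indented under the for
            if (value > 0) {              // if body indented under the if
                count++;
            }
        }
        return count;
    }
}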
Coding styles- Coding guidelines provide only general suggestions regarding the coding style to
be followed.
1) Do not use a coding style that is too clever or too difficult to understand: Code should be
easy to understand. Clever coding can obscure meaning of the code and hamper
understanding. It also makes maintenance difficult.
Very early in the development of computers attempts were made to make programming easier by
reducing the amount of knowledge of the internal workings of the computer that was needed to
write programs. If programs could be presented in a language that was more familiar to the
person solving the problem, then fewer mistakes would be made. High-level programming
languages allow the specification of a problem solution in terms closer to those used by human
beings. These languages were designed to make programming far easier, less error-prone and to
remove the programmer from having to know the details of the internal structure of a particular
computer. These high-level languages were much closer to human language. One of the first of
these languages was Fortran II which was introduced in about 1958. In Fortran II our program
above would be written as:
C=A+B
which is obviously much more readable, quicker to write and less error-prone. As with assembly
languages the computer does not understand these high-level languages directly and hence they
have to be processed by passing them through a program called a compiler which translates
them into internal machine language before they can be executed.
Another advantage accrues from the use of high-level languages if the languages are
standardized by some international body: each manufacturer then produces a compiler to
compile programs that conform to the standard into their own internal machine language, and a
program written to the standard can be moved between different manufacturers' machines.
As with assembly language human time is saved at the expense of the compilation time required
to translate the program to internal machine language. The compilation time used in the
computer is trivial compared with the human time saved, typically seconds as compared with
weeks.
Many high level languages have appeared since Fortran II (and many have also disappeared!),
among the most widely used have been:
Coding standards
Coding conventions are a set of guidelines for a specific programming language that
recommend programming style, practices and methods for each aspect of a piece of program
written in this language. These conventions usually cover file organization, indentation,
comments, declarations, statements, white space, naming conventions, programming practices,
programming principles, programming rules of thumb, architectural best practices, etc. These are
guidelines for software structural quality. Software programmers are highly recommended to
follow these guidelines to help improve the readability of their source code and make software
maintenance easier. Coding conventions are only applicable to the human maintainers and peer
reviewers of a software project. Conventions may be formalized in a documented set of rules that
an entire team or company follows, or may be as informal as the habitual coding practices of an
individual. Coding conventions are not enforced by compilers. As a result, not following some or
all of the rules has no impact on the executable programs created from the source code.
Where coding conventions have been specifically designed to produce high-quality code, and
have then been formally adopted, they then become coding standards. Specific styles,
irrespective of whether they are commonly adopted, do not automatically produce good quality
code. It is only if they are designed to produce good quality code that they actually result in good
quality code being produced, i.e., they must be very logical in every aspect of their design -
every aspect justified and resulting in quality code being produced.
Good procedures, good methodology and good coding standards can be used to drive a project
such that the quality is maximized and the overall development time and development and
maintenance cost is minimized.
User interface
User interface is the front-end application view to which user interacts in order to use the
software. User can manipulate and control the software as well as hardware by means of user
interface. Today, user interface is found at almost every place where digital technology exists,
right from computers, mobile phones, cars, music players, airplanes, ships etc.
User interface is part of the software and is designed in such a way that it is expected to provide the
user insight into the software. UI provides a fundamental platform for human-computer interaction.
UI can be graphical, text-based, audio-video based, depending upon the underlying hardware and
software combination. UI can be hardware or software or a combination of both.The software
becomes more popular if its user interface is:
Attractive
Simple to use
Responsive in short time
Clear to understand
Consistent on all interfacing screens
Command Line Interface (CLI)
CLI was the main tool of interaction with computers until video display monitors came
into existence. CLI is the first choice of many technical users and programmers, and it is the minimum
interface a software can provide to its users.
CLI provides a command prompt, the place where the user types a command and feeds it to the
system. The user needs to remember the syntax of a command and its use. Earlier CLIs were not
programmed to handle user errors effectively. A command is a text-based reference to a set of
instructions which are expected to be executed by the system. There are methods like macros and
scripts that make it easy for the user to operate.
CLI Elements
A text-based command line interface can have the following elements:
Graphical User Interface (GUI)
Graphical User Interface provides the user with graphical means to interact with the system. GUI can
be a combination of both hardware and software. Using GUI, the user interprets the software.
Typically, a GUI is more resource-consuming than a CLI. With advancing technology,
programmers and designers create complex GUI designs that work with more efficiency,
accuracy and speed.
GUI Elements
Every graphical component provides a way to work with the system. A GUI system has
following elements such as:
Radio-button - Displays available options for selection. Only one can be selected among
all offered.
Check-box - Functions similar to list-box. When an option is selected, the box is marked
as checked. Multiple options represented by check boxes can be selected.
List-box - Provides list of available items for selection. More than one item can be
selected.
Sliders
Combo-box
Data-grid
Drop-down list
There are a number of activities performed for designing a user interface. The process of GUI
design and implementation is similar to the SDLC, and any model among the Waterfall, Iterative
or Spiral models can be used for GUI implementation.
GUI Requirement Gathering - The designers may like to have a list of all functional and
non-functional requirements of the GUI. This can be taken from the users and from their existing
software solution.
User Analysis - The designer studies who is going to use the software GUI. The target
audience matters, as the design details change according to the knowledge and
competency level of the user. If the user is technically savvy, an advanced and complex GUI can
be incorporated; for a novice user, more information is included on the how-to of the software.
Task Analysis - Designers have to analyze what tasks are to be done by the software
solution. Here in GUI design, it does not matter how it will be done. Tasks can be represented in
a hierarchical manner, taking one major task and dividing it further into smaller sub-tasks.
There are several tools available using which the designers can create entire GUI on a mouse
click. Some tools can be embedded into the software environment (IDE).
GUI implementation tools provide powerful array of GUI controls. For software customization,
designers can change the code accordingly.There are different segments of GUI tools according
to their different use and platform.
Example
Mobile GUI, Computer GUI, Touch-Screen GUI etc. Here is a list of few tools which come
handy to build GUI:
FLUID
AppInventor (Android)
LucidChart
Wavemaker
Visual Studio
The golden rules for GUI design are described by Shneiderman and Plaisant in their book
Designing the User Interface.
SOFTWARE TESTING
Contrary to popular belief, software testing is not just a single activity. It consists of a series of
activities carried out methodically to help certify your software product. These activities
(stages) constitute the Software Testing Life Cycle (STLC).
Each of these stages has definite entry and exit criteria, activities and deliverables associated
with it.
In an ideal world you will not enter the next stage until the exit criteria for the previous stage are
met. But practically this is not always possible, so in this text we will focus on the activities
and deliverables for the different stages of the STLC.
Software Testing Life Cycle refers to a testing process which has specific steps to be executed in
a definite sequence to ensure that the quality goals have been met. In STLC process, each
activity is carried out in a planned and systematic way. Each phase has different goals and
deliverables. Different organizations have different phases in STLC; however the basis remains
the same.
1. Requirements phase
2. Planning Phase
3. Analysis phase
4. Design Phase
5. Implementation Phase
6. Execution Phase
7. Conclusion Phase
8. Closure Phase
1. Requirement Phase:
During this phase of the STLC, analyze and study the requirements. Hold brainstorming sessions
with other teams and try to find out whether the requirements are testable or not. This phase
helps to identify the scope of the testing. If any feature is not testable, communicate it during this
phase so that a mitigation strategy can be planned.
2. Planning Phase:
In practical scenarios, test planning is the first step of the testing process. In this phase we
identify the activities and resources which will help to meet the testing objectives. During
planning we also try to identify the metrics and the method of gathering and tracking those metrics.
Is test planning done on the basis of the requirements alone? The answer is no. Requirements form
one of the bases, but there are two other very important factors which influence test planning.
These are:
3. Analysis Phase:
This STLC phase defines “WHAT” is to be tested. We basically identify the test conditions
through the requirements document, product risks and other test bases. Each test condition should
be traceable back to a requirement. There are various factors which affect the identification of
test conditions:
We should try to write down the test conditions in a detailed way. For example, for an e-
commerce web application, you can have a test condition as “User should be able to make a
payment”. Or you can detail it out by saying “User should be able to make payment through
NEFT, debit card and credit card”. The most important advantage of writing detailed test
conditions is that it increases test coverage: since the test cases will be written on the basis of
the test conditions, these details will trigger more detailed test cases, which will eventually
increase the coverage. Also identify the exit criteria of the testing, i.e. determine the conditions
under which you will stop testing.
Software testing methods (Black box testing and White box testing)
Manual - This testing is performed without taking help of automated testing tools. The
software tester prepares test cases for different sections and levels of the code, executes
the tests and reports the result to the manager.
Manual testing is time- and resource-consuming. The tester needs to confirm whether or
not the right test cases are used. A major portion of testing involves manual testing.
Automated - This testing is a testing procedure done with the aid of automated testing tools.
The limitations of manual testing can be overcome using automated test tools.
A test needs to check if a webpage can be opened in Internet Explorer. This can be easily done
with manual testing. But to check if the web-server can take the load of 1 million users, it is quite
impossible to test manually.
There are software and hardware tools which helps tester in conducting load testing, stress
testing, regression testing.
Testing Approaches
Tests can be conducted based on two approaches –
Functionality testing
Implementation testing
When functionality is being tested without taking the actual implementation into concern, it is
known as black-box testing. The other side is known as white-box testing, where not only the
functionality is tested but the way it is implemented is also analyzed.
Black-box testing
In this testing method, the design and structure of the code are not known to the tester, and
testing engineers and end users conduct this test on the software.
Equivalence class - The input is divided into similar classes. If one element of a class
passes the test, it is assumed that the whole class passes.
Boundary values - The input is divided into higher and lower end values. If these values
pass the test, it is assumed that all values in between may pass too. (Both techniques are
illustrated in the sketch after this list.)
Cause-effect graphing - In both of the previous methods, only one input value at a time is
tested. Cause (input) – effect (output) graphing is a testing technique where combinations of input
values are tested in a systematic way.
Pair-wise testing - The behaviour of software depends on multiple parameters. In
pair-wise testing, the multiple parameters are tested pair-wise for their different values.
State-based testing - The system changes state on provision of input. These systems are
tested based on their states and input.
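The following Java sketch illustrates the equivalence class and boundary value techniques above for a hypothetical discount() function that accepts order amounts from 1 to 10,000; both the function and the chosen ranges are invented for the example, and plain checks are used so the sketch stays self-contained.

// Black-box test sketch: one representative per equivalence class plus boundary values.
public class BlackBoxTestSketch {
    // System under test (illustrative stub): 5% discount on valid amounts.
    static double discount(double amount) {
        if (amount < 1 || amount > 10000) {
            throw new IllegalArgumentException("amount out of range");
        }
        return amount * 0.05;
    }

    public static void main(String[] args) {
        // Equivalence classes: valid, invalid-low, invalid-high
        expectValid(500);
        expectInvalid(-20);
        expectInvalid(20000);

        // Boundary values around the edges of the valid class
        expectInvalid(0);
        expectValid(1);
        expectValid(10000);
        expectInvalid(10001);
        System.out.println("All black-box cases behaved as expected.");
    }

    static void expectValid(double amount) {
        discount(amount); // must not throw for a valid input
    }

    static void expectInvalid(double amount) {
        try {
            discount(amount);
            throw new AssertionError("expected rejection of " + amount);
        } catch (IllegalArgumentException expected) {
            // correct behaviour for an invalid input
        }
    }
}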
White-box testing
It is conducted to test the program and its implementation, in order to improve code efficiency or
structure. It is also known as 'structural' testing.
In this testing method, the design and structure of the code are known to the tester. Programmers
of the code conduct this test on the code.
Testing Levels
Testing itself may be defined at various levels of the SDLC. The testing process runs parallel to
software development. Before jumping to the next stage, a stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden bugs or issues left in the
software. Software is tested at various levels -
Unit Testing
Integration Testing
Even if the units of software are working fine individually, there is a need to find out whether the
units, when integrated together, would also work without errors, for example in argument passing
and data updates. A minimal illustration follows.
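The following Java sketch is a minimal, hypothetical illustration of the difference between unit and integration testing; the Parser and Calculator modules are invented for the example (run with java -ea so the assertions are checked).

// Two tiny modules, unit-tested in isolation and then tested together.
public class UnitVsIntegrationSketch {
    static class Parser {
        int parse(String text) { return Integer.parseInt(text.trim()); }
    }
    static class Calculator {
        int doubleOf(int n) { return 2 * n; }
    }

    public static void main(String[] args) {
        Parser parser = new Parser();
        Calculator calc = new Calculator();

        // Unit tests: each module checked on its own
        assert parser.parse(" 21 ") == 21;
        assert calc.doubleOf(21) == 42;

        // Integration test: the result of one unit is passed to the other,
        // checking that argument passing between the units works as intended
        assert calc.doubleOf(parser.parse(" 21 ")) == 42;

        System.out.println("Unit and integration checks passed.");
    }
}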
System Testing
The software is compiled as product and then it is tested as a whole. This can be accomplished
using one or more of the following tests:
Functionality testing - Tests all functionalities of the software against the requirement.
Performance testing - This test proves how efficient the software is. It tests the
effectiveness and average time taken by the software to do desired task. Performance
testing is done by means of load testing and stress testing where the software is put under
high user and data load under various environment conditions.
Security & Portability - These tests are done when the software is meant to work on
various platforms and to be accessed by a number of persons.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to go through the last phase
of testing, where it is tested for user interaction and response. This is important because even if
the software matches all user requirements, the user may still reject it if they do not like the way
it appears or works.
Alpha testing - The team of developers themselves perform alpha testing by using the
system as if it were being used in a work environment. They try to find out how a user would
react to some action in the software and how the system should respond to inputs.
Beta testing - After the software is tested internally, it is handed over to the users to use
it in their production environment for testing purposes only. This is not yet the
delivered product. Developers expect that users at this stage will surface minor problems
that were previously overlooked.
Regression Testing
Testing Documentation
Before Testing
Testing starts with test case generation. The following documents are needed for reference:
While Being Tested
The following documents may be required while testing is started and is being done:
Test Case document - This document contains list of tests required to be conducted. It
includes Unit test plan, Integration test plan, System test plan and Acceptance test plan.
Test description - This document is a detailed description of all test cases and
procedures to execute them.
Test case report - This document contains test case report as a result of the test.
Test logs - This document contains test logs for every test case report.
After Testing
Test summary - This test summary is a collective analysis of all test reports and logs. It
summarizes and concludes whether the software is ready to be launched. The software is
released under a version control system if it is ready to launch.
We need to understand that software testing is different from software quality assurance,
software quality control and software auditing.
Software costing
Introduction
Surveys have shown that nearly one-third of projects overrun their budget and are delivered late,
and that two-thirds of all major projects substantially overrun their original estimates. The accurate
prediction of software development costs is a critical issue for making good management
decisions and for accurately determining how much effort and time a project requires, for
project managers as well as for system analysts and developers. Without a reasonably accurate cost
estimation capability, project managers cannot determine how much time and manpower the
project should take, which means the software portion of the project is out of control from its
beginning; system analysts cannot make realistic hardware-software trade-off analyses during the
system design phase; and software project personnel cannot tell managers and customers that their
proposed budget and schedule are unrealistic. This may lead to optimistic over-promising on
software development and the inevitable overruns and performance compromises as a
consequence. In fact, huge overruns resulting from inaccurate estimates are believed to
occur frequently.
The overall process of developing a cost estimate for software is not different from the process
for estimating any other element of cost. There are, however, aspects of the process that are
peculiar to software estimating. Some of the unique aspects of software estimating are driven by
the nature of software as a product. Other problems are created by the nature of the estimating
methodologies. Software cost estimation is a continuing activity which starts at the proposal
stage and continues throughout the lifetime of a project. Continual cost estimation ensures that
spending is in line with the budget.
Cost estimation is one of the most challenging tasks in project management. It is to accurately
estimate needed resources and required schedules for software development projects. The
software estimation process includes estimating the size of the software product to be produced,
estimating the effort required, developing preliminary project schedules, and finally, estimating
overall cost of the project.
It is very difficult to estimate the cost of software development. Many of the problems that
plague the development effort itself are responsible for the difficulty encountered in estimating
that effort. One of the first steps in any estimate is to understand and define the system to be
estimated. Software, however, is intangible, invisible, and intractable; it is inherently more
difficult to understand and estimate a product or process that cannot be seen and touched.
After 20 years of research, there are many software cost estimation methods available, including
algorithmic methods, estimating by analogy, the expert judgment method, the price-to-win method,
the top-down method, and the bottom-up method. No one method is necessarily better or worse than
another; in fact, their strengths and weaknesses are often complementary. Understanding their
strengths and weaknesses is very important when you want to estimate your projects.
Expert Judgment
Expert judgment techniques involve consulting with a software cost estimation expert, or a group
of experts, to use their experience and understanding of the proposed project to arrive at an
estimate of its cost.
Generally speaking, a group consensus technique, Delphi technique, is the best way to be used.
The strengths and weaknesses are complementary to the strengths and weaknesses of algorithmic
method.
To provide a sufficiently broad communication bandwidth for the experts to exchange the
volume of information necessary to calibrate their estimates with those of the other experts, a
wideband Delphi technique is introduced over the standard Delphi technique.
Estimating by Analogy
Estimating by analogy means comparing the proposed project to previously completed similar
projects for which the project development information is known. Actual data from the completed
projects are extrapolated to estimate the proposed project. This method can be used either at the
system level or at the component level.
It has been found that estimating by analogy is a superior technique to estimation via
algorithmic models in at least some circumstances. It is a more intuitive method, so it is easier to
understand the reasoning behind a particular prediction.
Top-down Estimating Method
The top-down estimating method is also called the macro model. Using the top-down estimating
method, an overall cost estimate for the project is derived from the global properties of the software
project, and the project is then partitioned into various low-level components. The leading
method using this approach is the Putnam model. This method is more applicable to early cost
estimation when only global properties are known; it is very useful in the early phases of software
development because no detailed information is available yet.
The disadvantages:
• It often does not identify difficult low-level problems that are likely to escalate costs, and it
sometimes tends to overlook low-level components.
• It provides no detailed basis for justifying decisions or estimates.
On the other hand, because it provides a global view of the software project, it usually embodies
some effective features such as the cost-time trade-off capability that exists in the Putnam model.
Bottom-up Estimating Method
Using the bottom-up estimating method, the cost of each software component is estimated and the
results are then combined to arrive at an estimated cost for the overall project. It aims at constructing
the estimate of a system from the knowledge accumulated about the small software components and
their interactions. The leading method using this approach is COCOMO's detailed model.
The advantages:
• It permits the software group to handle an estimate in an almost traditional fashion and to
handle estimate components for which the group has a feel.
• It is more stable because the estimation errors in the various components have a chance to
balance out.
The disadvantages:
Algorithmic Method
The algorithmic method is designed to provide some mathematical equations to perform software
estimation. These mathematical equations are based on research and historical data and use
inputs such as Source Lines of Code (SLOC), number of functions to perform, and other cost
drivers such as language, design methodology, skill levels, risk assessments, etc. The algorithmic
methods have been studied extensively, and many models have been developed, such as the
COCOMO models, the Putnam model, and function-point based models.
General advantages:
General disadvantages:
COCOMO Models
One very widely used algorithmic software cost model is the Constructive Cost Model
(COCOMO). The basic COCOMO model has a very simple form, which can be written as:

Effort (person-months) = K1 × (KDSI)^K2

where K1 and K2 are two parameters dependent on the application and development
environment, and KDSI is the size of the product in thousands of delivered source instructions.
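The following Java sketch evaluates the basic COCOMO equation given above. The coefficient values 2.4 and 1.05 are the commonly quoted basic-COCOMO values for a small, in-house ("organic") project and are used here purely for illustration; they are not prescribed by this text.

// Basic COCOMO: effort = K1 * (KDSI)^K2, with illustrative "organic-mode" coefficients.
public class BasicCocomoSketch {
    public static void main(String[] args) {
        double k1 = 2.4;
        double k2 = 1.05;
        double kdsi = 32;   // size: thousands of delivered source instructions

        double personMonths = k1 * Math.pow(kdsi, k2);
        System.out.printf("Estimated effort for %.0f KDSI: %.1f person-months%n",
                kdsi, personMonths);
    }
}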
Estimates from the basic COCOMO model can be made more accurate by taking into account
other factors concerning the required characteristics of the software to be developed, the
qualification and experience of the development team, and the software development
environment. Some of these factors are:
Many of these factors affect the person months required by an order of magnitude or more.
COCOMO assumes that the system and software requirements have already been defined, and
that these requirements are stable. This is often not the case.
The COCOMO model is a regression model based on the analysis of 63 selected projects. The
primary input is KDSI. The problems are:
1. In the early phases of the system life-cycle, the size is estimated with great uncertainty, so
an accurate cost estimate cannot be arrived at.
2. The cost estimation equation is derived from the analysis of 63 selected projects, so the
model usually has problems outside its particular environment. For this reason,
recalibration is necessary.
According to Kemerer's research, the average error for all versions of the model is 601%.
The detailed and intermediate models do not seem much better than the basic model.
The first version of COCOMO model was originally developed in 1981. Now, it has been
experiencing increasing difficulties in estimating the cost of software developed to new life cycle
processes and capabilities including rapid-development process model, reuse-driven approaches,
object-oriented approaches and software process maturity initiative.
For these reasons, the newest version, COCOMO 2.0, was developed. The major new modeling
capabilities of COCOMO 2.0 are a tailorable family of software size models, involving object
points, function points and source lines of code; nonlinear models for software reuse and
reengineering; an exponent-driver approach for modeling relative software diseconomies of
scale; and several additions, deletions, and updates to previous COCOMO effort-multiplier cost
drivers. This new model is also serving as a framework for an extensive current data collection
and analysis effort to further refine and calibrate the model's estimation capabilities.
Another popular software cost model is the Putnam model. The form of this model (the software
equation) is commonly given as:

Size = Ck × K^(1/3) × td^(4/3)

where Size is the product size in delivered source lines of code, Ck is a technology constant, K is
the total effort and td is the development time.
The Putnam model is very sensitive to the development time: decreasing the development time
can greatly increase the person-months needed for development.
One significant problem with the Putnam model is that it is based on knowing, or being able
to estimate accurately, the size (in lines of code) of the software to be developed. There is often
great uncertainty in the software size, which may result in inaccurate cost estimates.
According to Kemerer's research, the error percentage of SLIM, a Putnam-model-based method,
is 772.87%.
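The schedule sensitivity described above can be seen by solving the software equation for effort, K = (Size / (Ck × td^(4/3)))^3, so that effort grows roughly with 1/td^4. The following Java sketch uses hypothetical values for the size and technology constant simply to show the trend.

// Putnam-style schedule sensitivity: shorter development time, much more effort.
public class PutnamSensitivitySketch {
    static double effortPersonYears(double sloc, double ck, double tdYears) {
        return Math.pow(sloc / (ck * Math.pow(tdYears, 4.0 / 3.0)), 3);
    }

    public static void main(String[] args) {
        double sloc = 100000;   // illustrative system size
        double ck = 10000;      // illustrative technology constant

        for (double td = 2.5; td >= 1.5; td -= 0.5) {
            System.out.printf("td = %.1f years -> effort = %.1f person-years%n",
                    td, effortPersonYears(sloc, ck, td));
        }
    }
}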
From above two algorithmic models, we found they require the estimators to estimate the
number of SLOC in order to get man-months and duration estimates. The Function Point
Analysis is another method of quantifying the size and complexity of a software system in terms
of the functions that the system delivers to the user. A number of proprietary models for cost
estimation have adopted a function point type of approach, such as ESTIMACS and SPQR/20.
The function point measurement method was developed by Allan Albrecht at IBM and published
in 1979. He believes function points offer several significant advantages over SLOC counts of
size measurement. There are two steps in counting function points:
• Counting the user functions. The raw function counts are arrived at by considering a
linear combination of five basic software components: external inputs, external outputs,
external inquiries, internal logical (master) files, and external interfaces.
• Adjusting for processing complexity, in which the raw count is scaled by factors that
reflect the overall complexity of the application.
A short sketch of this calculation is given below.
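The following Java sketch shows the general shape of such a count. The component counts are invented, and the weights (4, 5, 4, 10, 7) and the adjustment factor are commonly quoted average-complexity values used here only as illustrative defaults.

// Unadjusted function points as a weighted linear combination of five component counts.
public class FunctionPointSketch {
    public static void main(String[] args) {
        int externalInputs = 20;
        int externalOutputs = 15;
        int externalInquiries = 10;
        int internalLogicalFiles = 6;
        int externalInterfaceFiles = 3;

        int unadjustedFp = externalInputs * 4
                + externalOutputs * 5
                + externalInquiries * 4
                + internalLogicalFiles * 10
                + externalInterfaceFiles * 7;

        // Second step (sketched): adjust for processing complexity with a
        // value adjustment factor, assumed here to be 1.05.
        double adjustedFp = unadjustedFp * 1.05;

        System.out.println("Unadjusted FP: " + unadjustedFp);
        System.out.println("Adjusted FP:   " + adjustedFp);
    }
}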
The collection of function point data has two primary motivations. One is the desire by managers
to monitor levels of productivity. Another use of it is in the estimation of software development
cost.
There are some cost estimation methods which are based on a function point type of
measurement, such as ESTIMACS and SPQR/20. SPQR/20 is based on a modified function
point method. Whereas traditional function point analysis is based on evaluating 14 factors,
SPQR/20 separates complexity into three categories: complexity of algorithms, complexity of
code, and complexity of data structures. ESTIMACS is a proprietary system designed to give a
development cost estimate at the conception stage of a project, and it contains a module which
estimates function points as a primary input for estimating cost.
From Kemerer's research, the mean error percentage of ESTIMACS is only 85.48%. So,
considering the 601% error of COCOMO and the 772% error of SLIM, I think the function-point
based cost estimation methods are the better approach, especially in the early phases of development.
From the above comparison, we know that no one method is necessarily better or worse than the
others; in fact, their strengths and weaknesses are often complementary. According to these
strengths and weaknesses:
For known projects and project parts, we should use the expert judgment method or the analogy
method if sufficiently similar past projects can be identified, since these methods are fast and,
under such circumstances, reliable. For large, less well-known projects, it is better to use an
algorithmic model. In this case, many researchers recommend estimation models that do not
require SLOC as an input. I think COCOMO 2.0 is the first candidate, because the COCOMO 2.0
model can use not only source lines of code (SLOC) but also object points and unadjusted function
points as metrics for sizing a project. If we approach cost estimation by parts, we may use expert
judgment for some
known parts. This way we can take advantage of both: the rigor of models and the speed of
expert judgment or analogy. Because the advantages and disadvantages of each technique are
complementary, a combination will reduce the negative effect of any one technique, augment
their individual strengths and help to cross-check one method against another.
It is very common that we apply some cost estimation methods to estimate the cost of software
development. But what we have to note is that it is very important to continually re-estimate cost
and to compare targets against actual expenditure at each major milestone. This keeps the status
of the project visible and helps to identify necessary corrections to budget and schedule as soon
as they occur.
At every estimation and re-estimation point, iteration is an important tool to improve estimation
quality. The estimator can use several estimation techniques and check whether their estimates
converge. The other advantages are as following:
• Different estimation methods may use different data. This results in better coverage of the
knowledge base for the estimation process and can help to identify cost components that
cannot be dealt with, or were overlooked, in one of the methods.
• Different viewpoints and biases can be taken into account and reconciled. A competitive
contract bid, a high business priority to keep costs down, or a small market window with
the resulting tight deadlines tends to have optimistic estimates. A production schedule
established by the developers is usually more on the pessimistic side to avoid committing
to a schedule and budget one cannot meet.
It is also very important to compare actual cost and time against the estimates, even if only one or two
techniques are used. This will also provide the feedback necessary to improve estimation quality.
Identifying the goals of the estimation process is very important because it will influence the
effort spent in estimating, its accuracy, and the models used. Tight schedules with high risks
require more accurate estimates than loosely defined projects with a relatively open-ended
schedule. The estimators should look at the quality of the data upon which estimates are based
and at the various objectives.
Model Calibration
The act of calibration standardizes a model. Many models are developed for specific situations
and are, by definition, calibrated to that situation. Such models usually are not useful outside of
their particular environment. So, the act of calibration is needed to increase the accuracy of one
of these general models by making it temporarily a specific model for whatever product it has
been calibrated for. Calibration is in a sense customizing a generic model. Items which can be
calibrated in a model include: product types, operating environments, labor rates and factors,
various relationships between functional cost items, and even the method of accounting used by
a contractor. All general models should be standardized (i.e. calibrated), unless used by an
experienced modeler with the appropriate education, skills and tools, and experience in the
technology being modeled.
Calibration is the process of determining the deviation from a standard in order to compute the
correction factors. For cost estimating models, the standard is considered historical actual costs.
The calibration procedure is theoretically very simple: run the model with normal inputs (known
parameters such as software lines of code) against items for which the actual costs are known.
These estimates are then compared with the actual costs, and the average deviation becomes a
correction factor for the model. In essence, the calibration factor obtained is really only good for
the type of inputs that were used in the calibration runs. For a general total model calibration, a
wide range of components with actual costs needs to be used. Better yet, numerous calibrations
should be performed with different types of components in order to obtain a set of calibration
factors for the various expected estimating situations. A small sketch of this procedure is given
below.
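The procedure described above can be sketched in a few lines of Java. The model estimates and actual costs below are made up; the point is only the mechanics of deriving and applying an average correction factor.

// Calibration: compare model estimates with known actuals, derive a correction factor.
public class CalibrationSketch {
    public static void main(String[] args) {
        double[] modelEstimates = { 120, 340, 95, 210 };   // e.g. person-months
        double[] actualCosts    = { 150, 400, 110, 260 };

        double ratioSum = 0;
        for (int i = 0; i < modelEstimates.length; i++) {
            ratioSum += actualCosts[i] / modelEstimates[i];
        }
        double calibrationFactor = ratioSum / modelEstimates.length;

        // Future estimates for this environment are multiplied by the factor.
        double rawEstimate = 180;
        System.out.printf("Calibration factor: %.2f%n", calibrationFactor);
        System.out.printf("Calibrated estimate: %.1f%n",
                rawEstimate * calibrationFactor);
    }
}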
Conclusions
The accurate prediction of software development costs is a critical issue for making good
management decisions and for accurately determining how much effort and time a project requires,
for project managers as well as for system analysts and developers. There are many software
cost estimation methods available, including algorithmic methods, estimating by analogy, the expert
judgment method, the top-down method, and the bottom-up method. No one method is necessarily
better or worse than the others.
For a specific project, which estimation methods should be used depends on the environment of
the project. According to the weaknesses and strengths of the methods, you can choose which
methods to use. I think a combination of the expert judgment or analogy method and COCOMO 2.0
is the best approach you can choose. For known projects and project parts, we should use the
expert judgment method or the analogy method if sufficiently similar past projects can be
identified, since these methods are fast and, under such circumstances, reliable. For large, less
well-known projects, it is better to use an algorithmic model like COCOMO 2.0, which will be
available in early 1997. If COCOMO 2.0 is not available, ESTIMACS or other function-point
based methods are highly recommended, especially in the early phases of the software life-cycle,
because in the early phases SLOC-based methods suffer from great uncertainty about size. If there
is great uncertainty about size, reuse, cost drivers, etc., the analogy method or the wideband Delphi
technique should be considered as the first candidate. In addition, COCOMO 2.0 has the
capabilities to deal with current software processes and serves as a framework for an extensive
current data collection and analysis effort to further refine and calibrate the model's estimation
capabilities. In general, COCOMO 2.0 is expected to become very popular. Dr. Barry Boehm and
his students are currently developing COCOMO 2.0; they expect to have it calibrated and usable
in early 1997.
Some recommendations:
Software outsourcing
The way of working in Software Outsourcing process is now changing. Day by day the process
is becoming more and more innovative as new ideas of the business process are taking place.
In Software Outsourcing, the main reason behind the failure of a deal is improper or poor
communication between the vendor and the client. A communication gap between these two
parties means that the needs and requirements of both parties are not met, and the project fails.
To make the overall process accurate, an onshore relationship with overseas service providers has
also become part of the new strategy; it plays an important role, since the client is able to
communicate his needs in a better way in the Offshore Software Development process.
Again, to make the deal run in a favourable way, all the primary ideas must be clear in your own
mind. All the needs and requirements of the company must be clear to you, along with the goal of
benefiting from the lower price advantage. Otherwise you might be the main reason for the failure
of the deal, and yet continue to blame the Software Outsourcing vendors for no reason.
Software Outsourcing
So in the Software Outsourcing process, be clear with yourself to remove inaccuracy during the
project. Communicate properly with the service providers as and when required, because even a
minor mistake on your side can lead to a poor result at the end of the project. The idea of having
an internal central project office can be helpful in making communication smooth. It will surely
help in all aspects of the business, including evaluation, maintenance, delivery and much more
regarding the overseas project. It has been observed that collaboration and proper communication
are the most important keys to success in the overall Software Outsourcing deal.
In Offshore Outsourcing, the client previously used to gather all the information and send it to
the overseas service providers, and resources could also be shifted overseas to make the deal work.
But now a totally different process is taking place in these overseas deals: important management
people from the service-providing companies now stay at the client's site and help in running the
overall Software Outsourcing process smoothly.
In short, the overall scenario in the Software Outsourcing process has now changed. With the
maturity of the market, the mindsets of clients and vendors are also changing. More and more
innovation and the generation of new ideas in overseas deals have really changed the whole
business process scenario in Software Outsourcing.
Generally, open source refers to a computer program in which the source code is available to the
general public for use and/or modification from its original design. Open-source code is meant to
be a collaborative effort, where programmers improve upon the source code and share the
changes within the community. Typically this is not the case, and code is merely released to the
public under some license. Others can then download, modify, and publish their version (fork)
back to the community. Today you find more projects with forked versions than unified projects
worked on by large teams.
Many large formal institutions have sprung up to support the development of the open-source
movement, including the Apache Software Foundation, which supports projects such as Apache
Hadoop, the open-source framework behind big data, and the open-source Apache HTTP server.
The open-source model is based on a more decentralized model of production, in contrast with
more centralized models of development such as those typically used in commercial software
companies.
Commonly cited benefits of the open-source model include:
• Customization
• Cost savings, typically ranging from 40% to 50% compared to wholly custom development
• Enhanced portability
• Vendor neutrality
• Reduced development times
• Flexibility for meeting specific needs (not available with proprietary licensed software)
• Long-term customer support, fixes, enhancements, and software updates from a wide user base
When a company needs a piece of software written they sometimes choose to use programmers
within their own company to write it. This is known as "in-house" development.
Pros
The level of customization is perhaps the biggest benefit of custom software. While a
commercial package may fit many of your business’s needs, it’s doubtful that it will have the
same efficiency as custom software. By meeting your exact specifications, you can cover every
aspect of your business without unnecessary extras. It gives you greater control, which is
important if your business has specific needs that your average commercial product can’t fulfill.
Having customized software should also make the interface more familiar and easy to use.
Because in-house software is developed by a team of your choosing, it also gives you access to
knowledgeable support. Rather than dealing with technicians who may not understand your
unique situation, you can get support from the individuals who have developed your software
firsthand. They will understand any subtle nuances and minimize downtime from technical
errors.
Cons
Your team of in-house developers may lack the knowledge and expertise to create sophisticated
software capable of handling all the tasks you require. If you only need basic software, this
probably won’t be an issue. However, if you need more sophisticated software, this could be
more trouble than it’s worth and lead to bugs and glitches. This may force you to bring in outside
consultants who lack familiarity with your business, which can also be detrimental.
Custom software also tends to lack scalability, and upgrades can be troublesome. Because
technology is constantly evolving, you may have difficulty adapting to new platforms in the
future. Although the developed software may work well for a while, it could become defunct in a
few years. This can force you to spend more money on developing new software.
Commercial-off-the-shelf (COTS) software and services are built and delivered usually from a
third party vendor. COTS can be purchased, leased or even licensed to the general public.
• Tangible Costs
- Direct investment in software & hardware (one time)
- IS installation & employee training (one time)
- Operating costs for an IS (recurring) – expenditures on software licences, labor costs of
IS staff, IS maintenance, overhead for facilities, expenses of communications carried out
by computer networks partaking in IS.
- Loss of money and time with new IS that does not perform as expected (opportunity
cost).
- Total Cost of Ownership sums up all the costs in a system life cycle.
• Intangible Costs
- Effort put into learning a new IS and the associated processes
- Employees’ loss of work motivation due to new processes/IS
- Employees’ resistance to new processes/IS
- Lower customer satisfaction due to improperly performing IS
- Limitations in decision making when a new IS cannot deliver reports managers need to
make decisions.
- Note that intangible costs may result in tangible costs.
• Tangible Benefits
- Savings on many counts:
- savings on labor expenses
- savings due to reduced process time (e.g., reducing inventory costs in supply
chain process)
- savings due to avoiding the need to add more employees when an improved process/IS can
carry a larger volume of operations
- Organizational performance gains (a new IS/process improves organizational productivity,
which in turn yields financial returns).
- Better decision making resulting in income increase (e.g., moving into new product and
geographical markets)
- Cutting losses by improved management control (e.g., ERPS case of detecting fraudulent
purchases)
- Data error reduction, eliminating the waste of business time and labour on repeated tasks.
• Intangible Benefits
- Customer value that does not translate directly into monetary gains for a company
- Better control and decision making, which do not translate readily into monetary gains
- Improvement in the appearance of reports and other business documentation (better
quality but no more money).
- Increased knowledge capabilities (note: these are a condition for making more attractive
products, but before these products are made and sold no monetary gains accrue).
3. Rent:
– Annual licensing of software or hardware
– Rent via the Cloud (partial or total IS services).
• Cloud Advantages:
– Reduce costs: pay-per-use, avoiding development & maintenance costs
– The client benefits from new IT, as the vendor keeps updating it to remain competitive,
which yields gains in the client's business processes.
• Cloud Disadvantages:
– Synchronizing business processes between client and vendor
– Risk of compromising confidentiality of business data
– Vendor lock-in (it is hard to get out of Cloud as a company relies more on a cloud
vendor)
– Unexpected changes in pricing services.
• Summary
• Costs of IS can be tangible (expressed in monetary terms) & intangible (all other forms).
Examples of tangible costs are investment in computer software and hardware, and
system’s operating costs.
• Benefits of Information Systems can be tangible & intangible. Examples of tangible
benefits are cost reduction and income gains.
• Financial Assessments of IS economy focuses on the size of returns (e.g., NPV) and on
timing of returns (e.g., payback period).
• Mixed Assessments of IS economy cover tangible and intangible C/B (portfolio analysis,
and balanced scorecard).
• Software can be developed by the company’s IS department, purchased, or rented;
hardware is usually purchased or rented. Each option has pros and cons.
• Cloud (cloud computing) is the trendy rental option with significant pros & cons.
A business case captures the reasoning for initiating a project or task. It is often presented in a
well-structured written document, but may also sometimes come in the form of a short verbal
argument or presentation. The logic of the business case is that, whenever resources such as
money or effort are consumed, they should be in support of a specific business need. An example
could be that a software upgrade might improve system performance, but the "business case" is
that better performance would improve customer satisfaction, require less task processing time,
or reduce system maintenance costs. A compelling business case adequately captures both the
quantifiable and unquantifiable characteristics of a proposed project. Business case depends on
business attitude and business volume.
Business cases can range from comprehensive and highly structured, as required by formal
project management methodologies, to informal and brief. Information included in a formal
business case could be the background of the project, the expected business benefits, the options
considered (with reasons for rejecting or carrying forward each option), the expected costs of the
project, a gap analysis and the expected risks. Consideration should also be given to the option of
doing nothing, including the costs and risks of inactivity. From this information, the justification
for the project is derived. Note that it is not the job of the project manager to build the business
case; this task is usually the responsibility of stakeholders and sponsors.
Capital budgeting methods rely on measures of cash flows into and out of the firm.
Capital projects generate cash flows into and out of the firm. The investment cost is an immediate
cash outflow caused by the purchase of the capital equipment. In subsequent years, the investment
generates cash inflows, such as increased sales or reduced production and operating costs, which
are weighed against any continuing cash outflows.
Tangible benefits can be quantified and assigned a monetary value. Intangible benefits, such as
more efficient customer service or enhanced employee goodwill, cannot be immediately
quantified but may lead to quantifiable gains in the long run.
You are familiar with the concept of total cost of ownership (TCO), which is designed to identify
and measure the components of information technology expenditure beyond the initial cost of
purchasing and installing hardware and software. However, TCO analysis provides only part of
the information needed to evaluate an information technology investment because it does not
typically deal with benefits, cost categories such as complexity costs, and the "soft" and strategic
factors discussed later in this section.
The payback method measures the time required for the cash inflows of a project to pay back its initial investment. The weakness of this measure is its virtue: the method ignores the time value of money, the amount of cash flow after the payback period, the disposal value (usually zero with computer systems), and the profitability of the investment.
This net benefit (total benefits less total costs) is divided by the total initial investment to arrive at ROI. The formula is as follows:
ROI = Net benefit / Total initial investment
The weakness of ROI is that it can ignore the time value of money. Future savings are simply not worth as much in today’s dollars as are current savings. However, ROI can be modified (and usually is) so that future benefits and costs are calculated in today’s dollars.
(The present value function on most spreadsheets can perform this conversion.)
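The hypothetical Python sketch below shows the basic ratio and a discounted variant in which future benefits are first converted to today’s dollars; the project figures and the 8% rate are assumptions chosen only for illustration.

def roi(net_benefit, total_initial_investment):
    """Simple (undiscounted) return on investment."""
    return net_benefit / total_initial_investment

def present_value(cash_flow, rate, year):
    """Discount a single future cash flow back to today's dollars."""
    return cash_flow / (1 + rate) ** year

# Hypothetical project: $50,000 invested now, $15,000 net benefit per year
# for five years, discounted at 8%.
investment = 50_000
yearly_benefit = 15_000
rate = 0.08

undiscounted_net_benefit = yearly_benefit * 5 - investment          # 25,000
discounted_benefits = sum(present_value(yearly_benefit, rate, y)
                          for y in range(1, 6))
discounted_net_benefit = discounted_benefits - investment

print(round(roi(undiscounted_net_benefit, investment), 3))  # 0.5
print(round(roi(discounted_net_benefit, investment), 3))    # about 0.198, smaller once discounted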
Thus, to compare the investment (made in today’s dollars) with future savings or earnings, you
need to discount the earnings to their present value and then calculate the net present value of the
investment. The net present value is the amount of money an investment is worth, taking into
account its cost, earnings, and the time value of money. The formula for net present value is this:
Present value of expected cash flows – Initial investment cost = Net present value
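A minimal Python sketch of the net present value formula is shown below; the cash flows and the 10% discount rate are hypothetical and serve only to show how a project that looks profitable in raw dollars can have a negative NPV once the time value of money is applied.

def npv(rate, initial_cost, expected_cash_flows):
    """Net present value = PV of expected cash flows - initial investment cost."""
    pv = sum(cf / (1 + rate) ** year
             for year, cf in enumerate(expected_cash_flows, start=1))
    return pv - initial_cost

# Hypothetical: $80,000 spent today, $25,000 of benefits a year for 4 years,
# discounted at 10%.
print(round(npv(0.10, 80_000, [25_000] * 4), 2))   # about -753: slightly negative at this rate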
The cost-benefit ratio can be used to rank several projects for comparison. Some firms establish a
minimum cost-benefit ratio that must be attained by capital projects. The cost-benefit ratio can, of course, be calculated using present values to account for the time value of money.
PROFITABILITY INDEX
One limitation of net present value is that it provides no measure of profitability. Neither does it
provide a way to rank order different possible investments. One simple solution is provided by
the profitability index. The profitability index is calculated by dividing the present value of the
total cash inflow from an investment by the initial cost of the investment.
The result can be used to compare the profitability of alternative investments.
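The following Python sketch ranks two hypothetical projects by profitability index; project names, cash flows and the 10% discount rate are invented for illustration, not taken from this text.

def profitability_index(rate, initial_cost, expected_cash_flows):
    """PV of total cash inflows divided by the initial cost of the investment."""
    pv_inflows = sum(cf / (1 + rate) ** year
                     for year, cf in enumerate(expected_cash_flows, start=1))
    return pv_inflows / initial_cost

# Rank two hypothetical projects at a 10% discount rate; the higher index wins.
projects = {
    "Project A": (60_000, [22_000, 22_000, 22_000, 22_000]),
    "Project B": (90_000, [40_000, 35_000, 30_000]),
}
for name, (cost, flows) in projects.items():
    # Project A is about 1.16, Project B about 0.98 under these assumptions
    print(name, round(profitability_index(0.10, cost, flows), 2))

An index above 1 means the discounted inflows exceed the initial cost; ranking projects by this index is a simple way to compare alternative investments.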
Total cost of ownership (TCO) is a financial estimate intended to help buyers and owners
determine the direct and indirect costs of a product or system. It is a management accounting concept that can be used in full cost accounting or even ecological economics, where it includes social costs.
TCO, when incorporated in any financial benefit analysis, provides a cost basis for determining
the total economic value of an investment. Examples include: return on investment, internal rate
of return, economic value added, return on information technology, and rapid economic
justification.
A TCO analysis includes total cost of acquisition and operating costs. A TCO analysis is used to
gauge the viability of any capital investment. An enterprise may use it as a product/process
comparison tool. It is also used by credit markets and financing agencies. TCO directly relates to
an enterprise's asset and/or related systems total costs across all projects and processes, thus
giving a picture of the profitability over time.
For example, the total cost of ownership of a car is not just the purchase price, but also the
expenses incurred through its use, such as repairs, insurance and fuel. A used car that appears to
be a great bargain might actually have a total cost of ownership that is higher than that of a new
car, if the used car requires numerous repairs while the new car has a three-year warranty.
TCO quantifies the cost of the purchase across the product's entire lifecycle. Therefore, it offers a
more accurate basis for determining the value (cost vs. ROI) of an investment than the purchase
price alone. The overall TCO includes direct and indirect expenses, as well as some intangible
ones that may be assigned a monetary value. For example, a server's TCO might include an
expensive purchase price, a good deal on ongoing support, and low system management time
because of its user-friendly interface.
TCO factors in costs accumulated from purchase to decommissioning. For a data center server,
for example, this means initial acquisition price, repairs, maintenance, upgrades, service or
support contracts, network integration, security, software licenses (such as Windows Server 2012
R2) and user training. It can even include the credit terms on which the company purchased the
product. Through analysis, the purchasing manager might assign a monetary value to intangible
costs, such as systems management time, electricity used, downtime, insurance and other
overhead. The total cost of ownership must be compared to the total benefits of ownership
(TBO) to determine the viability of a purchase.
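The hypothetical Python sketch below compares two servers on lifecycle cost rather than purchase price alone; every cost category and figure is invented for illustration and would differ in a real TCO analysis.

# Total cost of ownership sketch with hypothetical figures: purchase price is
# only one line item among the lifecycle costs listed in the text.
def tco(costs: dict) -> float:
    """Sum direct, indirect and estimated intangible costs over the lifecycle."""
    return sum(costs.values())

server_a = {  # cheaper to buy, costlier to own
    "purchase_price": 8_000, "repairs_and_maintenance": 4_500,
    "support_contract": 3_000, "software_licences": 2_500,
    "user_training": 1_500, "electricity_and_downtime": 2_000,
}
server_b = {  # dearer to buy, cheaper to own
    "purchase_price": 11_000, "repairs_and_maintenance": 1_500,
    "support_contract": 2_000, "software_licences": 2_500,
    "user_training": 500, "electricity_and_downtime": 1_200,
}

print("Server A TCO:", tco(server_a))   # 21,500
print("Server B TCO:", tco(server_b))   # 18,700

Even though Server A has the lower purchase price, its total cost of ownership is higher, which is the kind of comparison a decision maker would then weigh against total benefits of ownership.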
There are several methodologies and software tools to calculate total cost of ownership, but the
process is not perfect. Many enterprises fail to define a single methodology, which means they cannot base purchasing decisions on uniform information. Another problem is that
it is difficult to determine the scope of operating costs for any piece of IT equipment; some cost
factors are easily overlooked or inaccurately compared from one product to another. For
example, support costs on one server include the cost of spare parts. This might make support
cost more than it does on another server, but eliminates an additional cost factor of parts
acquisition.
Cost of ownership analysis generally doesn't anticipate unpredictable rising costs over time, for
example, if upgrade part costs jump substantially more than expected due to a distributor change.
TCO calculations also cannot account for the availability of upgrades and services, or the impact of vendor relationships; if a vendor refuses to offer service after three years or no longer stocks parts, the cost of ownership can rise in ways the analysis did not predict.
Enterprise managers and purchasing decision makers complete total cost of ownership analysis
for multiple options, then compare TCOs to determine the best long-term investment. For
example, one server's purchase price might be less expensive than a competitive model, but the
decision maker can see that anticipated upgrades and annual service contracts would drive the
total cost much higher. In turn, one model's TCO may be slightly higher than another model's,
but its TBO far exceeds that of the competitive offering.
Without TCO analysis, enterprises could greatly miscalculate IT budgets, or purchase servers
and other components unsuited to their computing needs, resulting in slow services, uncontrolled
downtime and other problems.
The balanced scorecard has evolved from its early use as a simple performance measurement
framework to a full strategic planning and management system. The “new” balanced scorecard
transforms an organization’s strategic plan from an attractive but passive document into the
"marching orders" for the organization on a daily basis. It provides a framework that not only
provides performance measurements, but helps planners identify what should be done and
measured. It enables executives to truly execute their strategies.
This new approach to strategic management was first detailed in a series of articles and books by
Drs. Kaplan and Norton. Recognizing some of the weaknesses and vagueness of previous
management approaches, the balanced scorecard approach provides a clear prescription as to
what companies should measure in order to 'balance' the financial perspective. The balanced
scorecard is a management system (not only a measurement system) that enables organizations
to clarify their vision and strategy and translate them into action. It provides feedback around
both the internal business processes and external outcomes in order to continuously improve
strategic performance and results. When fully deployed, the balanced scorecard transforms
strategic planning from an academic exercise into the nerve center of an enterprise.
Kaplan and Norton describe the innovation of the balanced scorecard as follows:
"The balanced scorecard retains traditional financial measures. But financial measures tell the
story of past events, an adequate story for industrial age companies for which investments in
long-term capabilities and customer relationships were not critical for success. These financial
measures are inadequate, however, for guiding and evaluating the journey that information age
companies must make to create future value through investment in customers, suppliers,
employees, processes, technology, and innovation."
Perspectives
The balanced scorecard suggests that we view the organization from four perspectives (Learning and Growth, Internal Business Process, Customer, and Financial) and that we develop metrics, collect data and analyze it relative to each of these perspectives.
Kaplan and Norton emphasize that 'learning' is more than 'training'; it also includes things like mentors and tutors within the organization, as well as that ease of communication among workers that allows them to readily get help on a problem when it is needed. It also includes technological tools; what the Baldrige criteria call "high performance work systems."
This perspective refers to internal business processes. Metrics based on this perspective allow the
managers to know how well their business is running, and whether its products and services
conform to customer requirements (the mission). These metrics have to be carefully designed by
those who know these processes most intimately; with our unique missions these are not
something that can be developed by outside consultants.
Kaplan and Norton do not disregard the traditional need for financial data. Timely and accurate
funding data will always be a priority, and managers will do whatever necessary to provide it. In
fact, often there is more than enough handling and processing of financial data. With the
implementation of a corporate database, it is hoped that more of the processing can be
centralized and automated. But the point is that the current emphasis on financials leads to the
"unbalanced" situation with regard to other perspectives. There is perhaps a need to include
additional financial-related data, such as risk assessment and cost-benefit data, in this category.
Strategy Mapping
Strategy maps are communication tools used to tell a story of how value is created for the
organization. They show a logical, step-by-step connection between strategic objectives (shown
as ovals on the map) in the form of a cause-and-effect chain. Generally speaking, improving
performance in the objectives found in the Learning & Growth perspective (the bottom row)
enables the organization to improve its Internal Process perspective Objectives (the next row up),
which in turn enables the organization to create desirable results in the Customer and Financial
perspectives (the top two rows).
Activity-based costing (ABC) is an accounting method that identifies the activities that a firm performs and then assigns indirect costs to products. An ABC system recognizes the relationship between costs, activities and products, and through this relationship assigns indirect costs to products less arbitrarily than traditional methods.
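The short Python sketch below illustrates the ABC idea with hypothetical activities, cost-driver volumes and products; it is not a full costing system and all names and figures are assumptions made for the example.

# Activity-based costing sketch: indirect costs are assigned to products via
# the activities that consume them, using hypothetical cost-driver figures.
activity_costs = {"machine_setup": 20_000, "quality_inspection": 10_000}
driver_totals  = {"machine_setup": 100,    "quality_inspection": 200}   # total setups / inspections
rates = {a: activity_costs[a] / driver_totals[a] for a in activity_costs}

# Driver usage per product
products = {
    "Product X": {"machine_setup": 60, "quality_inspection": 50},
    "Product Y": {"machine_setup": 40, "quality_inspection": 150},
}

for product, usage in products.items():
    allocated = sum(rates[a] * qty for a, qty in usage.items())
    print(product, allocated)   # Product X 14500.0, Product Y 15500.0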
Tracking and allocating costs
Cost allocation is a process of providing relief to shared service organization's cost centers that
provide a product or service. In turn, the associated expense is assigned to internal clients' cost
centers that consume the products and services. For example, the CIO may provide all IT
services within the company and assign the costs back to the business units that consume each
offering.
The core components of a cost allocation system consist of a way to track which organization provides a product and/or service, the organizations that consume the products and/or services,
and a list of portfolio offerings (e.g. service catalog). Depending on the operating structure
within a company, the cost allocation data may generate an internal invoice or feed an ERP
system's chargeback module. Accessing the data via an invoice or chargeback module are the
typical methods that drive personnel behavior. In return, the consumption data becomes a great
source of quantitative information to make better business decisions. Today’s organizations face
growing pressure to control costs and enable responsible financial management of resources. In
this environment, an organization is expected to provide services cost-effectively and deliver
business value while operating under tight budgetary constraints. One way to contain costs is to
implement a cost allocation methodology, where your business units become directly
accountable for the services they consume.
An effective cost allocation methodology enables an organization to identify what services are
being provided and what they cost, to allocate costs to business units, and to manage cost
recovery. Under this model, both the service provider and its respective consumers become
aware of their service requirements and usage and how they directly influence the costs incurred.
This information, in turn, improves discipline within the business units and financial discipline
across the entire organization. With the organization articulating the costs of services provided,
the business units become empowered – and encouraged – to make informed decisions about the
services and availability levels they request. They can make trade-offs between service levels
and costs, and they can benchmark internal costs against outsourced providers.
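A minimal chargeback sketch in Python is shown below; the service names, costs and usage figures are hypothetical and simply illustrate allocating each service's cost to business units in proportion to what they consume.

# Chargeback sketch: a shared IT cost centre allocates the cost of each
# service to the business units that consume it, in proportion to usage.
service_costs = {"email_hosting": 12_000, "helpdesk": 30_000}

usage = {  # units consumed per business unit (mailboxes, tickets, ...)
    "email_hosting": {"Sales": 120, "Finance": 60, "Operations": 20},
    "helpdesk":      {"Sales": 300, "Finance": 100, "Operations": 100},
}

invoice = {}
for service, consumers in usage.items():
    total_units = sum(consumers.values())
    for unit, units in consumers.items():
        share = service_costs[service] * units / total_units
        invoice[unit] = invoice.get(unit, 0) + share

for unit, amount in invoice.items():
    print(f"{unit}: {amount:,.2f}")   # the three shares add up to the 42,000 of service cost

Output of this kind can feed an internal invoice or an ERP chargeback module, which is what makes the business units directly accountable for the services they consume.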
CONVERSION STRATEGIES
Conversion planning
Overview
The Conversion Plan describes the strategies involved in converting data from an existing
system to another hardware or software environment. It is appropriate to reexamine the original
system’s functional requirements for the condition of the system before conversion to determine
if the original requirements are still valid. An outline of the Conversion Plan is shown below.
INTRODUCTION
This section provides a brief description of introductory material.
Points of Contact
This section identifies the System Proponent. Provide the name of the responsible organization
and staff (and alternates, if appropriate) who serve as points of contact for the system conversion.
Include telephone numbers of key staff and organizations.
Project References
This section provides a bibliography of key project references and deliverables that have been
produced before this point in the project development. These documents may have been
produced in a previous development life cycle that resulted in the initial version of the system
undergoing conversion or may have been produced in the current conversion effort as
appropriate.
Glossary
This section contains a glossary of all terms and abbreviations used in the plan. If it is several
pages in length, it may be placed in an appendix.
CONVERSION OVERVIEW
This section provides an overview of the aspects of the conversion effort, which are discussed in
the subsequent sections.
Conversion Description
This section provides a description of the system structure and major components. If only
selected parts of the system will undergo conversion, identify which components will and will
not be converted.
If the conversion process will be organized into discrete phases, this section should identify
which components will undergo conversion in each phase. Include hardware, software, and data
as appropriate. Charts, diagrams, and graphics may be included as necessary. Develop and
continuously update a milestone chart for the conversion process.
Type of Conversion
This section describes the type of conversion effort. The software part of the conversion effort usually falls into one of a small number of standard categories; in addition to these, other types of conversions may be defined as necessary.
Conversion Tasks
This section describes the major tasks associated with the conversion, including planning and
pre-conversion tasks.
• Analysis of the workload projected for the target conversion environment to ensure that the
projected environment can adequately handle that workload and meet performance and
capacity requirements
• Projection of the growth rate of the data processing needs in the target environment to ensure
that the system can handle the projected near-term growth, and that it has the expansion
capacity for future needs
• Analysis to identify missing features in the new (target) hardware and software environment
that were supported in the original hardware and software and used in the original system
• Development of a strategy for recoding, reprogramming, or redesigning the components of
the system that used hardware and software features not supported in the new (target)
hardware and software environment but used in the original system
Pre-Conversion Tasks
This section describes all tasks that are logically separate from the conversion effort itself but that must be completed before the initiation, development, or completion of the conversion effort. Examples of such pre-conversion tasks should be identified in this section.
Conversion Schedule
This section provides a schedule of activities to be accomplished during the conversion. Pre-
conversion tasks and major tasks for all hardware, software, and data conversions described in
Section 2.3, Conversion Tasks, should be described here and should show the beginning and end
dates of each task. Charts may be used as appropriate.
Security
If appropriate for the system to be implemented, provide an overview of the system security
features and the security during conversion.
CONVERSION SUPPORT
This section describes the support necessary to implement the system. If there are additional
support requirements not covered by the categories shown here, add other subsections as needed.
Hardware
This section lists support equipment, including all hardware to be used for the conversion.
Software
This section lists the software and databases required to support the conversion. It describes all
software tools used to support the conversion effort, including the following types of software
tools, if used:
• Automated conversion tools, such as software translation tools for translating among
different computer languages or translating within software families (such as, between
release versions of compilers and DBMSs)
Facilities
This section identifies the physical facilities and accommodations required during the conversion
period.
Materials
This section lists support materials.
Personnel
This section describes personnel requirements and any known or proposed staffing, if
appropriate. Also describe the training, if any, to be provided for the conversion staff.
Parallel running
Parallel conversion: The new system is introduced while the old one is still in use.
Both systems process all activity and the results are compared. Once there is confidence that the new one operates properly, the old one is shut down. Parallel conversion is not as useful today as some people believe (and many authors suggest), because processing every transaction twice and reconciling the results is costly, so it should be chosen selectively.
Pilot study
Pilot conversion: Part of an organization uses the new system while the rest of it continues to
use the old. This localizes problems to the pilot group so support resources can focus on it.
However, there can be interface issues where organizational units share data.
Phased approach
Phased (modular) conversion: Part of the new system is introduced while the rest of the old
one remains in use. This localizes problems to the new module so support resources can focus on
it. However, there can be interface issues where modules share data.
Requirements
The question of what system documentation is for is difficult to answer. Below are some possible uses of system documentation.
a) Introduction / overview
b) Disaster Recovery
Many systems are supported by disaster recovery arrangements, but even in such circumstances,
the recovery can still fail. There may be a need to re-build a system from scratch at least to the
point where a normal restore from backup can be done. To make a rebuild possible it will be
necessary to have documentation that provides answers to the configuration choices. For
example it is important to re-build the system with the correct size of file systems to avoid trying
to restore data to a file system that has been made too small. In some circumstances certain
parameters can be difficult to change at a later date. When rebuilding a system it may be
important to configure networking with the original parameters both to avoid conflicts with other
systems on the network and because these may be difficult to change subsequently.
c) OS or Application re-load
Even when a disaster has not occurred, there may be times when it is necessary to reload an
Operating System or Application; this can either be as part of a major version upgrade or a
drastic step necessary to solve problems. In such circumstances it is important to know how an
OS or Application has been configured.
d) Troubleshooting
The benefits of good system documentation, when troubleshooting, are fairly obvious. A
comprehensive description of how a system should be configured and should behave can be
invaluable when configuration information has become corrupted, when services have failed, or
components have failed.
Good system documentation will include a description of the physical hardware and its physical
configuration, which can avoid the need to shut down a system in order to examine items such as
jumper settings.
e) Planning tool
When planning changes or upgrades it will be necessary to assess the impact of changes on
existing systems. A good understanding of existing systems is necessary for assessing the impact
of any changes and for this good system documentation is required.
f) Other
System documentation can be used for many purposes including Auditing, Inventory,
Maintenance, etc. The documentation of individual systems forms an important component of
the overall network documentation.
Most operating systems include tools to report important system information and often the output
from such tools can be re-directed to a file; this provides a means of automating the creation of
system documentation.
Many Administrators have neither the time nor inclination to produce System Documentation
and given the importance of keeping such documentation current, automation of the creation of
system documentation is very desirable.
There are limitations to the extent to which system documentation can be automated, and some information cannot be documented automatically.
Most Operating Systems have tools for automating their deployment (e.g. unattend.txt for NT,
JumpStart for Solaris, Kickstart for Redhat Linux etc.). Although these tools are primarily
intended for deploying large numbers of systems they can be used for individual systems. While
the configuration scripts used in these tools are not very readable, they are in text form, and can
be read by technical staff. Such scripts offer the great advantage that a system is documented in
an unambiguous way that guarantees that the system can be rebuilt exactly the same way it was
first built.
For most systems it should be possible to create a simple script (or batch file) that uses several system tools to report system information and output the results to a file. The use of such tools together with simple print commands (e.g. echo "line of text") can readily produce a useful document. It should be possible to adapt such scripts to produce the documentation required in terms of content, level of detail, etc.
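As a minimal sketch of this kind of automation, the following Python script uses only the standard library to gather basic system information and redirect it to a file; the report fields and the output filename are illustrative assumptions, not a prescribed format.

# Minimal sketch of automated system documentation using the Python
# standard library; the fields captured here are illustrative, not exhaustive.
import platform, socket, datetime

def system_report() -> str:
    lines = [
        f"Report generated: {datetime.datetime.now().isoformat()}",
        f"Hostname:         {socket.gethostname()}",
        f"Operating system: {platform.system()} {platform.release()}",
        f"OS version:       {platform.version()}",
        f"Architecture:     {platform.machine()}",
        f"Python runtime:   {platform.python_version()}",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    # Redirect the report to a file so it can be kept under change control
    with open("system_documentation.txt", "w") as f:
        f.write(system_report() + "\n")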
If standard tools do not provide sufficient information, there are many third-party and free tools that can be used; however, the potential problems of using additional tools, rather than just "built-in" tools, have to be weighed against the advantages. Alternative scripting languages (such as Perl) may provide additional benefits.
Characteristics
It is difficult to decide exactly what the characteristics of good system documentation are. Some desirable characteristics are described below:
The documentation should be created for the intended audience. While it may be appropriate for
a "Management overview" to be created for non-technical people, most system documentation
will be used by System Administrators and other technical people as an important reference.
System documentation should provide sufficient technical detail.
The system documentation should describe the specific implementation of a given system rather
than provide generic documentation.
Up to date
The documentation needs to be up to date, but does not necessarily have to be recent. If the
system has remained completely unchanged for a long period of time, the documentation can
remain unchanged for the same period of time. It is important that when systems are changed
documentation is updated to reflect the changes and this should be part of any change control
procedures.
Sufficiently comprehensive
Accessible
The documentation must be held in a location and format that makes it accessible. It is obviously
unacceptable to have the only copy of a system's documentation held on a drive that has failed or
on the system itself should it fail.
It is very desirable to hold the documentation in a universal standard format that does not require
access to a particular word processor; ASCII text may be most suitable.
Secure
Because system documentation could be useful to troublemakers, thought may need to be given
to controlling access to the documentation.
Understanding your organisation's level of documentation debt can be difficult, but knowing the standard of good documentation can aid in establishing the quality of your documentation and identifying areas of concern.
Any of these qualities missing from your documentation is a sign of documentation debt.
Coverage
It is essential to know what parts of the code are documented and what is not. Releasing code to
other team members, maintenance teams or outside your organisation without understanding the
level of documentation coverage can dramatically reduce productivity or increase support issues.
This can affect developers, taking them away from coding new features to fix documentation or get involved in support. It also has an impact on onboarding new developers.
Accuracy
The code comments accurately describe the code reflecting the last set of source code changes.
Code documentation should accurately reflect the actual code for API’s, classes, methods or
functions. For example, comments in your structured code documentation, like Javadoc, should
accurately reflect the method signature of defined methods with specific parameter types that
return a specific value type.
Checking the accuracy of code documentation for each class, method or function within a project
is difficult, so ends up being left to the individual developer. Inaccuracies usually surface when
another developer reports a problem after spending days trying to make their code conform and
failed, or the fix interacts with other parts of the software in unexpected ways causing new bugs.
Adding new features also becomes risky and highly likely to cause new bugs.
Accuracy is also dependent on the code reflecting the last set of source code changes. Modern source code management tools make it easy to track code changes across a project and to see whether the documentation was updated alongside them.
Releasing documentation that does not reflect the functionality of the code might cause some
level of confusion for the user, but in the case of a library or API used by developers it can have
a significant impact on support, reputation and the adoption of code.
Clarity
The system documentation describes what the code does and why it is written that way.
Comments in source code are an important record of what the code does, but they often do not explain why it was written that way or how it was implemented by a developer. Without this "why" it is often difficult to understand why a previous developer (or even you) took a certain approach when writing the code, and this can lead to unnecessary refactoring that is scrapped mid-process once the reason finally becomes apparent.
Part of the problem with explaining the “why” is that it is too difficult and time consuming to explain in code documentation that only uses characters, words and symbols and lacks enriched, diagrammatic means. Most developers would prefer to draw diagrams or other visuals and embed them into the code. Visual diagrams, photos or videos could provide the key insights needed into the “why”.
Maintainability
A single source is maintained to handle multiple output formats, product variants, localization or translation.
Producing help documentation would be simple if only one format and one language was needed
for a single product, but with trends like the growth in mobile applications making international
markets easily accessible the requirements of documentation have become more complex,
needing multiple output formats in multiple languages.
Managing multiple copies for each product, language or output can be a nightmare for any
development team, especially when changes are needed. The solution is to have a single source
for all documentation; we think that source is best at the level of the code.
Synchronization
The code and documentation are linked automatically to keep them in sync.
The problem with this approach is that the tools used can’t provide visibility into the level of
documentation debt within a project, the developer doesn’t link their code to the relevant
documentation and the process is exposed to all the flaws inherent with a non-automated process.
However, when the developer is at the heart of the process automatically creating links between
the code and the documentation they become synchronized providing the visibility needed to
eliminate documentation debt.
Tools such as docfacto have been designed to work seamlessly together, without leaving the IDE, to take the “too hard” out of documentation, help businesses enrich their documentation, reduce documentation debt, and keep code and documentation in sync.
Types of documentation
Software documentation, also referred to as source code documentation, is written text that describes computer software. It explains how software works, and it can also explain how to use the software properly. Several types of software documentation exist and can be classified as follows:
User Documentation
Also known as software manuals, user documentation is intended for end users and aims to help
them use software properly. It is usually arranged in a book style and typically also features a table of contents, an index and, of course, the body, which can be arranged in different ways, depending on
whom the software is intended for. For example, if the software is intended for beginners, it
usually uses a tutorial approach and guides the user step-by-step. Software manuals which are
intended for intermediate users, on the other hand, are typically arranged thematically, while
manuals for advanced users follow reference style.
Besides the printed version, user documentation can also be available in an online version or PDF format. Often, it is also accompanied by additional documentation such as video tutorials, knowledge-base articles, etc.
Requirements Documentation
Architecture Documentation
Technical Documentation
Software commissioning
The process by which equipment, a facility, or a plant (which is installed, or is complete or near completion) is tested to verify that it functions according to its design objectives or specifications.
Software maintenance is now widely accepted as part of the SDLC. It stands for all the modifications and updates made after the delivery of the software product. There are a number of reasons why modifications are required; some of them are briefly mentioned below:
Market Conditions - Policies that change over time, such as taxation, and newly introduced constraints, such as how to maintain bookkeeping, may trigger the need for modification.
Client Requirements - Over time, customers may ask for new features or functions in the software.
Host Modifications - If any of the hardware and/or platform (such as the operating system) of the target host changes, software changes are needed to remain compatible.
Organization Changes - If there is any business-level change at the client end, such as a reduction in organization strength, acquiring another company, or the organization venturing into new business, the need to modify the original software may arise.
Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be just a routine maintenance task, such as fixing a bug discovered by a user, or it may be a large event in itself, based on the size or nature of the maintenance. Following are some types of maintenance based on their characteristics:
On average, the cost of software maintenance is more than 50% of the cost of all SDLC phases. Various factors cause maintenance cost to go high; these include real-world factors and software-end factors affecting maintenance cost.
IEEE provides a framework for sequential maintenance process activities. It can be used in an iterative manner and can be extended so that customized items and processes can be included. Training is provided if required when the maintained software is delivered, in addition to the hard copy of the user manual.
Software Re-engineering
When we need to update the software to keep it current with the market without impacting its functionality, it is called software re-engineering. It is a thorough process in which the design of the software is changed and programs are re-written.
Legacy software cannot keep tuning with the latest technology available in the market. As hardware becomes obsolete, updating the software becomes a headache. Even if software grows old with time, its functionality does not.
For example, Unix was initially developed in assembly language. When the C language came into existence, Unix was re-engineered in C, because working in assembly language was difficult.
Other than this, programmers sometimes notice that a few parts of the software need more maintenance than others, and these also need re-engineering.
Reverse Engineering
An existing system is a previously implemented design about which we know nothing. Designers do reverse engineering by looking at the code and trying to recover the design. With the design in hand, they try to conclude the specifications. Thus they go in reverse, from code to system specification.
Program Restructuring
It is a process to re-structure and re-construct the existing software. It is all about re-arranging the source code, either in the same programming language or from one programming language to a different one. Restructuring can involve source code restructuring, data restructuring, or both.
Restructuring does not impact the functionality of the software but enhances reliability and maintainability. Program components which cause errors very frequently can be changed or updated with restructuring.
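As a small illustration of restructuring, the hypothetical Python sketch below re-arranges the source code of a function without changing its behaviour for the cases shown; the discount rules and names are invented purely for the example.

# Restructuring sketch: same functionality, clearer structure.
# The discount rules below are hypothetical, used only to show the idea.

# Before: deeply nested, harder to maintain
def discount_before(customer_type, order_total):
    if customer_type == "member":
        if order_total > 100:
            return order_total * 0.90
        else:
            return order_total * 0.95
    else:
        if order_total > 100:
            return order_total * 0.97
        else:
            return order_total

# After: the same rules expressed as a lookup table
RATES = {("member", True): 0.90, ("member", False): 0.95,
         ("guest", True): 0.97, ("guest", False): 1.00}

def discount_after(customer_type, order_total):
    return order_total * RATES[(customer_type, order_total > 100)]

# Behaviour is unchanged for the customer types covered by the table
assert discount_before("member", 150) == discount_after("member", 150)
assert discount_before("guest", 80) == discount_after("guest", 80)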
Forward engineering is a process of obtaining the desired software from the specifications in hand, which were brought down by means of reverse engineering. It assumes that some software engineering has already been done in the past.
Forward engineering is the same as the software engineering process, with only one difference: it is always carried out after reverse engineering.
Component Reusability
A component is a part of software program code which executes an independent task in the system. It can be a small module or a sub-system itself.
Example: login procedures used on the web can be considered as components, and the printing system in software can be seen as a component of the software.
Components have high cohesion of functionality and lower rate of coupling, i.e. they work
independently and can perform tasks without depending on other modules.
In modular programming, the modules are coded to perform specific tasks which can be used
across number of other software programs.
Software components provide interfaces, which can be used to establish communication among
different components.
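The Python sketch below illustrates these ideas under stated assumptions: a hypothetical Notifier interface with one concrete component, where client code depends only on the interface and can therefore reuse the component across programs.

# Sketch of a reusable component: an interface (abstract base class) plus an
# implementation that other programs can plug in without knowing its internals.
# All names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Interface through which other components communicate with this one."""
    @abstractmethod
    def send(self, recipient: str, message: str) -> None: ...

class ConsoleNotifier(Notifier):
    """Concrete component: high cohesion, no dependence on other modules."""
    def send(self, recipient: str, message: str) -> None:
        print(f"To {recipient}: {message}")

def notify_all(notifier: Notifier, recipients: list[str], message: str) -> None:
    """Client code depends only on the interface, so any Notifier can be reused."""
    for r in recipients:
        notifier.send(r, message)

notify_all(ConsoleNotifier(), ["ops@example.com", "dev@example.com"], "Build complete")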
Reuse Process
Two kinds of method can be adopted: either keep the requirements the same and adjust the components, or keep the components the same and modify the requirements.
Software evolution is the term used in software engineering (specifically software maintenance)
to refer to the process of developing software initially, then repeatedly updating it for various
reasons.
General introduction
Fred Brooks, in his key book The Mythical Man-Month, states that over 90% of the costs of a
typical system arise in the maintenance phase, and that any successful piece of software will
inevitably be maintained.
In fact, Agile methods stem from maintenance-like activities in and around web-based technologies, where the bulk of the capability comes from frameworks and standards.
Software maintenance addresses bug fixes and minor enhancements, while software evolution focuses on adaptation and migration.
Impact
The aim of software evolution would be to implement (and revalidate) the possible major
changes to the system without being able a priori to predict how user requirements will evolve.
The existing larger system is never complete and continues to evolve. As it evolves, the complexity of the system will grow unless there is a better solution available to solve these issues. The main objectives of software evolution are ensuring the reliability and flexibility of the system. Over the past 20 years, the lifespan of a system has been 6-10 years on average. However, it was recently found that a system should be evolved once every few months to ensure it is adapted to the real-world environment. This is due to the rapid growth of the World Wide Web and Internet resources that make it easier for users to find related information. The idea of software evolution also leads to open source development, since anybody can download the source code and modify it.
Over time, software systems, programs and applications continue to develop. These changes will require new laws and theories to be created and justified. Some models would also require additional aspects in developing future programs. Innovations and improvements increase unexpected forms of software development. Maintenance issues would also probably change to adapt to the evolution of the future software. Software process and development are an ongoing experience with a never-ending cycle. After going through learning and refinements, the efficiency and effectiveness of the programs always remains an arguable issue.
E.B. Swanson initially identified three categories of maintenance: corrective, adaptive, and perfective. Four categories of software maintenance were then catalogued by Lientz and Swanson (1980). These have since been updated and normalized internationally in the ISO/IEC 14764:2006 standard.
All of the preceding take place when there is a known requirement for change.
Although these categories were supplemented by many authors like Warren et al. (1999) and
Chapin (2001), the ISO/IEC 14764:2006 international standard has kept the basic four
categories.
More recently the description of software maintenance and evolution has been done using
ontologies, which enrich the description of the many evolution activities.
Stage model
Current trends and practices are projected forward using a new model of software evolution called the staged model, which was introduced to replace conventional analysis. According to K.H. Bennett and V.T. Rajlich, the key contribution of the staged model is to separate the 'maintenance' phase into an evolution stage followed by servicing and phase-out stages.
The first version of the software system, which may lack some features, is developed during initial development, also known as the alpha stage. The architecture established during this stage must accommodate any future changes or amendments. Most references in this stage are based on scenarios or case studies. Knowledge is another important outcome of initial development; it includes knowledge of the application domain, user requirements, business rules, policies, solutions, algorithms, etc. This knowledge is also an important factor for the subsequent evolution phase.
Once the previous stage has completed successfully (and it must be completed successfully before the next stage is entered), the next stage is evolution. Users tend to change their requirements, and they prefer to see improvements or changes. Because of this, the software industry faces the challenge of a rapidly changing environment. Hence the goal of evolution is to adapt the application to ever-changing user requirements and operating environments. The first version of the application created during the previous stage might contain many faults, and those faults are fixed during the evolution stage based on more specific and accurate requirements derived from the case studies or scenarios.
The software will continuously evolve until it is no longer evolvable and then enters the stage of servicing (also known as software maturity). During this stage, only minor changes are made.
In the next stage, phase-out, there is no more servicing available for that particular software, although the software is still in production.
Lastly, in close-down, the software's use is discontinued and the users are directed towards a replacement.
Prof. Meir M. Lehman, who worked at Imperial College London from 1972 to 2002, and his
colleagues have identified a set of behaviours in the evolution of proprietary software. These
behaviours (or observations) are known as Lehman's Laws, and there are eight of them.
It is worth mentioning that the applicability of all of these laws for all types of software systems has been studied by several researchers. For example, see a presentation by Nanjangud C Narendra, where he describes a case study of an enterprise Agile project in the light of Lehman’s laws of software evolution. Some empirical observations coming from the study of open source software development appear to challenge some of the laws.
The laws predict that the need for functional change in a software system is inevitable, and not a
consequence of incomplete or incorrect analysis of requirements or bad programming. They state
that there are limits to what a software development team can achieve in terms of safely
implementing changes and new functionality.
Maturity models specific to software evolution have been developed to improve processes and to help ensure continuous rejuvenation of the software as it evolves iteratively.
The "global process" that is made by the many stakeholders (e.g. developers, users, their
managers) has many feedback loops. The evolution speed is a function of the feedback loop
structure and other characteristics of the global system. Process simulation techniques, such as system dynamics, can be useful in understanding and managing such a global process.
Problems related to software change management tend to occur in several areas: analysis and identification related problems, communication issues, decision-making challenges, effectiveness roadblocks, traceability issues and problems with tools. If we examine each of these we can see a number of important issues that frequently crop up, especially within third-generation languages (3GL).
This section focuses on a high-level review of analysis and identification related problems in software change management. How can we identify and analyze problems in our software to best understand and realize where we have errors that require correction? More importantly, how can
we change our approach to application development in a way that reduces the impact and
likelihood of errors?
Analysis and identification problems can be seen in several areas. First of all, problems with
analysis and identification are driven by concurrent and parallel development approaches. The
problems occur because with concurrent efforts it becomes more difficult to determine root
causes of program errors. This is exacerbated by the fact that standalone testing does not find the
problems leading to the error conditions. Solutions can be found by reducing the number of
developers, engaging in more frequent cycles (such as with agile development or SCRUM),
testing without compiling and ultimately by delegating more basic functions to an application
platform in a post-3GL approach.
Another factor driving analysis and identification problems is code optimization. For one thing,
optimized code, especially optimized C, C# and C++ code is very difficult to understand. In
addition, with optimized code, object oriented development tends to create a ripple effect that is
not apparent in typical source. Code optimization issues can be avoided by leveraging pre-
optimized code, i.e., avoiding heavy 3GL development projects with more advanced
development platforms.
A third factor leading to analysis and identification problems comes from the use of shared
software components. Here we see impacts across the code base and ripple effects. These can
be avoided through better pre-planning, wise use of inheritance principles and by leveraging a
platform rather than resorting to line-by-line coding.
The need for high reliability makes the problem of analysis and identification of the impact of
software changes particularly important. It is difficult to predict the impact of changes and at
times corrective actions may seem difficult or impossible. Avoid this sense of being
overwhelmed by engaging in iterative development and testing. Make use of an application
platform to better overcome challenges in the analysis and identification of software change
management problems.
Overcoming organizational politics and resistance to change is a daunting challenge for any
organization implementing new software systems. First, managers have to work hard at agreeing
on the initiative and deciding what would be best for the organization as a whole and not only for
their particular area of expertise. Once on the same page and all politics are set aside, they have
to collaboratively deal with their staff.
Resistance to change is an ongoing problem at both the individual and the organizational level, as it “impairs concerted efforts to improve performance.” Management and the project leadership
team must find ways to work with this resistance, overcome it, and successfully carry out their
new vision. The key to doing so is change management.
As defined by Nancy Lorenzi and Robert Riley, change management is “the process by which an
organization gets to its future state, its vision.” What makes it different from traditional
approaches to planning is that while they “delineate the steps on the journey,” change management, by contrast, “attempts to facilitate that journey.”
1) Top Management Support. The foundation of any successful organizational change revolves
around the leadership team, their ability to set politics aside, and to collaborate, agree, and
commit to the change process. Top management support should be included in each step of the
implementation and in all organizational levels.3 When there is consistent, managerial backing at
every level, the entire workforce is being driven toward the common goal of accepting and
adapting to the new system. Effective leadership can sharply reduce the behavioral resistance to
change, especially when dealing with new technologies.
4) Systematic Planning. The presence of a clear plan for change is a great way to boost software
implementation projects. A project vision specifies what the implementation project is meant to
achieve and how it can positively affect the organization and staff. Additionally, assessing the
readiness for change and developing a formal strategy allows for better planning and smoother
implementation.
After a clear vision is established, the leadership team must analyze their organization and assess
its readiness for change by analyzing the culture and behavior of the staff and overall
organization. If the leadership team takes the time to assess their staff and determine the
organization’s readiness for change, they can deal with the implementation and resistance from
staff much more effectively. They can also identify the key drivers of change and tie them into
all areas of the workplace, so that all staff members remain aligned to objectives.
5) Broad Participation. A company wants to engage staff within the whole life cycle of
implementation in order to keep them in the loop and responsive. As Lorenzi and Riley note,
“People who have low psychological ownership in a system and who vigorously resist its implementation can bring a ‘technically best’ system to its knees.”
Sidney Fuchs of IBM notes that to ensure that all staff adopts a specific change, they must feel
the demand for it. It is critical, therefore, to “make sure each person understands the problems
you are addressing and has a feeling of ownership for the solutions you’re proposing.”5 If
management can figure out how specific end users will benefit from the new system and convey
that to those users, they will strengthen the project significantly. The project leadership team
needs to work carefully and strategically to overcome resistance to change among staff and lead
a successful implementation process by enhancing user involvement in the process.
6) Effective Communication. Before and during any software implementation, meaningful and
effective communication at all levels of the organization is essential. This is mostly because
substantial communication allows for strong teamwork, effective planning, and end user
involvement.
Ample communication regarding the new implementation project helps to foster understanding
of the project’s vision and thus to overcome resistance to the project.4 Good communication also
heightens overall awareness of the system.2 Only with thorough and ongoing communication
among and between both management and staff can the implementation project be successful.
7) Feedback. A key to identifying the source of user resistance to a project is the feedback
management receives from staff.3 Project team leaders, the project champion, and management
should all make sure that they are providing feedback about the new system. More importantly,
they must gather feedback from their staff members and identify the consensus regarding the
new system.
Enforcers of the new system have to overcome change obstacles by considering all end user
complaints. Perhaps they are legitimate, and the system has a glitch. In any case, system issues
must be addressed immediately to avoid excessive pushback from staff. A final argument in
favor of gathering and responding to feedback is that people often respond favorably to the
implementation of a new technology when those in control of the process consider their input.4
8) Training. Training should be offered prior to, during, and after the implementation to ensure operational
end user knowledge. Management must take training seriously to avoid the adoption of an
ineffective system. There is nothing worse than a useless, unused software program after a
company has spent much time and capital investing in it.
9) Incentives. Incentives help develop strong feelings toward accepting and adopting new
systems. Incentives should be offered to not only engage staff and overcome resistance to
change, but to retain key implementation staff as well. Revised titles, overtime pay, letters of
merit, and certificates of recognition can be used as forms of incentives to foster staff
involvement and commitment to the new project. Incentives are a great way to encourage end
user involvement, increase
participation, encourage training, and strengthen the overall system.
Having a strong, well-thought-out implementation process and aligned staff is key to the success
of the new product or system: “Unless these blocks are in place, technology introductions will
fail to satisfy expectations and may even produce adverse results.”5 The leadership team must
know their staff’s abilities and the culture of their organization. They must know how to assess
their organization and get through to staff members to ensure they get the most out of their
financial investment.
Change management is certainly the most difficult part of the implementation process. Yet once
resistance is phased out strategically, change can be phased in, and metrics can be used to track
the new installment’s progress and success. Leaders have to take the time to understand user
resistance, realize where it’s coming from, and figure out a way to remove it from the
implementation process. Companies spend lots of time and money when determining what and
when to implement a new technological advancement. By utilizing change management
techniques, they can make sure they don’t lose out.
An IT audit is different from a financial statement audit. While a financial audit's purpose is to
evaluate whether an organization is adhering to standard accounting practices, the purposes of an
IT audit are to evaluate the system's internal control design and effectiveness. This includes, but
is not limited to, efficiency and security protocols, development processes, and IT governance or
oversight. Installing controls is necessary but not sufficient to provide adequate security. People responsible for security must consider whether the controls are installed as intended, whether they are effective, whether any breach in security has occurred, and if so, what actions can be taken to prevent future breaches. These inquiries must be answered by independent and unbiased observers.
These observers are performing the task of information systems auditing. In an Information
Systems (IS) environment, an audit is an examination of information systems, their inputs,
outputs, and processing.
The primary functions of an IT audit are to evaluate the systems that are in place to guard an
organization's information. Specifically, information technology audits are used to evaluate the
organization's ability to protect its information assets and to properly dispense information to
authorized parties. The IT audit aims to evaluate the following:
• Will the organization's computer systems be available for the business at all times when required? (availability)
• Will the information in the systems be disclosed only to authorized users? (security and confidentiality)
• Will the information provided by the system always be accurate, reliable, and timely? (integrity)
In this way, the audit hopes to assess the risk to the company's valuable asset (its information) and establish methods of minimizing those risks.
Also Known As: Information Systems Audit, ADP audits, EDP audits, computer audits
Types of IT audits
Various authorities have created differing taxonomies to distinguish the various types of IT
audits. Goodman & Lawless state that there are three specific systematic approaches to carry out
an IT audit:
Technological innovation process audit: this audit constructs a risk profile for existing and new projects, and assesses the length and depth of the company's experience in its chosen technologies.
Others describe the spectrum of IT audits using five categories:
Systems and Applications: An audit to verify that systems and applications are
appropriate, are efficient, and are adequately controlled to ensure valid, reliable, timely,
and secure input, processing, and output at all levels of a system's activity.
Information Processing Facilities: An audit to verify that the processing facility is
controlled to ensure timely, accurate, and efficient processing of applications under
normal and potentially disruptive conditions.
Systems Development: An audit to verify that the systems under development meet the
objectives of the organization and to ensure that the systems are developed in
accordance with generally accepted standards for systems development.
Management of IT and Enterprise Architecture: An audit to verify that IT
management has developed an organizational structure and procedures to ensure a
controlled and efficient environment for information processing.
Client/Server, Telecommunications, Intranets, and Extranets: An audit to verify
that telecommunications controls are in place on the client (computer receiving
services), server, and on the network connecting the clients and servers.
And some lump all IT audits as being one of only two types: "general control review" audits or
"application control review" audits.
A number of IT audit professionals in the Information Assurance field consider there to be three
fundamental types of controls, regardless of the type of audit to be performed. Many frameworks
and standards break controls into different disciplines or arenas, terming them "Security
Controls", "Access Controls" or "IA Controls" in an effort to define the types of controls
involved. At a more fundamental level, however, these can all be shown to consist of three types
of controls: Protective/Preventative Controls, Detective Controls and Reactive/Corrective
Controls.
IS auditing considers all the potential hazards and controls in information systems. It focuses on
issues like operations, data integrity, software applications, security, privacy, budgets and
expenditures, cost control, and productivity. Guidelines are available to assist auditors in their
jobs, such as those from the Information Systems Audit and Control Association (ISACA).
IT Audit process
The following are basic steps in performing the Information Technology Audit Process:
1. Planning
2. Studying and Evaluating Controls
3. Testing and Evaluating Controls
4. Reporting
5. Follow-up
Security
Auditing information security is a vital part of any IT audit and is often understood to be the
primary purpose of an IT Audit. The broad scope of auditing information security includes such
topics as data centers (the physical security of data centers and the logical security of databases,
servers and network infrastructure components), networks and application security. Like most
technical realms, these topics are always evolving; IT auditors must constantly continue to
expand their knowledge and understanding of the systems and environments they audit.
Several training and certification organizations have evolved. Currently, the major certifying
bodies, in the field, are the Institute of Internal Auditors (IIA), the SANS Institute (specifically,
the audit specific branch of SANS and GIAC) and ISACA. While CPAs and other traditional
auditors can be engaged for IT Audits, organizations are well advised to require that individuals
with some type of IT specific audit certification are employed when validating the controls
surrounding IT systems.
The concept of IT auditing was formed in the mid-1960s. Since that time, IT auditing has gone
through numerous changes, largely due to advances in technology and the incorporation of
technology into business.
Currently, many companies depend entirely on information technology to operate their business,
for example telecommunication and banking companies. In other types of business, IT still plays
a major part: workflow systems replace paper request forms, application controls replace less
reliable manual controls, and ERP applications consolidate the organization's processing into a
single application. Accordingly, the importance of IT audit is constantly increasing. One of its
most important roles is to audit the critical systems that support the financial audit or that are
required by specific regulations such as SOX.
Audit personnel
Qualifications
The CISM and CAP credentials are the two newest security auditing credentials, offered by
ISACA and (ISC)² respectively. Strictly speaking, only the CISA or GSNA titles sufficiently
demonstrate competence in both information technology and audit, with the CISA being more
audit focused and the GSNA being more information technology focused.
Outside of the US, various credentials exist. For example, the Netherlands has the RE credential
(granted by NOREA, the Dutch IT-auditors' association), which among other things requires a
post-graduate IT-audit education from an accredited university, subscription to a Code of Ethics,
and adherence to continuous education requirements.
The performance of an IS Audit covers several facets of the financial and organizational
functions of our Clients. The Information Systems Audit flow runs from the Financial
Statements, through the Control Environment, to the underlying Information Systems Platforms.
In the audit planning phase we plan the information system coverage to comply with the audit
objectives specified by the Client and to ensure compliance with all Laws and Professional
Standards. The first step is to obtain an Audit Charter from the Client detailing the purpose of
the audit and the management responsibility, authority and accountability of the Information
Systems Audit function, as follows:
1. Responsibility: The Audit Charter should define the mission, aims, goals and objectives
of the Information System Audit. At this stage define the Key Performance Indicators
and an Audit Evaluation process;
2. Authority: The Audit Charter should clearly specify the Authority assigned to the
Information Systems Auditors with relation to the Risk Assessment work that will be
carried out, right to access the Client’s information, the scope and/or limitations to the
scope, the Client’s functions to be audited and the auditee expectations; and
3. Accountability: The Audit Charter should clearly define reporting lines, appraisals,
assessment of compliance and agreed actions.
The Audit Charter should be approved and agreed upon by an appropriate level within the
Client’s Organization.
In addition to the Audit Charter, we should be able to obtain a written representation (“Letter of
Representation”) from the Client’s Management acknowledging:
1. Their responsibility for the design and implementation of the Internal Control Systems
affecting the IT Systems and processes
2. Their willingness to disclose to the Information Systems Auditor their knowledge of
irregularities and/or illegal acts affecting their organisation and involving management or
employees with significant roles in internal control.
3. Their willingness to disclose to the IS Auditor the results of any risk assessment indicating
that a material misstatement may have occurred.
Risk is the possibility of an act or event occurring that would have an adverse effect on the
organisation and its information systems. Risk can also be the potential that a given threat will
exploit vulnerabilities of an asset or group of assets to cause loss of, or damage to, the assets. It is
ordinarily measured by a combination of effect and likelihood of occurrence.
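As a simple illustration of how such a measurement can be made, the following minimal Python
sketch scores a risk as the product of a likelihood rating and an impact rating. The rating scales,
the threshold bands and the example risk are assumptions chosen purely for illustration; they are
not prescribed by any audit standard.

# Minimal sketch: risk exposure estimated as likelihood x impact.
# The 1-5 scales and rating bands below are illustrative assumptions.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Return a 1-25 exposure score for a single threat/asset pair."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_rating(score: int) -> str:
    """Map the numeric score to a coarse rating band."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: unauthorised access to a payroll database.
score = risk_score("possible", "major")   # 3 x 4 = 12
print(score, risk_rating(score))          # prints: 12 medium

A high-scoring risk would normally attract more detailed audit work than a low-scoring one.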
More and more organisations are moving to a risk-based audit approach, which can be adapted to
develop and improve the continuous audit process. This approach is used to assess risk and to
assist the IS auditor in deciding whether to perform compliance testing or substantive testing. In
a risk-based audit approach, IS auditors do not rely on risk assessment alone; they also rely on
internal and operational controls and on their knowledge of the organisation. This type of risk
assessment can help relate the cost/benefit analysis of a control to the known risk, allowing
practical choices.
The process of quantifying risk is called Risk Assessment. Risk Assessment helps the auditor
decide which areas to audit and how much audit effort to allocate to them. Three components of
audit risk are normally considered:
Inherent Risk: Inherent risk is the susceptibility of an audit area to error which could be
material, individually or in combination with other errors, assuming that there were no related
internal controls. In assessing the inherent risk, the IS auditor should consider both pervasive and
detailed IS controls. This does not apply to circumstances where the IS auditor's assignment is
related to pervasive IS controls only. Pervasive IS controls are general controls designed to
manage and monitor the IS environment and which therefore affect all IS-related activities, for
example organisation-wide IS security policies and the organisation of the IS function.
Control Risk: Control risk is the risk that an error which could occur in an audit area, and which
could be material, individually or in combination with other errors, will not be prevented or
detected and corrected on a timely basis by the internal control system. For example, the control
risk associated with manual reviews of computer logs can be high because activities requiring
investigation are often easily missed owing to the volume of logged information. The control risk
associated with computerized data validation procedures is ordinarily low because the processes
are consistently applied. The IS auditor should assess the control risk as high unless relevant
internal controls are:
Identified
Evaluated as effective
Tested and proved to be operating appropriately
Detection Risk: Detection risk is the risk that the IS auditor's substantive procedures will not
detect an error which could be material, individually or in combination with other errors. In
determining the level of substantive testing required, the IS auditor should consider both the
assessment of inherent and control risk and the level of assurance required from the engagement.
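These three components are often related through the audit risk model, in which overall audit
risk is treated as the product of inherent, control and detection risk (AR = IR x CR x DR). The
model itself is standard audit practice rather than something stated above, and the percentages in
the Python sketch below are illustrative assumptions only.

# Minimal sketch of the audit risk model: AR = IR x CR x DR.
# Given an acceptable audit risk and the assessed inherent and control
# risk, the planned detection risk that the substantive procedures must
# achieve can be backed out. All figures below are illustrative.

def planned_detection_risk(acceptable_audit_risk: float,
                           inherent_risk: float,
                           control_risk: float) -> float:
    """DR = AR / (IR * CR); a lower DR calls for more substantive testing."""
    return acceptable_audit_risk / (inherent_risk * control_risk)

# Example: acceptable audit risk 5%, inherent risk 80%, control risk 50%.
dr = planned_detection_risk(0.05, 0.80, 0.50)
print(f"Planned detection risk: {dr:.2%}")   # prints: Planned detection risk: 12.50%

The lower the planned detection risk, the more extensive and rigorous the substantive testing that
must be performed.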
A risk based approach to an Information Systems Audit will enable us to develop an overall and
effective IS Audit plan which will consider all the potential weaknesses and /or absence of
Controls and determine whether this could lead to a significant deficiency or material weakness.
In order to perform an effective Risk Assessment, we will need to understand the Client's
Business Environment and Operations. Usually the first phase in carrying out a Risk Based IS
Audit is to obtain an understanding of the Audit Universe. In understanding the Audit Universe
we analyse the Client's key business processes, the information systems that support them and
the risks associated with each; this business process analysis forms the basis of the risk
assessment.
In the performance of Audit Work the Information Systems Audit Standards require us to
provide supervision, gather audit evidence and document our audit work. We achieve this
objective through the audit planning, control testing and evidence-gathering activities described
below.
Based on our risk assessment and upon the identification of the risky areas, we move ahead to
develop an Audit Plan and Audit Program. The Audit Plan will detail the nature, objectives,
timing and the extent of the resources required in the audit.
Based on the compliance testing carried out in the prior phase, we develop an audit program
detailing the nature, timing and extent of the audit procedures. In the Audit Plan various Control
Tests and Reviews can be done. They are sub-divided into compliance tests, which confirm that
controls are in place and operating as intended, and substantive tests, which verify the
completeness and accuracy of the underlying data and balances. Control review tests can be
performed under either category.
The Control Objectives for Information and related Technology (COBIT) is a set of best
practices (a framework) for information technology (IT) management created by the Information
Systems Audit and Control Association (ISACA) and the IT Governance Institute (ITGI); the
first edition was published in 1996.
COBIT provides managers, auditors, and IT users with a set of generally accepted measures,
indicators, processes and best practices to assist them in maximizing the benefits derived through
the use of information technology and developing appropriate IT governance and control in a
company.
The Framework comprises a set of 34 high-level Control Objectives, one for each of the IT
processes listed in the framework. These are then grouped into four domains: planning and
organisation, acquisition and implementation, delivery and support, and monitoring. This
structure covers all aspects of information processing and storage and the technology that
supports it. By addressing these 34 high-level control objectives, we will ensure that an adequate
control system is provided for the IT environment.
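As an illustration of this structure, the short Python sketch below models the four domains as a
mapping to a handful of representative processes. The process names and identifiers are examples
in the style of COBIT 4.x and are included for illustration only, not as a complete or authoritative
list.

# Minimal sketch of the COBIT structure: domains group IT processes,
# each of which carries a high-level control objective. Only a few
# representative processes are listed; identifiers are illustrative.
COBIT_DOMAINS = {
    "Planning and Organisation": [
        "PO1 Define a strategic IT plan",
        "PO9 Assess and manage IT risks",
    ],
    "Acquisition and Implementation": [
        "AI2 Acquire and maintain application software",
        "AI6 Manage changes",
    ],
    "Delivery and Support": [
        "DS4 Ensure continuous service",
        "DS5 Ensure systems security",
    ],
    "Monitoring": [
        "ME1 Monitor and evaluate IT performance",
    ],
}

def processes_in_scope(domains):
    """Flatten the selected domains into a single review checklist."""
    return [process for domain in domains for process in COBIT_DOMAINS[domain]]

print(processes_in_scope(["Delivery and Support", "Monitoring"]))
# ['DS4 Ensure continuous service', 'DS5 Ensure systems security',
#  'ME1 Monitor and evaluate IT performance']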
We shall apply the COBIT framework in planning, executing and reporting the results of the
audit. This will enable us to review the General Controls associated with IT Governance issues.
Our review shall cover all four domains: planning and organisation, acquisition and
implementation, delivery and support, and monitoring.
The above control objectives will be matched with the business control objectives so that
specific audit procedures can be applied that provide information on the controls built into the
application, indicating the areas of improvement that we need to focus on.
An Application Controls Review provides management with reasonable assurance that
transactions are processed as intended and that the information from the system is accurate,
complete and timely. An Application Controls review covers the following control areas (a short
illustrative sketch of a programmed input/processing control follows the list):
1. Data Origination controls are controls established to prepare and authorize data to be
entered into an application. The evaluation will involve a review of source document
design and storage, User procedures and manuals, Special purpose forms, Transaction ID
codes, Cross reference indices and Alternate documents where applicable. It will also
involve a review of the authorization procedures and separation of duties in the data
capture process.
2. Input preparation controls are controls relating to Transaction numbering, Batch serial
numbering, Processing, Logs analysis and a review of transmittal and turnaround
documents
3. Transmission controls involve batch proofing and balancing, Processing schedules,
Review of Error messages, corrections monitoring and transaction security
4. Processing controls ensure the integrity of the data as it undergoes the processing phase
including Relational Database Controls, Data Storage and Retrieval
5. Output controls involve procedures relating to report distribution, reconciliation, output
error processing and records retention.
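As noted above, a short illustrative sketch follows. The Python code below shows the kind of
programmed input and processing control that such a review evaluates: record-level field and
range validation plus a batch control-total reconciliation. The field names, the authorisation limit
and the sample batch are all assumptions made for the example, not part of any real application.

# Minimal sketch of programmed application controls on an input batch:
# record-level validation (required fields and an amount range check)
# plus a batch-level control total. Field names and limits are illustrative.
from decimal import Decimal

REQUIRED_FIELDS = ("transaction_id", "account", "amount")
MAX_AMOUNT = Decimal("1000000")   # illustrative authorisation limit

def validate_record(record: dict) -> list:
    """Return a list of error messages for a single transaction record."""
    errors = [f"missing {field}" for field in REQUIRED_FIELDS if not record.get(field)]
    amount = Decimal(str(record.get("amount") or "0"))
    if amount <= 0 or amount > MAX_AMOUNT:
        errors.append(f"amount out of range: {amount}")
    return errors

def validate_batch(records: list, expected_total: Decimal) -> dict:
    """Apply the record checks and reconcile the batch control total."""
    record_errors = {}
    actual_total = Decimal("0")
    for record in records:
        errors = validate_record(record)
        if errors:
            record_errors[record.get("transaction_id", "<unknown>")] = errors
        actual_total += Decimal(str(record.get("amount") or "0"))
    return {"record_errors": record_errors,
            "control_total_ok": actual_total == expected_total}

batch = [
    {"transaction_id": "T001", "account": "4010", "amount": "250.00"},
    {"transaction_id": "T002", "account": "", "amount": "125.50"},
]
print(validate_batch(batch, Decimal("375.50")))
# {'record_errors': {'T002': ['missing account']}, 'control_total_ok': True}

Rejected records and control-total mismatches would be logged and reported for follow-up,
which is exactly the behaviour an application controls review looks for.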
The use of Computer Assisted Audit Techniques (CAATs) in the performance of an IS Audit
The Information Systems Audit Standards require that, during the course of an audit, the IS
auditor obtain sufficient, reliable and relevant evidence to achieve the audit objectives. The audit
findings and conclusions are to be supported by appropriate analysis and interpretation of this
evidence. CAATs are useful in achieving this objective.
Computer Assisted Audit Techniques (CAATs) are important tools for the IS auditor in
performing audits. They include many types of tools and techniques, such as generalized audit
software, utility software, test data, application software tracing and mapping, and audit expert
systems. For us, our CAATs include ACL Data Analysis Software and the Information Systems
Audit Toolkit (ISAT).
CAATs may produce a large proportion of the audit evidence developed on IS audits and, as a
result, the IS auditor should carefully plan for and exhibit due professional care in their use. In
particular, before relying on CAATs the IS auditor should ensure that:
1. Data files, such as detailed transaction files, are retained and made available before the
onset of the audit.
2. Sufficient rights have been obtained to the client's IS facilities, programs/systems and data.
3. Tests have been properly scheduled to minimize the effect on the organization's
production environment.
4. The effect of any changes to the production programs/systems has been properly
considered.
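To make this concrete, the Python sketch below shows two typical generalized-audit-software
style tests that a CAAT (whether a commercial tool such as ACL or a simple script) can automate
over an extracted transaction file: duplicate document numbers and gaps in what should be an
unbroken numeric sequence. The CSV file name and column layout are assumptions for the
example only.

# Minimal sketch of two CAAT-style tests on an extracted transaction
# file: duplicate document numbers and sequence gaps. The file name
# and the "doc_no" column are illustrative assumptions.
import csv
from collections import Counter

def load_transactions(path: str) -> list:
    """Read the extracted transaction file as a list of row dictionaries."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def duplicate_documents(rows: list) -> list:
    """Document numbers appearing more than once."""
    counts = Counter(row["doc_no"] for row in rows)
    return sorted(doc for doc, n in counts.items() if n > 1)

def sequence_gaps(rows: list) -> list:
    """Values missing from what should be an unbroken numeric sequence."""
    numbers = sorted(int(row["doc_no"]) for row in rows)
    return sorted(set(range(numbers[0], numbers[-1] + 1)) - set(numbers))

if __name__ == "__main__":
    rows = load_transactions("transactions_extract.csv")  # hypothetical extract
    print("Duplicate document numbers:", duplicate_documents(rows))
    print("Sequence gaps:", sequence_gaps(rows))

Exceptions reported by tests like these are followed up with the client and retained as audit
evidence.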
PHASE 4: Reporting
Upon completion of the audit tests, the Information Systems Auditor is required to produce an
appropriate report communicating the results of the IS Audit. An IS Audit report should: