
APPLICATION DEVELOPMENT IN EDUCATION

INTRODUCTION: OVERVIEW OF SYSTEMS DEVELOPMENT PROCESS

SYSTEMS DEVELOPMENT MODELS AND METHODOLOGIES

SYSTEMS ANALYSIS & DESIGN

SYSTEMS DEVELOPMENT

INTRODUCTION TO ROBOTICS

SYSTEMS DEPLOYMENT

INTRODUCTION: OVERVIEW OF SYSTEMS DEVELOPMENT PROCESS

SYSTEMS DEVELOPMENT - OVERVIEW OF SYSTEMS AND DESIGN, SYSTEM DEVELOPMENT MANAGEMENT LIFE-CYCLE

Systems development is the process of defining, designing, testing, and implementing a new software application or program. It comprises the internal development of customized systems, the establishment of database systems, or the acquisition of third-party software. Written standards and techniques must govern all information systems processing functions. Company management must define and enforce standards and adopt suitable system development life cycle practices that manage the process of developing, acquiring, implementing, and maintaining computerized information systems and associated technology.

System development methodologies are promoted in order to improve the management and control of the software development process, to structure and simplify the procedure, and to standardize the development process and product by stipulating the actions to be performed and the methods to be used. It is often implicitly presumed that the use of a system development methodology will increase system development productivity and quality.

SOFTWARE DEVELOPMENT LIFE CYCLE

The Software Development Life Cycle (SDLC) offers a systematic process for building as well as delivering software applications. It is a multistep, iterative process. Development teams rely on a system development life cycle to create effective software with as few issues as possible.

The generalized version of an SDLC has distinct stages: planning, analysis, design, development, integration and testing, implementation, and operations and maintenance. Each of them is briefly explained in the following section.

PHASES OF THE SOFTWARE DEVELOPMENT LIFE CYCLE

1. Planning

This is the first phase in the systems development process. It identifies whether or not there is a need for a new system to achieve a business's strategic objectives. This is a preliminary plan (or a feasibility study) for a company's business initiative to acquire the resources to build an infrastructure or to modify or improve a service. The company might also be trying to meet or exceed expectations for its employees, customers and stakeholders. The purpose of this step is to find out the scope of the problem and determine solutions. Resources, costs, time, benefits and other items should be considered at this stage.

2. Systems Analysis and Requirements

The second phase is where businesses will work on the source of their problem or the need for a
change. In the event of a problem, possible solutions are submitted and analysed to identify the
best fit for the ultimate goal(s) of the project. This is where teams consider the functional
requirements of the project or solution. It is also where system analysis takes place—or
analysing the needs of the end users to ensure the new system can meet their expectations.
Systems analysis is vital in determining what a business’s needs are, as well as how they can be
met, who will be responsible for individual pieces of the project, and what sort of timeline should
be expected.

There are several tools businesses can use that are specific to the second phase. They include:

 CASE (Computer Aided Systems/Software Engineering)

 Requirements gathering

 Structured analysis

3. Systems Design

The third phase describes, in detail, the necessary specifications, features and operations that will satisfy the functional requirements of the proposed system. This is the step for end users to discuss and determine their specific business information needs for the proposed system. It's during this phase that they will consider the essential components (hardware and/or software), structure (networking capabilities), and processing and procedures for the system to accomplish its objectives.

4. Development

The fourth phase is when the real work begins, in particular when a programmer, network engineer and/or database developer are brought on to do the major work on the project. This work includes using a flow chart to ensure that the process of the system is properly organized. The development phase marks the end of the initial section of the process. Additionally, this phase signifies the start of production. The development stage is also characterized by installation and change. Focusing on training can be a huge benefit during this phase.

5. Integration and Testing

The fifth phase involves systems integration and system testing (of programs and procedures)—
normally carried out by a Quality Assurance (QA) professional—to determine if the proposed
design meets the initial set of business goals. Testing may be repeated, specifically to check for
errors, bugs and interoperability. This testing will be performed until the end user finds it
acceptable. Another part of this phase is verification and validation, both of which will help
ensure the program’s successful completion.

6. Implementation
The sixth phase is when the majority of the code for the program is written. Additionally, this
phase involves the actual installation of the newly-developed system. This step puts the project
into production by moving the data and components from the old system and placing them in the
new system via a direct cutover. While this can be a risky (and complicated) move, the cutover
typically happens during off-peak hours, thus minimizing the risk. Both system analysts and end-users should now see the realization of the project that has implemented changes.

7. Operations and Maintenance

In the operations and maintenance phase, developers watch the software for bugs or defects. If they find one, they create a bug report. During maintenance, it is important to watch for opportunities to begin the development cycle again.

A sign that this phase is working well is that developers are able to quickly identify and resolve problems.

During this stage, support specialists will report issues, product owners will help prioritize them,
and developers will work with testers to make improvements.

BENEFITS OF A WELL-DEFINED SYSTEM DEVELOPMENT LIFE CYCLE

There are numerous benefits to deploying a system development life cycle, including the ability to pre-plan and analyze structured phases and goals. The goal-oriented processes of SDLC are not limited to a one-size-fits-all methodology and can be adapted to meet changing needs. If the life cycle is well defined for your business, you can:

 Have a clear view of the entire project, the personnel involved, staffing requirements, a
defined timeline, and precise objectives to close each phase.

 Base costs and staffing decisions on concrete information and need.

 Provide verification, goals, and deliverables that meet design and development standards
for each step of the project, developing extensive documentation throughout.

 Provide developers a measure of control through the iterative, phased approach, which
usually begins with an analysis of costs and timelines.

 Improve the quality of the final system with verification at each phase.

DISADVANTAGES OF A STRUCTURED SYSTEM DEVELOPMENT LIFE CYCLE

In these same areas, there are some who find disadvantages when following a structured SDLC.
Some of the downfalls include:

 Many of the methods are considered inflexible, and some suffer from outdated
processes.

 Since you base the plan on requirements and assumptions made well ahead of the
project’s deployment, many practitioners identify difficulty in responding to changing
circumstances in the life cycle.

 Some consider the structured nature of SDLC to be time and cost prohibitive.

 Some teams find it too complex to estimate costs, are unable to define details early on in
the project, and do not like rigidly defined requirements.

 Testing at the end of the life cycle does not suit all development teams. Many
prefer to test throughout their process.

 The documentation involved in a structured SDLC approach can be overwhelming.

 Teams who prefer to move between stages quickly and even move back to a previous
phase find the structured phase approach challenging.

SYSTEMS DEVELOPMENT MODELS AND METHODOLOGIES

THE WATERFALL MODEL

The WATERFALL MODEL is a sequential model that divides software development into predefined phases. Each phase must be completed before the next phase can begin, with no overlap between the phases.

The Waterfall Model signifies a traditional type of system development project lifecycle. It builds upon the basic steps associated with the system development project lifecycle and uses a top-down development cycle in completing the system.

Different Phases of Waterfall Model in Software Engineering

PHASES             DIFFERENT ACTIVITIES PERFORMED IN EACH STAGE

Requirement        During this phase, detailed requirements of the software
Gathering stage    system to be developed are gathered from the client

Design stage       Plan the programming language (for example Java, PHP, .NET),
                   the database (for example Oracle, MySQL), and other
                   high-level technical details of the project

Build stage        After the design stage comes the build stage, which is
                   nothing but coding the software

Test stage         In this phase, you test the software to verify that it is
                   built as per the specifications given by the client

Deployment stage   Deploy the application in the respective environment

Maintenance stage  Once your system is ready to use, you may later need to
                   change the code as per customer requests
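The defining constraint of the Waterfall model, that each phase must be fully completed before the next can begin, can be sketched in code. This is an illustrative Python sketch only; the class and phase names are invented for the example and are not part of any standard tooling:

```python
# Illustrative sketch only: the Waterfall rule that each phase must be
# completed, in order, before the next phase can begin.

PHASES = ["requirements", "design", "build", "test", "deployment", "maintenance"]

class WaterfallProject:
    def __init__(self):
        self.completed = []  # phases finished so far, in order

    def complete(self, phase):
        expected = PHASES[len(self.completed)]
        if phase != expected:
            # no overlap and no skipping between phases
            raise ValueError(f"cannot finish '{phase}' before '{expected}'")
        self.completed.append(phase)

project = WaterfallProject()
project.complete("requirements")
project.complete("design")
# project.complete("test")  # would raise: "build" has not been completed yet
```

The strict ordering enforced here is exactly what makes the model unsuitable when requirements change frequently: there is no way back to an earlier phase.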

WHEN TO USE SDLC WATERFALL MODEL

Waterfall model can be used when

 Requirements are not changing frequently

 Application is not complicated and big

 Project is short

 Requirement is clear

 Environment is stable

 Technology and tools used are not dynamic and are stable

 Resources are available and trained

ADVANTAGES AND DISADVANTAGES OF THE WATERFALL MODEL

ADVANTAGES

 Each phase must be completed before the next phase of development begins

 Suited for smaller projects where requirements are well defined

 Quality assurance tests (verification and validation) are performed before completing each stage

 Elaborate documentation is done at every phase of the software's development cycle

 The project is completely dependent on the project team, with minimum client intervention

 Any changes in the software are made during the process of development

DISADVANTAGES

 An error can be fixed only during its phase

 It is not desirable for complex projects where requirements change frequently

 The testing period comes quite late in the development process

 Documentation occupies a lot of the developers' and testers' time

 Clients' valuable feedback cannot be included in the ongoing development phase

 Small changes or errors that arise in the completed software may cause a lot of problems

PROTOTYPING MODEL

The Prototyping Model is a software development model in which a prototype is built, tested, and reworked until an acceptable prototype is achieved. It also creates a base from which to produce the final system or software. It works best in scenarios where the project's requirements are not known in detail. It is an iterative, trial-and-error method that takes place between the developer and the client.

PROTOTYPING MODEL PHASES

1. Requirements gathering and analysis

A prototyping model starts with requirement analysis. In this phase, the requirements of the system are defined in detail. During the process, the users of the system are interviewed to find out what their expectations from the system are.

2. Quick design

The second phase is a preliminary design or a quick design. In this stage, a simple design of the
system is created. However, it is not a complete design. It gives a brief idea of the system to the
user. The quick design helps in developing the prototype.

3. Build a Prototype

In this phase, an actual prototype is designed based on the information gathered from quick
design. It is a small working model of the required system.

4. Initial user evaluation

In this stage, the proposed system is presented to the client for an initial evaluation. It helps to find out the strengths and weaknesses of the working model. Comments and suggestions are collected from the customer and provided to the developer.

5. Refining prototype

If the user is not happy with the current prototype, you need to refine the prototype according to
the user's feedback and suggestions.

This phase will not be over until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the approved final prototype.

6. Implement Product and Maintain

Once the final system is developed based on the final prototype, it is thoroughly tested and deployed to production. The system undergoes routine maintenance to minimize downtime and prevent large-scale failures.
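The build-evaluate-refine cycle described in phases 3 to 5 can be sketched as a simple loop. This is a hypothetical Python sketch; the function names and the dictionary shape of the prototype are illustrative assumptions, not part of the model itself:

```python
# Hypothetical sketch of the prototyping cycle: build a small working model,
# collect user feedback, and refine until the prototype is accepted.

def build_prototype(requirements):
    return {"features": list(requirements)}  # small working model

def refine(prototype, feedback):
    prototype["features"].extend(feedback)   # apply user suggestions
    return prototype

def prototyping_cycle(requirements, evaluate, max_rounds=10):
    prototype = build_prototype(requirements)         # quick design + build
    for _ in range(max_rounds):
        accepted, feedback = evaluate(prototype)      # initial user evaluation
        if accepted:
            return prototype                          # basis for the final system
        prototype = refine(prototype, feedback)       # refining the prototype
    raise RuntimeError("requirements never stabilised")

# Example: the user accepts only once a 'search' feature is present.
result = prototyping_cycle(
    ["edit", "save"],
    lambda p: ("search" in p["features"], ["search"]),
)
```

The `max_rounds` cap reflects the need for planned and controlled prototyping: without a limit, the loop mirrors the excessive change requests listed among the model's drawbacks.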

TYPES OF PROTOTYPING MODELS

Four types of Prototyping models are:

 Rapid Throwaway prototypes

 Evolutionary prototype

 Incremental prototype

 Extreme prototype

Rapid Throwaway Prototype

Rapid throwaway prototyping is based on the preliminary requirements. A prototype is quickly developed to show how the requirements will look visually. The customer's feedback helps drive changes to the requirements, and the prototype is created again until the requirements are baselined.

In this method, a developed prototype will be discarded and will not be a part of the ultimately
accepted prototype. This technique is useful for exploring ideas and getting instant feedback for
customer requirements.

Evolutionary Prototyping

Here, the prototype developed is incrementally refined based on the customer's feedback until it is finally accepted. It helps you to save time as well as effort, because developing a prototype from scratch for every iteration of the process can sometimes be very frustrating.

This model is helpful for a project which uses a new technology that is not well understood. It is
also used for a complex project where every functionality must be checked once. It is helpful
when the requirement is not stable or not understood clearly at the initial stage.

Incremental Prototyping

In incremental prototyping, the final product is decomposed into different small prototypes which are developed individually. Eventually, the different prototypes are merged into a single product.
This method is helpful to reduce the feedback time between the user and the application
development team.

Extreme Prototyping:

The extreme prototyping method is mostly used for web development. It consists of three sequential phases:

 Basic prototype with the entire existing page is present in the HTML format.

 You can simulate data process using a prototype services layer.

 The services are implemented and integrated into the final prototype.

BEST PRACTICES OF PROTOTYPING

Here, are a few things which you should watch for during the prototyping process:

 You should use Prototyping when the requirements are unclear

 It is important to perform planned and controlled Prototyping.

 Regular meetings are vital to keep the project on time and avoid costly delays.

 The users and the designers should be aware of the prototyping issues and pitfalls.

 At a very early stage, you need to approve a prototype and only then allow the team to
move to the next step.

 In software prototyping method, you should never be afraid to change earlier decisions if
new ideas need to be deployed.

 You should select the appropriate step size for each version.

 Implement important features early on so that if you run out of time, you still
have a worthwhile system.

ADVANTAGES OF THE PROTOTYPING MODEL

Here, are important pros/benefits of using Prototyping models:

 Users are actively involved in development. Therefore, errors can be detected in the
initial stage of the software development process.

 Missing functionality can be identified, which helps to reduce the risk of failure as
Prototyping is also considered as a risk reduction activity.

 Helps team members to communicate effectively

 Customer satisfaction exists because the customer can feel the product at a very early
stage.

 There will be hardly any chance of software rejection.

 Quicker user feedback helps you to achieve better software development solutions.

 Allows the client to compare if the software code matches the software specification.

 It helps you to find out the missing functionality in the system.

 It also identifies the complex or difficult functions.

 Encourages innovation and flexible designing.

 It is a straightforward model, so it is easy to understand.

 No need for specialized experts to build the model

 The prototype serves as a basis for deriving a system specification.

 The prototype helps to gain a better understanding of the customer's needs.

 Prototypes can be changed and even discarded.

 A prototype also serves as the basis for operational specifications.

 Prototypes may offer early training for future users of the software system.

DISADVANTAGES OF THE PROTOTYPING MODEL

Here, are important cons/drawbacks of prototyping model:

 Prototyping is a slow and time-consuming process.

 The cost of developing a prototype can be a waste, as the prototype is ultimately thrown
away.

 Prototyping may encourage excessive change requests.

 Sometimes customers may not be willing to participate in the iteration cycle for a
long duration.

 There may be far too many variations in software requirements, as the
prototype is repeatedly evaluated by the customer.

 Poor documentation because the requirements of the customers are changing.

 It is very difficult for software developers to accommodate all the changes demanded by
the clients.

 After seeing an early prototype model, the customers may think that the actual product
will be delivered to them soon.

 The client may lose interest in the final product when he or she is not happy with the
initial prototype.

 Developers who want to build prototypes quickly may end up building sub-standard
development solutions.

Summary

 In Software Engineering, the prototype methodology is a software development model in
which a prototype is built, tested and then reworked when needed until an acceptable
prototype is achieved.

 1) Requirements gathering and analysis, 2) Quick design, 3) Build a Prototype, 4) Initial
user evaluation, 5) Refining prototype, 6)Implement Product and Maintain; are 6 steps of
the prototyping process

 Types of prototyping models are 1) Rapid Throwaway prototype 2) Evolutionary
prototype 3) Incremental prototype 4) Extreme prototype

 Regular meetings are essential to keep the project on time and avoid costly delays in
prototyping approach.

 Missing functionality can be identified, which helps to reduce the risk of failure, as
prototyping is also considered a risk-reduction activity in SDLC.

 Prototyping may encourage excessive change requests.

THE RAD MODEL

The RAD Model, or Rapid Application Development model, is a software development process based on prototyping without any specific planning. In the RAD model, less attention is paid to planning and more priority is given to the development tasks. It aims at developing software in a short span of time.

SDLC RAD modelling has following phases

 Business Modeling
 Data Modeling
 Process Modeling
 Application Generation
 Testing and Turnover

It focuses on the input-output source and destination of the information. It emphasizes delivering projects in small pieces; larger projects are divided into a series of smaller projects. The main features of RAD modelling are that it focuses on the reuse of templates, tools, processes, and code.

DIFFERENT PHASES OF RAD MODEL

The following are the five major phases of the Rapid Application Development Model.

RAD MODEL PHASES        ACTIVITIES PERFORMED IN RAD MODELING

Business Modeling       On the basis of the flow of information and its
                        distribution between various business channels, the
                        product is designed

Data Modeling           The information collected from business modelling is
                        refined into a set of data objects that are significant
                        for the business

Process Modeling        The data objects declared in the data modelling phase
                        are transformed to achieve the information flow
                        necessary to implement a business function

Application Generation  Automated tools are used for the construction of the
                        software, to convert process and data models into
                        prototypes

Testing and Turnover    As prototypes are individually tested during every
                        iteration, the overall testing time is reduced in RAD

WHEN TO USE RAD METHODOLOGY?

 When a system needs to be produced in a short span of time (2-3 months)


 When the requirements are known
 When the user will be involved all through the life cycle
 When technical risk is less
 When there is a necessity to create a system that can be modularized in 2-3 months of
time
 When a budget is high enough to afford designers for modelling along with the cost of
automated tools for code generation

RAPID APPLICATION DEVELOPMENT ADVANTAGES AND DISADVANTAGES

ADVANTAGES OF RAD MODEL

 Flexible and adaptable to changes

 It is useful when you have to reduce the overall project risk

 It is easier to transfer deliverables, as scripts, high-level abstractions and intermediate codes are used

 Due to code generators and code reuse, there is a reduction of manual coding

 Due to its prototyping nature, there is a possibility of fewer defects

 Each phase in RAD delivers the highest-priority functionality to the client

 With fewer people, productivity can be increased in a short time

DISADVANTAGES OF RAD MODEL

 It can't be used for smaller projects

 Not all applications are compatible with RAD

 When technical risk is high, it is not suitable

 If developers are not committed to delivering software on time, RAD projects can fail

 Reduced features due to time boxing, where features are pushed to a later version to finish a release in a short period

 Reduced scalability occurs because a RAD-developed application begins as a prototype and evolves into a finished application

 Progress and problems are hard to track, as there is no documentation to demonstrate what has been done

 Requires highly skilled designers or developers

SPIRAL MODEL

The Spiral Model is a risk-driven software development process model. It is a combination of the waterfall model and the iterative model. The Spiral Model helps to adopt software development elements of multiple process models for a software project, based on unique risk patterns, ensuring an efficient development process.

Each phase of the spiral model begins with a design goal and ends with the client reviewing the progress. The spiral model in software engineering was first described by Barry Boehm in his 1986 paper.

The development process in the Spiral model starts with a small set of requirements and goes through each development phase for that set of requirements. The software engineering team adds functionality for additional requirements in ever-increasing spirals until the application is ready for the production phase.

SPIRAL MODEL PHASES

SPIRAL MODEL   ACTIVITIES PERFORMED DURING PHASE
PHASES

Planning       It includes estimating the cost, schedule and resources for the
               iteration. It also involves understanding the system
               requirements for continuous communication between the system
               analyst and the customer

Risk Analysis  Identification of potential risks is done, while a risk
               mitigation strategy is planned and finalized

Engineering    It includes testing, coding and deploying the software at the
               customer site

Evaluation     Evaluation of the software by the customer. It also includes
               identifying and monitoring risks such as schedule slippage and
               cost overrun

WHEN TO USE THE SPIRAL MODEL?

 When the project is large

 When releases are required to be frequent

 When creation of a prototype is applicable

 When risk and cost evaluation is important

 For medium- to high-risk projects

 When requirements are unclear and complex

 When changes may be required at any time

 When long-term project commitment is not feasible due to changes in economic priorities

SPIRAL MODEL ADVANTAGES AND DISADVANTAGES

ADVANTAGES

 Additional functionality or changes can be added at a later stage

 Cost estimation becomes easy, as prototype building is done in small fragments

 Continuous or repeated development helps in risk management

 Development is fast and features are added in a systematic way

 There is always space for customer feedback

DISADVANTAGES

 Risk of not meeting the schedule or budget

 Spiral development works best for large projects only and also demands risk-assessment expertise

 For its smooth operation, the spiral model protocol needs to be followed strictly

 Documentation is more extensive, as the model has intermediate phases

 Spiral development is not advisable for smaller projects; it might cost them a lot

THE INCREMENTAL MODEL

The Incremental Model is a process of software development where requirements are broken down into multiple standalone modules of the software development cycle. Incremental development is done in steps: analysis, design, implementation, testing/verification, and maintenance.

Each iteration passes through the requirements, design, coding and testing phases. And each
subsequent release of the system adds function to the previous release until all designed
functionality has been implemented.

The system is put into production when the first increment is delivered. The first increment is often a core product where the basic requirements are addressed, and supplementary features are added in the next increments. Once the core product is analysed by the client, a plan is developed for the next increment.

CHARACTERISTICS OF AN INCREMENTAL MODEL INCLUDE

 System development is broken down into many mini development projects

 Partial systems are successively built to produce a final total system

 The highest-priority requirement is tackled first

 Once a requirement is developed, the requirements for that increment are frozen

INCREMENTAL PHASES    ACTIVITIES PERFORMED IN INCREMENTAL PHASES

Requirement Analysis  Requirements and specification of the software are
                      collected

Design                Some high-end functions are designed during this stage

Code                  Coding of the software is done during this stage

Test                  Once the system is deployed, it goes through the testing
                      phase
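The increment-by-increment release pattern described above can be sketched as follows. This is an illustrative Python sketch; the priority numbers and feature names are invented for the example:

```python
# Rough sketch of incremental delivery: requirements are split into standalone
# increments, the highest-priority ones are built first, and each release adds
# function to the previous release.

def plan_increments(requirements, size=2):
    ordered = sorted(requirements, key=lambda r: r["priority"])  # highest first
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def deliver(increments):
    releases, product = [], []
    for increment in increments:
        product = product + [r["name"] for r in increment]  # frozen once built
        releases.append(list(product))  # each release extends the previous one
    return releases

reqs = [
    {"name": "login", "priority": 1},    # core product, tackled first
    {"name": "reports", "priority": 3},
    {"name": "search", "priority": 2},
    {"name": "export", "priority": 4},   # supplementary feature
]
releases = deliver(plan_increments(reqs))
# releases[0] is the core product; the last release is the full system
```

Sorting by priority before slicing reflects the characteristic above that the highest-priority requirement is tackled first, and appending rather than rewriting reflects that each increment is frozen once developed.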

WHEN TO USE INCREMENTAL MODELS?

 Requirements of the system are clearly understood

 When demand for an early release of a product arises

 When the software engineering team is not very well skilled or trained

 When high-risk features and goals are involved

 Such a methodology is more in use for web applications and product-based companies

ADVANTAGES AND DISADVANTAGES OF INCREMENTAL MODEL

ADVANTAGES

 The software will be generated quickly during the software life cycle

 It is flexible and less expensive to change requirements and scope

 Changes can be made throughout the development stages

 This model is less costly compared to others

 A customer can respond to each build

 Errors are easy to identify

DISADVANTAGES

 It requires good planning and design

 Problems might arise with the system architecture, as not all requirements are collected up front for the entire software lifecycle

 Each iteration phase is rigid and does not overlap with the others

 Rectifying a problem in one unit requires correction in all the units and consumes a lot of time

AGILE METHODOLOGY

AGILE methodology is a practice that promotes continuous iteration of development and testing throughout the software development lifecycle of the project. In the Agile model, development and testing activities are concurrent, unlike in the Waterfall model.

The Agile software development methodology is one of the simplest and most effective processes for turning a vision for a business need into software solutions. Agile is a term used to describe software development approaches that employ continual planning, learning, improvement, team collaboration, evolutionary development, and early delivery. It encourages flexible responses to change.

Agile software development emphasizes four core values:

1. Individual and team interactions over processes and tools

2. Working software over comprehensive documentation

3. Customer collaboration over contract negotiation

4. Responding to change over following a plan

KEY AGILE SOFTWARE DEVELOPMENT LIFECYCLE PHASES

When you break it down to the core concepts, agile development is not that difficult. And while it may seem wasteful with the number of meetings involved, it saves a lot of time by optimizing the development tasks and reducing the errors that the planning stages can introduce.

1: Requirements

Before a Product Owner can even start designing a project, they need to create the initial
documentation that will list the initial requirements. They are:

 The end result the project is going to achieve. For example, a text editor;

 The features that it will support. For example, different font sizes;

 The features that it will not initially support. For example, adding animations to the text
or ability to embed video;

A general recommendation is to pare these initial requirements down as far as one can, adding only the definitely necessary features and ignoring ones that won't be used often. Developers can work on them later, once the app is deployed and the core features work well.

NOTE: If developers choose to ignore this stage, they are prone to feature creep: a situation in which new non-crucial features are constantly being added to the project, taking developers' time away from the important tasks.
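The initial requirements document described above could be modelled with a simple structure, using the text-editor example from the list. The class and field names here are illustrative assumptions, not part of any Agile standard:

```python
# Hypothetical shape for the initial requirements document: the end result,
# the features it will support, and the features explicitly out of scope.
from dataclasses import dataclass, field

@dataclass
class Requirements:
    end_result: str
    supported: list = field(default_factory=list)  # necessary features only
    excluded: list = field(default_factory=list)   # deferred for now

    def allows(self, feature):
        # a simple guard against feature creep: anything not listed as
        # supported is deferred until the core features work well
        return feature in self.supported

doc = Requirements(
    end_result="a text editor",
    supported=["different font sizes"],
    excluded=["animations", "embedded video"],
)
```

Recording excluded features explicitly, rather than just omitting them, gives the team something concrete to point at when a non-crucial feature request arrives mid-iteration.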

On further iterations, the Client and the Product Owner review the requirements and make them
more relevant.

2: Design

There are two ways to approach design in the software development — one is the visual design
and the other is the architectural structure of the app.

Software Design

During the first iteration, the Product Owner assembles their development team and introduces
the requirements created during the previous stage. The team then discusses how to tackle these
requirements, and proposes the tools needed to achieve the best result. For example, the team
defines the programming language, frameworks, and libraries that the project is going to be using.

On further iterations, the developers discuss the feature implementation and the internal structure
of the code.

UI/UX Design

During this SDLC stage, the designers create a rough mock-up of the UI. If the product is
consumer-grade, the user interface and user experience are most important. So it’s generally a
good idea to review the possible competitors to see what they are doing right — and especially
what they are doing wrong.

Further iterations are spent refining the initial design and/or reworking it to suit the new features.

3. Development and Coding

The development phase is about writing code and converting design documentation into the
actual software within the software development process. This stage of SDLC is generally the
longest as it’s the backbone of the whole process.

There aren’t many changes between the iterations here.

4. Integration and Testing

This stage is spent on making sure that the software is bug-free and compatible with everything
else that the developers have written before. The Quality Assurance team conducts a series of
tests in order to ensure the code is clean and business goals of the solution are met.

During the further iterations of this SDLC stage, the testing becomes more involved and
accounts not only for functionality testing, but also for systems integration, interoperability, and
user acceptance testing, etc.

5. Implementation and Deployment

The application is deployed on the servers and provided to the customers — either for the demo
or the actual use. Further iterations update the already installed software, introducing new
features and resolving bugs.

6. Review

Once all previous development phases are complete, the Product Owner gathers the
Development Team once again and reviews the progress made towards completing the
requirements. The team introduces their ideas toward resolving the problems that arose during
the previous phases and the Product Owner takes their propositions into consideration.

Afterwards, the Agile software development lifecycle phases start anew — either with a new
iteration or by moving toward the next stage and scaled Agile.

ADVANTAGES OF AGILE METHODOLOGY

 Flexibility and adaptability. All projects run in short-term agile methodology phases,
with each of them taking from 2 to 4 weeks. Along with the fact that the methodology
does not require extensive planning before project launch, these short phases give teams
numerous opportunities to introduce adjustments to their projects, if needed. This comes
in handy when a team needs to build a product they have never tried developing before
and are not sure what to expect from the project.

 Transparency and communication. These two components are paramount for an
effective agile methodology process. All team members are encouraged to sustain
constant communication, which helps them be on the same page regarding all project
plans.

 Stakeholder engagement. As a rule, clients are welcome to take an active part in all
agile methodology steps, which does not only increase stakeholder satisfaction rates but
also makes it easier for the team to understand the client’s vision and meet their
expectations. This, in turn, increases successful project completion rates.

 Early delivery. Instead of waiting for months to see some solid results, both the team
and the client can see their first accomplishments shortly after the launch of the project.
New features get delivered quickly and frequently, proving the value of agile software
development methodology.

 Predictable costs. Same as with delivery, the client gets to learn how much each feature
will cost because of the short development duration. At the beginning of each phase (or
sprint, as they are often called in agile methodology Scrum), everyone gets to learn time
and cost estimation for the upcoming feature.

 Lower risks of missed goals. Instead of settling for a single ultimate goal, teams get to
develop multiple smaller goals they need to reach as the project unravels. With each agile
project being broken down into stages, goals feel reachable and realistic from day one.

DISADVANTAGES OF AGILE DEVELOPMENT METHODOLOGY

 In the case of some software deliverables, especially the large ones, it is difficult to assess
the effort required at the beginning of the software development life cycle.

 This methodology focuses on working software rather than documentation, hence it may
result in a lack of documentation.

 The project can easily get taken off track if the customer representative is not clear
about the final outcome they want.

 Only senior programmers are capable of making the kinds of decisions required during the
development process. Hence it has no place for newbie programmers unless they are combined
with experienced resources.

SYSTEMS ANALYSIS & DESIGN

System Analysis
System analysis is a process of gathering & interpreting facts, identifying problems and
decomposition of a system into its components. The main purpose of the system analysis is to
study different systems or its parts for identifying the objectives of the system. System analysis is
a problem-solving technique that helps in enhancing the system and ensuring that all the
components of the system work efficiently for accomplishing their purpose. Analysis mainly
helps in describing the tasks that should be performed by the system (Diab, 2016). System analysis is
a process that needs to be conducted before the development of software. It is usually done by
the systems analyst, a person who consults with the users or clients, asks them about the
requirements of their software, and checks whether those requirements are feasible. After
confirming all the details, the analyst divides the work among the teams, which then work to meet the
requirements of the client.

System Design
System design is a process that helps to plan a new business system or to replace an existing
system by defining its modules or components for satisfying the specific requirements. System
Design also focuses on methods for accomplishing the objective of the system. System Analysis
and Design mainly focuses on - Technology, Systems, Processes. Mainly there are two types of
system designs – logical design and physical design.
• Logical System Design – Logical design is the design of the various components of
the system. What will be the inputs and outputs of the system? How will data flow
through the system? What procedures will be followed? All these questions are answered in
logical system design. It is a virtual design on which the physical design of the system
is based. Structured design tools like Data Flow Diagrams (DFD), Entity Relationship
Diagrams (ER diagrams), Decision Trees, etc. are used in the logical design of the system.
• Physical System Design – In physical system design, the actual methods are implemented to
form the system. The main focus is on how the inputs will enter the system, how they will be
verified, and how the output will be delivered to its destination. The procedures and processes
specified in the logical design are implemented here.

SYSTEM FEASIBILITY

A feasibility study is an analysis done to determine the viability of a project from an economical,
legal, and technical perspective. Simply put, it gives us an insight into whether a project is
doable or worth the investment.

A well-designed feasibility study should offer insights on the description of the project,
resource allocation, accounting statements, financial data, legal requirements, and tax obligations.

It helps to determine whether the project is both possible and profitable for the company to
undertake. Hence, this study is mandatorily done before technical development and project
execution.

The feasibility study is a management-oriented activity. The objective of a feasibility study is to
find out if an information system project can be done and to suggest possible alternative
solutions.
Projects are initiated for two broad reasons:

1. Problems that lend themselves to systems solutions

2. Opportunities for improving through: (a) upgrading systems (b) altering systems (c)
installing new systems

A feasibility study can be considered a preliminary investigation that helps management
decide whether the proposed system is feasible for development or not.

 It identifies the possibility of improving an existing system or developing a new system,
and produces refined estimates for further development of the system.

 It is used to obtain the outline of the problem and decide whether a feasible or appropriate
solution exists or not.

 The main objective of a feasibility study is to acquire problem scope instead of solving
the problem.

 The output of a feasibility study is a formal system proposal that acts as a decision document
and includes the complete nature and scope of the proposed system.

STEPS INVOLVED IN FEASIBILITY ANALYSIS

The following steps are to be followed while performing feasibility analysis −

 Form a project team and appoint a project leader.

 Develop system flowcharts.

 Identify the deficiencies of current system and set goals.

 Enumerate the alternative solutions or potential candidate systems to meet the goals.

 Determine the feasibility of each alternative such as technical feasibility, operational
feasibility, etc.

 Weigh the performance and cost effectiveness of each candidate system.

 Rank the other alternatives and select the best candidate system.

 Prepare a system proposal of final project directive to management for approval.
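The weighing and ranking steps above can be sketched in code. The criteria, weights, and candidate scores below are purely illustrative assumptions, not values from the text:

```python
# Hypothetical weighted-scoring sketch for comparing candidate systems.
# Criteria weights and the 1-10 scores are made-up illustrative values.

def rank_candidates(candidates, weights):
    """Sort candidate systems by weighted score, best first."""
    def total(scores):
        return sum(weights[c] * scores[c] for c in weights)
    return sorted(candidates.items(), key=lambda kv: total(kv[1]), reverse=True)

weights = {"technical": 0.4, "operational": 0.3, "cost": 0.3}
candidates = {
    "Upgrade existing system": {"technical": 7, "operational": 8, "cost": 9},
    "Build new system":        {"technical": 9, "operational": 6, "cost": 4},
    "Buy off-the-shelf":       {"technical": 6, "operational": 7, "cost": 8},
}

for name, scores in rank_candidates(candidates, weights):
    print(name)
```

The top-ranked alternative would then feed into the system proposal submitted to management for approval.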

TYPES OF FEASIBILITIES

Economic Feasibility

For any system if the expected benefits equal or exceed the expected costs, the system can be
judged to be economically feasible. In economic feasibility, cost benefit analysis is done in
which expected costs and benefits are evaluated. Economic analysis is used for evaluating the
effectiveness of the proposed system.

In economic feasibility, the most important element is cost-benefit analysis. As the name suggests,
it is an analysis of the costs to be incurred in the system and the benefits derivable from the system.

 It evaluates the effectiveness of the candidate system by using the cost/benefit analysis
method.

 It demonstrates the net benefit from the candidate system in terms of benefits and costs to
the organization.

 The main aim of an economic feasibility study is to estimate the economic
requirements of the candidate system before investment funds are committed to the proposal.

 It prefers the alternative which will maximize the net worth of the organization through the
earliest and highest return of funds, along with the lowest level of risk involved in developing the
candidate system.
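As a minimal sketch of how such a cost/benefit comparison might be computed — all monetary figures below are invented for illustration:

```python
# Cost-benefit sketch: net benefit, return on investment, and payback period.
# All figures are hypothetical example values.

def cost_benefit(total_costs, total_benefits):
    """Return (net benefit, ROI) for the candidate system."""
    net_benefit = total_benefits - total_costs
    roi = net_benefit / total_costs
    return net_benefit, roi

def payback_period(initial_cost, annual_benefit):
    """Years until cumulative benefits recover the initial investment."""
    return initial_cost / annual_benefit

net, roi = cost_benefit(total_costs=50_000, total_benefits=80_000)
print(net, roi)                        # → 30000 0.6
print(payback_period(50_000, 20_000))  # → 2.5 (years)
```

A system whose expected benefits equal or exceed its expected costs (net benefit ≥ 0) would be judged economically feasible under the rule stated above.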

Technical Feasibility

A large part of determining resources has to do with assessing technical feasibility. It considers
the technical requirements of the proposed project. The technical requirements are then
compared to the technical capability of the organization. The systems project is considered
technically feasible if the internal technical capability is sufficient to support the project
requirements.
The analyst must find out whether current technical resources can be upgraded or added to in a
manner that fulfills the request under consideration. This is where the expertise of system
analysts is beneficial, since using their own experience and their contact with vendors they will
be able to answer the question of technical feasibility.

 It investigates the technical feasibility of each implementation alternative.

 It analyses and determines whether the solution can be supported by existing technology
or not.

 The analyst determines whether current technical resources can be upgraded or added to in a
way that fulfils the new requirements.

 It assesses to what extent the candidate system can support the proposed technical
enhancements.

Operational Feasibility

Operational feasibility is dependent on human resources available for the project and involves
projecting whether the system will be used if it is developed and implemented.

Operational feasibility is a measure of how well a proposed system solves the problems, and
takes advantage of the opportunities identified during scope definition and how it satisfies the
requirements identified in the requirements analysis phase of system development.
Operational feasibility reviews the willingness of the organization to support the proposed
system. This is probably the most difficult of the feasibilities to gauge. In order to determine this
feasibility, it is important to understand the management commitment to the proposed project. If
the request was initiated by management, it is likely that there is management support and the
system will be accepted and used. However, it is also important that the employee base will be
accepting of the change.

 It determines whether the system will operate effectively once it is developed and
implemented.

 It ensures that management supports the proposed system and that its operation is
feasible in the current organizational environment.

 It analyses whether the users will be affected by, and will accept, the modified or new
business methods that influence the possible system benefits.

 It also ensures that the computer resources and network architecture of candidate system
are workable.

Legal Feasibility

It includes studies concerning contracts, liability, violations, and other legal traps frequently
unknown to the technical staff.

Behavioral Feasibility

 It evaluates and estimates the user attitude or behavior towards the development of new
system.

 It helps in determining whether the system requires special efforts to educate and retrain
employees, or involves transfers and changes in employees’ job status due to new ways of
conducting business.

Schedule Feasibility

 It ensures that the project should be completed within given time constraint or schedule.

 It also verifies and validates whether the deadlines of the project are reasonable or not.

IMPORTANCE OF FEASIBILITY STUDY

The importance of a feasibility study is based on organizational desire to “get it right” before
committing resources, time, or budget. A feasibility study might uncover new ideas that could
completely change a project’s scope. It’s best to make these determinations in advance, rather
than jumping in only to learn that the project won’t work. Conducting a feasibility study is always
beneficial to the project as it gives you and other stakeholders a clear picture of the proposed
project.

Below are some key benefits of conducting a feasibility study:

 Get a clear-cut idea of whether the project is likely to be successful, before allocating
budget, manpower, and time.

 Enhances the project teams’ efficiency and focus

 Helps detect and capitalize on new opportunities

 Substantiates with evidence of why and how a project should be executed

 Streamlines the business alternatives

 Diagnoses errors and aids in troubleshooting them

 Prevents threats from occurring and helps in risk mitigation

 Gives valuable insights to both the team and stakeholders associated with the project

Apart from the approaches to feasibility study listed above, some projects also require other
constraints to be analysed -

 Internal Project Constraints: Technical, Technology, Budget, Resource, etc.

 Internal Corporate Constraints: Financial, Marketing, Export, etc.

 External Constraints: Logistics, Environment, Laws, and Regulations, etc.

DATA COLLECTION

Data is one of the most valuable resources today’s businesses have. The more information you
have about your customers, the better you can understand their interests, wants and needs. This
enhanced understanding helps you meet and exceed your customers’ expectations and allows you
to create messaging and products that appeal to them.

Essentially, collecting data means putting your design for collecting information into
operation. You’ve decided how you’re going to get information – whether by direct observation,
interviews, surveys, experiments and testing, or other methods – and now you and/or other
observers have to implement your plan. There’s a bit more to collecting data, however. If you are
conducting observations, for example, you’ll have to define what you’re observing and arrange
to make observations at the right times, so you actually observe what you need to. You’ll have to
record the observations in appropriate ways and organize them so they’re optimally useful.

Recording and organizing data may take different forms, depending on the kind of information
you’re collecting. The way you collect your data should relate to how you’re planning to analyze
and use it. Regardless of what method you decide to use, recording should be done concurrent
with data collection if possible, or soon afterwards, so that nothing gets lost and memory doesn’t
fade.

Primary Data Collection Definition

The term “primary data” refers to data you collect yourself, rather than data you gather after
another party initially recorded it. Primary data is information obtained directly from the source.
You will be the first party to use this exact set of data.

When it comes to data businesses collect about their customers, primary data is also typically
first-party data. First-party data is the information you gather directly from your audience. It
could include data you gathered from online properties, data in your customer relationship
management system or non-online data you collect from your customers through surveys and
various other sources.

First-party data differs from second-party and third-party data. Second-party data is the first-
party data of another company. You can purchase second-party data directly from the
organization that collected it or buy it in a private marketplace. Third-party data is information a
company has pulled together from numerous sources. You can buy and sell this kind of data on a
data exchange, and it typically contains a large number of data points.

Because first-party data comes directly from your audience, you can have high confidence in its
accuracy, as well as its relevance to your business.

Second-party data has many of the same positive attributes as first-party data. It comes directly
from the source, so you can be confident in its accuracy, but it also gives you insights you
couldn’t get with your first-party data. Third-party data offers much more scale than any other
type of data, which is its primary benefit.

Different types of data can be useful in different scenarios. It can also be helpful to use different
types of data together. First-party data will typically be the foundation of your dataset. If your
first-party data is limited, though, you may want to supplement it with second-party or third-
party data. Adding these other types of data can increase the scale of your audience or help you
reach new audiences.

In this article, we’ll focus on primary data. Because it’s the kind of data you gather yourself, you
need a strategy for how to collect it.

Quantitative vs. Qualitative Data

You can divide primary data into two categories: quantitative and qualitative.

Quantitative data comes in the form of numbers, quantities and values. It describes things in
concrete and easily measurable terms. Examples include the number of customers who bought a
given product, the rating a customer gave a product out of five stars and the amount of time a
visitor spent on your website.

Because quantitative data is numeric and measurable, it lends itself well to analytics. When you
analyze quantitative data, you may uncover insights that can help you better understand your
audience. Because this kind of data deals with numbers, it is very objective and has a reputation
for reliability.

Qualitative data is descriptive, rather than numeric. It is less concrete and less easily measurable
than quantitative data. This data may contain descriptive phrases and opinions. Examples include
an online review a customer writes about a product, an answer to an open-ended survey question
about what type of videos a customer likes to watch online and the conversation a customer had
with a customer service representative.

Qualitative data helps explain the “why” behind the information quantitative data reveals. For
this reason, it is useful for supplementing quantitative data, which will form the foundation of
your data strategy. Because quantitative data is so foundational, this article will focus on
collection methods for quantitative primary data.
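Because quantitative data is numeric, it lends itself directly to summary statistics. A small sketch, using made-up ratings and visit times of the kind described above:

```python
# Summarizing quantitative primary data (hypothetical example values).
from statistics import mean

ratings = [5, 4, 4, 3, 5, 2, 4]           # star ratings out of 5
seconds_on_site = [120, 45, 300, 80, 95]  # time each visitor spent on site

print(round(mean(ratings), 2))            # average rating → 3.86
print(max(seconds_on_site))               # longest visit → 300
print(sum(1 for r in ratings if r >= 4))  # visitors rating 4+ stars → 5
```

Qualitative data — the written reviews behind those star ratings, for instance — would then explain *why* the numbers look the way they do.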

THERE ARE VARIOUS INFORMATION GATHERING TECHNIQUES

Interviewing

The systems analyst collects information from individuals or groups by interviewing. The analyst
can be formal, legalistic, play politics, or be informal; the success of an interview depends on the
skill of the analyst as interviewer.

It can be done in two ways −

 Unstructured Interview − The system analyst conducts question-answer session to
acquire basic information of the system.

 Structured Interview − It has standard questions which the user needs to respond to in either
a closed (objective) or open (descriptive) format.

Advantages of Interviewing

 This method is frequently the best source of gathering qualitative information.

 It is useful for those who do not communicate effectively in writing or who may not have
the time to complete a questionnaire.

 Information can easily be validated and cross checked immediately.

 It can handle the complex subjects.

 It is easy to discover key problems by seeking opinions.

 It bridges the gaps in the areas of misunderstandings and minimizes future problems.

Questionnaires

This method is used by the analyst to gather information about various issues of the system from
a large number of persons.

There are two types of questionnaires −

 Open-ended Questionnaires − It consists of questions that can be easily and correctly
interpreted. They can explore a problem and lead to a specific direction of answer.

 Closed-ended Questionnaires − It consists of questions that are used when the systems
analyst effectively lists all possible responses, which are mutually exclusive.

Advantages of questionnaires

 It is very effective in surveying interests, attitudes, feelings, and beliefs of users which
are not co-located.

 It is useful in situations where you need to know what proportion of a given group approves
or disapproves of a particular feature of the proposed system.

 It is useful to determine the overall opinion before giving any specific direction to the
system project.

 It is more reliable and provides high confidentiality of honest responses.

 It is appropriate for eliciting factual information and for statistical data collection, and
can be emailed or sent by post.

Review of Records, Procedures, and Forms

Review of existing records, procedures, and forms helps to seek insight into a system which
describes the current system capabilities, its operations, or activities.

Advantages

 It helps users to gain some knowledge about the organization or its operations by themselves
before they impose upon others.

 It helps in documenting current operations within short span of time as the procedure
manuals and forms describe the format and functions of present system.

 It can provide a clear understanding about the transactions that are handled in the
organization, identifying input for processing, and evaluating performance.

 It can help an analyst to understand the system in terms of the operations that must be
supported.

 It describes the problem, its affected parts, and the proposed solution.

Observation

This is a method of gathering information by noticing and observing the people, events, and
objects. The analyst visits the organization to observe the working of current system and
understands the requirements of the system.

Advantages

 It is a direct method for gleaning information.

 It is useful in situations where the authenticity of the data collected is in question, or when
the complexity of certain aspects of the system prevents a clear explanation by end-users.

 It produces more accurate and reliable data.

 It reveals aspects of the documentation that are incomplete or outdated.

Joint Application Development (JAD)

It is a technique developed by IBM which brings owners, users, analysts, designers, and
builders together to define and design the system using organized and intensive workshops. A
JAD-trained analyst, who has specialized skills, acts as facilitator for the workshop.

Advantages of JAD

 It saves time and cost by replacing months of traditional interviews and follow-up
meetings.

 It is useful in organizational culture which supports joint problem solving.

 It fosters formal relationships among multiple levels of employees.

 It can lead to development of design creatively.

 It allows rapid development and improves ownership of the information system.

Secondary Research or Background Reading

This method is widely used for gathering information by accessing material that has already been
collected. It includes any previously gathered information used by the marketer from any internal
or external source.

Advantages

 It is more openly accessible with the availability of the internet.

 It provides valuable information with low cost and time.

 It acts as a forerunner to primary research and aligns the focus of primary research.

 It helps the researcher decide whether the research is worthwhile, since it is available
together with the procedures used and the issues encountered in collecting it.

REQUIREMENTS ANALYSIS

Requirements analysis is a very critical process that enables the success of a system or software
project to be assessed. Requirements are generally split into two types: functional and non-
functional requirements.

Clearly defined requirements are essential signs on the road that leads to a successful project.
They establish a formal agreement between a client and a provider that they are both working to
reach the same goal. High-quality, detailed requirements also help mitigate financial risks and
keep the project on a schedule. According to the Business Analysis Body of
Knowledge definition, requirements are a usable representation of a need.

Creating requirements is a complex task as it includes a set of processes such as elicitation,
analysis, specification, validation, and management. In this article, we’ll discuss the main types
of requirements for software products and provide a number of recommendations for their use.

Classification of requirements

Prior to discussing how requirements are created, let’s differentiate their types.

Business requirements. These include high-level statements of goals, objectives, and needs.

Stakeholder requirements. The needs of discrete stakeholder groups are also specified to define
what they expect from a particular solution.

Solution requirements. Solution requirements describe the characteristics that a product must
have to meet the needs of the stakeholders and the business itself.

 Nonfunctional requirements describe the general characteristics of a system. They are
also known as quality attributes.

 Functional requirements describe how a product must behave and what its features and
functions are.

Transition requirements. An additional group of requirements defines what is needed from an
organization to successfully move from its current state to its desired state with the new product.

FUNCTIONAL VS NON FUNCTIONAL REQUIREMENTS

FUNCTIONAL REQUIREMENTS

A functional requirement is any requirement which specifies what the system should do.

In other words, a functional requirement will describe a particular behaviour or function of the
system when certain conditions are met, for example: “Send email when a new customer signs
up” or “Open a new account”.

A functional requirement for an everyday object like a cup would be: “ability to contain tea or
coffee without leaking”.

Some of the more typical functional requirements include:

Business requirements. They contain the ultimate goal, such as an order system, an online
catalogue, or a physical product. It can also include things like approval workflows and
authorization levels.

Administrative functions. They are the routine things the system will do, such as reporting.

User requirements. They are what the user of the system can do, such as place an order or
browse the online catalogue.

System requirements. These are things like software and hardware specifications, system
responses, or system actions.

So what about Non-Functional Requirements? What are those, and how are they different?

Simply put, the difference is that non-functional requirements describe how the system
works, while functional requirements describe what the system should do.

The definition of a non-functional requirement is that it essentially specifies how the system
should behave and that it is a constraint upon the system’s behaviour. One could also think of
non-functional requirements as quality attributes of a system.

EXAMPLE OF FUNCTIONAL REQUIREMENTS

 The software automatically validates customers against the ABC Contact Management
System

 The Sales system should allow users to record customers’ sales

 The background color for all windows in the application will be blue and have a
hexadecimal RGB color value of 0x0000FF.

 Only Managerial level employees have the right to view revenue data.

 The software system should be integrated with banking API

 The software system should pass Section 508 accessibility requirement.
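Requirements phrased like the examples above translate directly into testable rules. A sketch for the revenue-access requirement in the list — the role names and function here are hypothetical, not part of any real system:

```python
# Functional requirement: "Only Managerial level employees have the
# right to view revenue data", expressed as a testable access rule.

def can_view_revenue(employee_role: str) -> bool:
    """Grant revenue-data access to managerial-level employees only."""
    return employee_role == "manager"

assert can_view_revenue("manager") is True   # manager may view revenue
assert can_view_revenue("clerk") is False    # non-manager may not
```

Writing the requirement as an executable check like this makes it unambiguous whether the delivered system satisfies it.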

ADVANTAGES OF FUNCTIONAL REQUIREMENT

Here are the pros/advantages of creating a typical functional requirement document:

 Helps you to check whether the application is providing all the functionalities that were
mentioned in the functional requirement of that application

 A functional requirement document helps you to define the functionality of a system or
one of its subsystems.

 Functional requirements along with requirement analysis help identify missing
requirements. They help clearly define the expected system service and behavior.

 Errors caught in the Functional requirement gathering stage are the cheapest to fix.

 Support user goals, tasks, or activities for easy project management

 Functional requirement can be expressed in Use Case form or user story as they exhibit
externally visible functional behavior.

MISTAKES WHILE CREATING A FUNCTIONAL REQUIREMENT

Here are some common mistakes made while creating a functional requirement document:

 Putting in unjustified extra information that may confuse developers

 Not putting sufficient detail in the requirement document.

 Adding rules, examples, scoping statements, or objectives: anything except the
requirement itself.

 Leaving out a piece of important information that is an absolute must to fully, accurately, and
definitively state the requirement.

 Some professionals start to defend the requirements they have documented when the
requirement is modified, instead of seeking the correct facts.

 Requirements which are not mapped to an objective or principle.

NON-FUNCTIONAL REQUIREMENTS

A non-functional requirement is any requirement that specifies how the system performs a
certain function.

In other words, a non-functional requirement will describe how a system should behave and what
limits there are on its functionality.

Non-functional requirements cover all the remaining requirements which are not covered by the
functional requirements. They specify criteria that judge the operation of a system, rather than
specific behaviours, for example: “Modified data in a database should be updated for all users
accessing it within 2 seconds.”

A non-functional requirement for the cup mentioned previously would be: “contain hot liquid
without heating up to more than 45°C”.

Even when the non-functional requirements are not met, the basic functionality will not be
impacted.

If the functionality of the product is not dependent on non-functional requirements then why are
they important? The answer is in usability. Non-functional requirements affect the user
experience as they define a system’s behavior, features, and general characteristics.

Non-functional requirements when defined and executed well will help to make the system easy
to use and enhance the performance.

Non-functional requirements focus on user expectations, as they are product properties.

Let’s take an example of a functional requirement. A system loads a webpage when someone
clicks on a button. The related non-functional requirement specifies how fast the webpage must
load. A delay in loading will create a negative user experience and poor quality of the system
even though the functional requirement is fully met.

Some typical non-functional requirements are:

Usability

Usability defines how difficult it will be for a user to learn and operate the system. Usability can
be assessed from different points of view:

Efficiency of use: the average time it takes to accomplish a user’s goals, how many tasks a user
can complete without any help, the number of transactions completed without errors, etc.

Intuitiveness: how simple it is to understand the interface, buttons, headings, etc.

Low perceived workload: how many attempts are needed by users to accomplish a particular
task.

Example: Usability requirements can consider language barriers and localization tasks: People
with no understanding of French must be able to use the product. Or you may set accessibility
requirements: Keyboard users who navigate a website using <tab>, must be able to reach the
“Add to cart” button from a product page within 15 <tab> clicks.

Security

Security requirements ensure that the software is protected from unauthorized access to the
system and its stored data. It considers different levels of authorization and authentication across
different user roles. For instance, data privacy is a security characteristic that describes who can
create, see, copy, change, or delete information. Security also includes protection against viruses
and malware attacks.

Example: Access permissions for the particular system information may only be changed by the
system’s data administrator.

Reliability

Reliability defines how likely it is for the software to work without failure for a given period of
time. Reliability decreases because of bugs in the code, hardware failures, or problems with other
system components. To measure software reliability, you can count the percentage of operations
that are completed correctly or track the average period of time the system runs before failing.

Example: The database update process must roll back all related updates when any update fails.
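This rollback requirement can be sketched as a database transaction. The sketch below is illustrative only: it uses Python's sqlite3 module with a hypothetical accounts table, where a "transfer" is two related updates that must succeed or fail together.

```python
import sqlite3

# Hypothetical schema: two account rows whose balances are updated together.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Apply two related updates atomically: if either fails, roll back both."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            balance = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                   (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")  # triggers the rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, 1, 2, 30)   # succeeds: both updates are committed
transfer(conn, 1, 2, 999)  # fails: the first update is rolled back as well
```

After the failed call, account 1 still holds 70 and account 2 holds 80: the partial update never becomes visible to other users.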

Performance

Performance is a quality attribute that describes the responsiveness of the system to various user
interactions with it. Poor performance leads to negative user experience. It also jeopardizes
system safety when it is overloaded.

Example: The front-page load time must be no more than 2 seconds for users that access the
website using an LTE mobile connection.

Availability

Availability is gauged by the period of time that the system’s functionality and services are
available for use with all operations. So, scheduled maintenance periods directly influence this
parameter. And it’s important to define how the impact of maintenance can be minimized. When
writing the availability requirements, the team has to define the most critical components of the
system that must be available at all times. You should also prepare user notifications in case the
system or one of its parts becomes unavailable.

Example: New module deployment mustn't impact the availability of the front page, product
pages, and checkout pages, and mustn't take longer than one hour. The rest of the pages that
may experience problems must display a notification with a timer showing when the system is
going to be up again.

Scalability

Scalability requirements describe how the system must grow without negative influence on its
performance. This means serving more users, processing more data, and doing more transactions.
Scalability has both hardware and software implications. For instance, you can increase
scalability by adding memory, servers, or disk space. On the other hand, you can compress data,
use optimizing algorithms, etc.

Example: The website must be scalable enough to support 200,000 users at a time.

As said above, non-functional requirements specify the system’s ‘quality characteristics’ or
‘quality attributes’.

Many different stakeholders have a vested interest in getting the non-functional requirements
right particularly in the case of large systems where the buyer of the system is not necessarily
also the user of the system.

The importance of non-functional requirements is therefore not to be trifled with. One way of
ensuring that as few non-functional requirements as possible are left out is to organize them
into non-functional requirement groups, such as usability, security, reliability, and performance.

EXAMPLES OF NON-FUNCTIONAL REQUIREMENTS


Here, are some examples of non-functional requirement:

1. Users must change the initially assigned login password immediately after the first
successful login. Moreover, the initial password should never be reused.

2. Employees should never be allowed to update their salary information. Any such attempt
should be reported to the security administrator.

3. Every unsuccessful attempt by a user to access an item of data shall be recorded on an
audit trail.

4. A website should be capable enough to handle 20 million users without affecting its
performance.

5. The software should be portable, so moving from one OS to another does not create any
problem.

6. Privacy of information, the export of restricted technologies, intellectual property rights,
etc. should be audited.

ADVANTAGES OF NON-FUNCTIONAL REQUIREMENT

Benefits/pros of non-functional requirements are:

 They ensure the software system follows legal and compliance rules.

 They ensure the reliability, availability, and performance of the software system.

 They ensure good user experience and ease of operating the software.

 They help in formulating the security policy of the software system.

DISADVANTAGES OF NON-FUNCTIONAL REQUIREMENT

Cons/drawbacks of non-functional requirements are:

 Non-functional requirements may affect various high-level software subsystems.

 They require special consideration during the software architecture/high-level design
phase, which increases costs.

 Their implementation does not usually map to a specific software subsystem.

 It is tough to modify non-functional requirements once you pass the architecture phase.

DIFFERENCE BETWEEN FUNCTIONAL AND NON-FUNCTIONAL
REQUIREMENTS:

Functional Requirements                          Non-Functional Requirements

They define a system or its components.          They define the quality attributes of a system.

It specifies, “What the system should do?”       It specifies, “How should the system fulfil the
                                                 functional requirements?”

Functional requirements are specified by         Non-functional requirements are specified by
the user.                                        technical people, e.g. architects, technical
                                                 leaders, and software developers.

It is mandatory to meet these requirements.      It is not mandatory to meet these requirements.

It is captured in use cases.                     It is captured as a quality attribute.

Defined at a component level.                    Applied to the system as a whole.

Helps you to verify the functionality of         Helps you to verify the performance of the
the software.                                    software.

Functional testing (System, Integration,         Non-functional testing (Performance, Stress,
End to End, API testing, etc.) is done.          Usability, Security testing, etc.) is done.

Usually easy to define.                          Usually more difficult to define.

HOW TO GATHER FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS?

A guided brainstorming session is one of the best ways to gather requirements by getting all
stakeholders together. You should include user representatives, who are the best sources of
non-functional requirements.

HOW TO WRITE FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS?

There are different ways to write functional and non-functional requirements.

The most common way to write functional and non-functional requirements is through
a Requirements Specification Document. It is a written description of the required functionality.

It states the project objective and includes an overview of the project to provide context, along
with any constraints and assumptions. The requirements specification document should include
visual representations of the requirements to help non-technical stakeholders understand the
scope.

Closely related to a requirements specification document is a Work Breakdown Structure, or
WBS. This breaks down the entire process into its components by “decomposing” the
requirements into their elements until they cannot be broken down any further.

Another approach is User Stories. They describe the functionality from the perspective of the
end user and state exactly what the user wants the system to do.

It effectively states “As a <type of user>, I want <goal> so that <reason>”. One benefit of user
stories is that they do not require much technical knowledge to write. User stories can also be
used as a precursor to a requirements specification document by helping define user needs.

Use Cases are similar to user stories in that no technical knowledge is necessary. Use cases
simply describe in detail what a user is doing as they execute a task. A use case might be
“purchase product”, and describes from the standpoint of the user each step in the process of
making the purchase.

SYSTEM MODELLING

System modelling is the process of developing abstract models of a system, with each model
presenting a different view or perspective of that system. It is about representing a system using
some kind of graphical notation, which is now almost always based on notations in the Unified
Modelling Language (UML). Models help the analyst to understand the functionality of the
system; they are used to communicate with customers.

Models can explain the system from different perspectives:

 An external perspective, where you model the context or environment of the system.

 An interaction perspective, where you model the interactions between a system and its
environment, or between the components of a system.

 A structural perspective, where you model the organization of a system or the structure
of the data that is processed by the system.

 A behavioral perspective, where you model the dynamic behavior of the system and how
it responds to events.

Five types of UML diagrams that are the most useful for system modeling:

 Activity diagrams, which show the activities involved in a process or in data processing.

 Use case diagrams, which show the interactions between a system and its environment.

 Sequence diagrams, which show interactions between actors and the system and between
system components.

 Class diagrams, which show the object classes in the system and the associations
between these classes.

 State diagrams, which show how the system reacts to internal and external events.

Models of both new and existing system are used during requirements engineering. Models of
the existing systems help clarify what the existing system does and can be used as a basis for
discussing its strengths and weaknesses. These then lead to requirements for the new system.
Models of the new system are used during requirements engineering to help explain the
proposed requirements to other system stakeholders. Engineers use these models to discuss
design proposals and to document the system for implementation.

STRUCTURED METHODS

In software engineering, structured analysis (SA) and structured design (SD) are methods for
analyzing business requirements and developing specifications for converting practices
into computer programs, hardware configurations, and related manual procedures.

Structured analysis and design techniques are fundamental tools of systems analysis. They
developed from classical systems analysis of the 1960s and 1970s.

CONTEXT MODELS

Context models are used to illustrate the operational context of a system - they show what lies
outside the system boundaries. Social and organizational concerns may affect the decision on
where to position system boundaries. Architectural models show the system and its relationship
with other systems.

System boundaries are established to define what is inside and what is outside the system. They
show other systems that are used or depend on the system being developed. The position of the
system boundary has a profound effect on the system requirements. Defining a system boundary
is a political judgment since there may be pressures to develop system boundaries that
increase/decrease the influence or workload of different parts of an organization.

Context models simply show the other systems in the environment, not how the system being
developed is used in that environment. Process models reveal how the system being developed
is used in broader business processes. UML activity diagrams may be used to define business
process models.

The example below shows a UML activity diagram describing the process of involuntary
detention and the role of MHC-PMS (mental healthcare patient management system) in it.

INTERACTION MODELS

Types of interactions that can be represented in a model:

 Modeling user interaction is important as it helps to identify user requirements.

 Modeling system-to-system interaction highlights the communication problems that
may arise.

 Modeling component interaction helps us understand if a proposed system structure is
likely to deliver the required system performance and dependability.

Use cases were developed originally to support requirements elicitation and now incorporated
into the UML. Each use case represents a discrete task that involves external interaction with a
system. Actors in a use case may be people or other systems. Use cases can be represented using
a UML use case diagram and in a more detailed textual/tabular format.

Simple use case diagram:

Use case description in a tabular format:

Use case title: Transfer data

Description: A receptionist may transfer data from the MHC-PMS to a general patient record
database that is maintained by a health authority. The information transferred may either be
updated personal information (address, phone number, etc.) or a summary of the patient's
diagnosis and treatment.

Actor(s): Medical receptionist, patient records system (PRS)

Preconditions: Patient data has been collected (personal information, treatment summary);
the receptionist must have appropriate security permissions to access the patient information
and the PRS.

Postconditions: PRS has been updated.

Main success scenario:
1. Receptionist selects the "Transfer data" option from the menu.
2. PRS verifies the security credentials of the receptionist.
3. Data is transferred.
4. PRS has been updated.

Extensions:
2a. The receptionist does not have the necessary security credentials.
2a.1. An error message is displayed.
2a.2. The receptionist backs out of the use case.

UML sequence diagrams are used to model the interactions between the actors and the objects
within a system. A sequence diagram shows the sequence of interactions that take place during a
particular use case or use case instance. The objects and actors involved are listed along the top
of the diagram, with a dotted line drawn vertically from these. Interactions between objects are
indicated by annotated arrows.

STRUCTURAL MODELS

Structural models of software display the organization of a system in terms of the components
that make up that system and their relationships. Structural models may be static models, which
show the structure of the system design, or dynamic models, which show the organization of the
system when it is executing. You create structural models of a system when you are discussing
and designing the system architecture.

UML class diagrams are used when developing an object-oriented system model to show the
classes in a system and the associations between these classes. An object class can be thought of
as a general definition of one kind of system object. An association is a link between classes that
indicates that there is some relationship between these classes. When you are developing models
during the early stages of the software engineering process, objects represent something in the
real world, such as a patient, a prescription, doctor, etc.

Generalization is an everyday technique that we use to manage complexity. In modeling
systems, it is often useful to examine the classes in a system to see if there is scope for
generalization. In object-oriented languages, such as Java, generalization is implemented using
the class inheritance mechanisms built into the language. In a generalization, the attributes and
operations associated with higher-level classes are also associated with the lower-level classes.
The lower-level classes, or subclasses, inherit the attributes and operations from their
superclasses. These lower-level classes then add more specific attributes and operations.

An aggregation model shows how classes that are collections are composed of other classes.
Aggregation models are similar to the part-of relationship in semantic data models.

BEHAVIORAL MODELS

Behavioral models are models of the dynamic behavior of a system as it is executing. They
show what happens or what is supposed to happen when a system responds to a stimulus from its
environment. Two types of stimuli:

 Some data arrives that has to be processed by the system.

 Some event happens that triggers system processing. Events may have associated data,
although this is not always the case.

Many business systems are data-processing systems that are primarily driven by data. They are
controlled by the data input to the system, with relatively little external event processing. Data-
driven models show the sequence of actions involved in processing input data and generating an
associated output. They are particularly useful during the analysis of requirements as they can be
used to show end-to-end processing in a system. Data-driven models can be created using
UML activity diagrams:

Data-driven models can also be created using UML sequence diagrams:

Real-time systems are often event-driven, with minimal data processing. For example, a landline
phone switching system responds to events such as 'receiver off hook' by generating a dial
tone. Event-driven models shows how a system responds to external and internal events. It is
based on the assumption that a system has a finite number of states and that events (stimuli) may
cause a transition from one state to another. Event-driven models can be created using
UML state diagrams:
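The assumption behind an event-driven model, a finite set of states with events (stimuli) causing transitions, can also be sketched directly as a transition table. The states and events below are hypothetical, loosely following the landline phone example:

```python
# Event-driven model as a finite state machine:
# a transition table maps (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "receiver off hook"): "dial tone",
    ("dial tone", "digit dialed"): "dialing",
    ("dialing", "call connected"): "in call",
    ("in call", "receiver on hook"): "idle",
    ("dial tone", "receiver on hook"): "idle",
}

def handle(state, event):
    """Return the next state; events with no transition leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["receiver off hook", "digit dialed", "call connected",
              "receiver on hook"]:
    state = handle(state, event)
print(state)  # "idle": the system returns to its initial state after a call
```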

OOAD - OBJECT MODEL

The object model visualizes the elements in a software application in terms of objects. In this
chapter, we will look into the basic concepts and terminologies of object–oriented systems.

Objects and Classes

The concepts of objects and classes are intrinsically linked with each other and form the
foundation of object–oriented paradigm.

Object

An object is a real-world element in an object–oriented environment that may have a physical or
a conceptual existence. Each object has −

 Identity that distinguishes it from other objects in the system.

 State that determines the characteristic properties of an object as well as the values of the
properties that the object holds.

 Behavior that represents externally visible activities performed by an object in terms of
changes in its state.

Objects can be modelled according to the needs of the application. An object may have a
physical existence, like a customer, a car, etc.; or an intangible conceptual existence, like a
project, a process, etc.

Class

A class represents a collection of objects having same characteristic properties that exhibit
common behavior. It gives the blueprint or description of the objects that can be created from it.
Creation of an object as a member of a class is called instantiation. Thus, object is an instance of
a class.

The constituents of a class are −

 A set of attributes for the objects that are to be instantiated from the class. Generally,
different objects of a class have some difference in the values of the attributes. Attributes
are often referred to as class data.

 A set of operations that portray the behavior of the objects of the class. Operations are
also referred to as functions or methods.

Example

Let us consider a simple class, Circle, that represents the geometrical figure circle in a two–
dimensional space. The attributes of this class can be identified as follows −

 x–coord, to denote x–coordinate of the center

 y–coord, to denote y–coordinate of the center

 a, to denote the radius of the circle

Some of its operations can be defined as follows −

 findArea(), method to calculate area

 findCircumference(), method to calculate circumference

 scale(), method to increase or decrease the radius

During instantiation, values are assigned for at least some of the attributes. If we create an object
my_circle, we can assign values like x-coord : 2, y-coord : 3, and a : 4 to depict its state. Now, if
the operation scale() is performed on my_circle with a scaling factor of 2, the value of the
variable a will become 8. This operation brings a change in the state of my_circle, i.e., the
object has exhibited certain behaviour.
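The Circle class above can be written out as a minimal Python sketch. The attribute and operation names follow the text; the use of math.pi and the method bodies are assumptions.

```python
import math

class Circle:
    """A geometrical circle in two-dimensional space."""

    def __init__(self, x_coord, y_coord, a):
        self.x_coord = x_coord  # x-coordinate of the center
        self.y_coord = y_coord  # y-coordinate of the center
        self.a = a              # radius of the circle

    def find_area(self):
        return math.pi * self.a ** 2

    def find_circumference(self):
        return 2 * math.pi * self.a

    def scale(self, factor):
        """Increase or decrease the radius, changing the object's state."""
        self.a *= factor

my_circle = Circle(2, 3, 4)  # instantiation: state is x=2, y=3, a=4
my_circle.scale(2)           # behavior: the radius a becomes 8
```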

Encapsulation and Data Hiding

Encapsulation

Encapsulation is the process of binding both attributes and methods together within a class.
Through encapsulation, the internal details of a class can be hidden from outside. It permits the
elements of the class to be accessed from outside only through the interface provided by the class.

Data Hiding

Typically, a class is designed such that its data (attributes) can be accessed only by its class
methods and insulated from direct outside access. This process of insulating an object’s data is
called data hiding or information hiding.

Example

In the class Circle, data hiding can be incorporated by making attributes invisible from outside
the class and adding two more methods to the class for accessing class data, namely −

 setValues(), method to assign values to x-coord, y-coord, and a

 getValues(), method to retrieve values of x-coord, y-coord, and a

Here the private data of the object my_circle cannot be accessed directly by any method that is
not encapsulated within the class Circle. It should instead be accessed through the methods
setValues() and getValues().
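A sketch of this data hiding in Python, which marks attributes as private by convention with a leading underscore; the setValues()/getValues() accessors follow the text, and everything else is an assumption.

```python
class Circle:
    """Circle whose data is insulated from direct outside access."""

    def __init__(self):
        # Leading underscores signal that these attributes are private
        # and should only be reached through the class's own methods.
        self._x_coord = 0
        self._y_coord = 0
        self._a = 0

    def set_values(self, x_coord, y_coord, a):
        """Assign values to x-coord, y-coord, and a."""
        self._x_coord = x_coord
        self._y_coord = y_coord
        self._a = a

    def get_values(self):
        """Retrieve the values of x-coord, y-coord, and a."""
        return self._x_coord, self._y_coord, self._a

my_circle = Circle()
my_circle.set_values(2, 3, 4)
print(my_circle.get_values())  # (2, 3, 4)
```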

Message Passing

Any application requires a number of objects interacting in a harmonious manner. Objects in a
system may communicate with each other using message passing. Suppose a system has two
objects: obj1 and obj2. The object obj1 sends a message to object obj2 if obj1 wants obj2 to
execute one of its methods.

The features of message passing are −

 Message passing between two objects is generally unidirectional.

 Message passing enables all interactions between objects.

 Message passing essentially involves invoking class methods.

 Objects in different processes can be involved in message passing.
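In most object-oriented languages, sending a message amounts to invoking a method on another object. A minimal sketch with two hypothetical objects, obj1 and obj2:

```python
class Logger:
    """obj2: the receiver of the message."""

    def record(self, text):
        return f"logged: {text}"

class OrderProcessor:
    """obj1: the sender, which holds a reference to the receiver."""

    def __init__(self, logger):
        self.logger = logger

    def process(self, order_id):
        # obj1 sends a message to obj2 by invoking one of obj2's methods.
        return self.logger.record(f"order {order_id} processed")

obj2 = Logger()
obj1 = OrderProcessor(obj2)
print(obj1.process(42))  # logged: order 42 processed
```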

Inheritance

Inheritance is the mechanism that permits new classes to be created out of existing classes by
extending and refining its capabilities. The existing classes are called the base classes/parent
classes/super-classes, and the new classes are called the derived classes/child classes/subclasses.

The subclass can inherit or derive the attributes and methods of the super-class(es), provided
that the super-class allows it. Besides, the subclass may add its own attributes and methods and
may modify any of the super-class methods. Inheritance defines an “is – a” relationship.

Example

From a class Mammal, a number of classes can be derived such as Human, Cat, Dog, Cow, etc.
Humans, cats, dogs, and cows all have the distinct characteristics of mammals. In addition, each
has its own particular characteristics. It can be said that a cow “is – a” mammal.
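The Mammal example, sketched in Python. The specific attributes and methods are assumptions; the point is that each subclass inherits the superclass's members and adds its own.

```python
class Mammal:
    """Super-class: characteristics common to all mammals."""

    def __init__(self, name):
        self.name = name

    def is_warm_blooded(self):
        return True  # inherited unchanged by every subclass

class Cow(Mammal):
    """Subclass: a cow 'is-a' mammal, with its own particular behavior."""

    def moo(self):
        return f"{self.name} says moo"

daisy = Cow("Daisy")
print(daisy.is_warm_blooded())  # True, inherited from Mammal
print(daisy.moo())              # "Daisy says moo", added by Cow
```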

Types of Inheritance

 Single Inheritance − A subclass derives from a single super-class.

 Multiple Inheritance − A subclass derives from more than one super-class.

 Multilevel Inheritance − A subclass derives from a super-class which in turn is derived
from another class, and so on.

 Hierarchical Inheritance − A class has a number of subclasses each of which may have
subsequent subclasses, continuing for a number of levels, so as to form a tree structure.

 Hybrid Inheritance − A combination of multiple and multilevel inheritance so as to
form a lattice structure.

Polymorphism

Polymorphism is originally a Greek word that means the ability to take multiple forms. In object-
oriented paradigm, polymorphism implies using operations in different ways, depending upon
the instance they are operating upon. Polymorphism allows objects with different internal
structures to have a common external interface. Polymorphism is particularly effective while
implementing inheritance.

Example

Let us consider two classes, Circle and Square, each with a method findArea(). Though the name
and purpose of the methods in the classes are same, the internal implementation, i.e., the
procedure of calculating area is different for each class. When an object of class Circle invokes
its findArea() method, the operation finds the area of the circle without any conflict with the
findArea() method of the Square class.
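Sketched in Python, the two classes expose the same external interface, findArea(), while the internal implementations differ; the constructor parameters are assumptions.

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def find_area(self):
        return math.pi * self.radius ** 2  # circle-specific implementation

class Square:
    def __init__(self, side):
        self.side = side

    def find_area(self):
        return self.side ** 2  # square-specific implementation

# The same message produces different behavior depending on the instance.
shapes = [Circle(1), Square(3)]
areas = [shape.find_area() for shape in shapes]
```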

Generalization and Specialization

Generalization and specialization represent a hierarchy of relationships between classes, where
subclasses inherit from super-classes.

Generalization

In the generalization process, the common characteristics of classes are combined to form a class
in a higher level of hierarchy, i.e., subclasses are combined to form a generalized super-class. It
represents an “is – a – kind – of” relationship. For example, “car is a kind of land vehicle”, or
“ship is a kind of water vehicle”.

Specialization

Specialization is the reverse process of generalization. Here, the distinguishing features of groups
of objects are used to form specialized classes from existing classes. It can be said that the
subclasses are the specialized versions of the super-class.

SYSTEMS DEVELOPMENT

PROGRAM

A computer program is a collection of instructions that can be executed by a computer to
perform a specific task.

A computer program is usually written by a computer programmer in a programming language.
From the program in its human-readable form of source code, a compiler or assembler can
derive machine code—a form consisting of instructions that the computer can directly execute.
Alternatively, a computer program may be executed with the aid of an interpreter.

A collection of computer programs, libraries, and related data are referred to as software.
Computer programs may be categorized along functional lines, such as application
software and system software. The underlying method used for some calculation or manipulation
is known as an algorithm.

PROGRAMMING

Programming is the process of creating a set of instructions that tell a computer how to perform a
task. Programming can be done using a variety of computer programming languages, such as
JavaScript, Python, and C++.

PROGRAMMING LANGUAGE

A programming language is a computer language programmers use to develop software
programs, scripts, or other sets of instructions for computers to execute.

TYPES OF PROGRAMMING LANGUAGES

Programming languages can be categorized into the following two types:

 Low level language


 High level language

1. Low level language

This is the language most directly understood by the computer when performing its operations.
It can be further categorized into:

a. Machine Language (1GL)

Machine language consists of strings of binary numbers (i.e. 0s and 1s) and is the only
language the processor directly understands. Machine language has the merits of very fast
execution speed and efficient use of primary memory.
Merits:

 It is directly understood by the processor, so it has faster execution time, since programs
written in this language need not be translated.

 It doesn’t need larger memory.

Demerits:

 It is very difficult to program using 1GL since all the instructions are to be represented by
0s and 1s.

 Use of this language makes programming time consuming.

 It is difficult to find error and to debug.

 It can be used by experts only.

b. Assembly Language

Assembly language is also known as a low-level language because, to design a program, the
programmer requires detailed knowledge of the hardware specification. This language uses
mnemonic codes (symbolic operation codes like ‘ADD’ for addition) in place of 0s and 1s. The
program is converted into machine code by an assembler. The resulting program is referred to
as object code.

Merits:

 It makes programming easier than 1GL since it uses mnemonic codes for programming.
E.g. ADD for addition, SUB for subtraction, DIV for division, etc.

 It makes programming process faster.

 Error can be identified much easily compared to 1GL.

 It is easier to debug than machine language.

Demerits:

 Programs written in this language are not directly understandable by the computer, so
translators should be used.

 It is a hardware-dependent language, so programmers are forced to think in terms of the
computer’s architecture rather than the problem being solved.

 Being a machine-dependent language, programs written in it are not very portable.

 Programmers must know its mnemonics codes to perform any task.

Differences between Machine-Level language and Assembly language

Machine-level language                           Assembly language

The machine-level language comes at the          The assembly language comes above the machine
lowest level in the hierarchy, so it has         language, meaning it has less abstraction from
zero abstraction from the hardware.              the hardware.

It cannot be easily understood by humans.        It is easy to read, write, and maintain.

The machine-level language is written in         The assembly language is written in simple
binary digits, i.e., 0 and 1.                    English-like words, so it is easily
                                                 understandable by users.

It does not require any translator, as the       In assembly language, the assembler is used to
machine code is directly executed by the         convert the assembly code into machine code.
computer.

It is a first-generation programming             It is a second-generation programming
language.                                        language.

2. High level language

Instructions in this language closely resemble human language, or English-like words. It uses
mathematical notations to perform tasks. A high-level language is easier to learn, requires less
time to write, and makes errors easier to find and fix. A high-level language is converted into
machine language by one of two different language translator programs: an interpreter or a
compiler.

High level language can be further categorized as:

a. Procedural-Oriented language (3GL)

Procedural programming is a methodology for modeling the problem being solved by
determining the steps, and the order of those steps, that must be followed in order to reach a
desired outcome or specific program state. These languages are designed to express the logic
and the procedure of a problem to be solved. They include languages such as Pascal, COBOL,
C, FORTRAN, etc.

Merits:

 Because of their flexibility, procedural languages are able to solve a variety of problems.

 The programmer does not need to think in terms of the computer architecture, and can
stay focused on the problem.

 Programs written in this language are portable.

Demerits:

 It is easier, but it needs a more powerful processor and larger memory.

 It needs to be translated; therefore its execution time is longer.

b. Problem-Oriented language (4GL)

It allows users to specify what the output should be, without describing all the details of how
the data should be manipulated to produce that result. This is one step beyond 3GLs. These
languages are result-oriented and include database query languages.

E.g. Visual Basic, C#, PHP, etc.
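The contrast between the 4GL and 3GL styles can be sketched with Python's built-in sqlite3 module. The table and data below are made up for illustration: the SQL query states what result is wanted (the 4GL, declarative style), while the loop spells out how to compute it (the 3GL, procedural style).

```python
import sqlite3

# Set up an in-memory database with a hypothetical "staff" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [("Ama", 900), ("Kofi", 1500), ("Esi", 2000)])

# 4GL / declarative style: state WHAT is wanted, not HOW to find it.
high_paid = conn.execute(
    "SELECT name FROM staff WHERE salary > 1000 ORDER BY name").fetchall()

# 3GL / procedural style: spell out the steps (loop, test, collect, sort).
rows = conn.execute("SELECT name, salary FROM staff").fetchall()
result = []
for name, salary in rows:
    if salary > 1000:
        result.append(name)
result.sort()

# Both approaches produce the same answer.
assert [n for (n,) in high_paid] == result
```

The declarative query leaves the "how" (scanning, filtering, sorting) to the database engine, which is exactly the productivity gain the 4GL objectives above describe.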

The objectives of 4GL are to:

 Increase the speed of developing programs.

 Minimize user’s effort to obtain information from computer.

 Reduce errors while writing programs.

Merits:

 The programmer need not think about the procedure of the program, so programming is
much easier.

Demerits:

 It is easier, but it needs a more powerful processor and larger memory.

 It needs to be translated; therefore its execution time is longer.

c. Natural language (5GL)

Natural languages are still in the developing stage; they would let us write statements that look
like normal sentences.

Merits:

 Easy to program.

 Since the programs use normal sentences, they are easy to understand.

 The programs designed using 5GL will have artificial intelligence (AI).

 The programs would be much more interactive and interesting.

Demerits:

 It is slower than previous-generation languages, as it must be completely translated into
binary code, which is a tedious task.

 Highly advanced and expensive electronic devices are required to run programs
developed in 5GL. Therefore, it is an expensive approach.

DIFFERENCES BETWEEN LOW-LEVEL LANGUAGE AND HIGH-LEVEL LANGUAGE:

Low-level language | High-level language

1. It is a machine-friendly language: the computer understands machine language, which is represented in 0s and 1s. | It is a user-friendly language, written in simple English words that can be easily understood by humans.
2. It executes quickly, since it needs little or no translation. | It takes more time to execute, because it must first be translated into machine code.
3. It requires an assembler to convert the assembly code into machine code. | It requires a compiler (or an interpreter) to convert the high-level instructions into machine code.
4. Machine code cannot run on all machines, so it is not a portable language. | High-level code can run on all platforms, so it is a portable language.
5. It is memory efficient. | It is less memory efficient.
6. Debugging and maintenance are harder in a low-level language. | Debugging and maintenance are easier in a high-level language.

PSEUDOCODE

Pseudocode is an informal way of describing a program that does not require any strict
programming-language syntax or underlying technology considerations. It is used for creating an
outline or a rough draft of a program. Pseudocode summarizes a program’s flow but excludes
underlying details. System designers write pseudocode to ensure that programmers understand a
software project's requirements and align code accordingly.

Pseudocode is not an actual programming language, so it cannot be compiled into an executable
program. It uses short terms or simple English-language syntax to write code for programs
before it is actually converted into a specific programming language. This is done to identify top-
level flow errors and to understand the data flows that the final program is going to use, which
helps save time during actual programming, as conceptual errors have already been corrected.
First, the program description and functionality are gathered, and then pseudocode is used to
create statements that achieve the required results. Detailed pseudocode is inspected and verified
by the designer’s team or by programmers to confirm that it matches the design specifications.
Catching errors or wrong program flow at the pseudocode stage is beneficial for development, as
it is less costly than catching them later. Once the pseudocode is accepted by the team, it is
rewritten using the vocabulary and syntax of a programming language. The purpose of
pseudocode is to express an algorithm's key ideas efficiently: it is used when planning an
algorithm to sketch out the structure of the program before the actual coding takes place.

ADVANTAGES OF PSEUDOCODE

 Pseudocode is understood by programmers of all types.

 It enables the programmer to concentrate only on the algorithmic part of code
development.

 It is language independent and cannot be compiled into an executable program. For
example, the Java code if (i < 10) { i++; } becomes the pseudocode: if i is less than 10,
increment i by 1.

WHY USE PSEUDOCODE

1. Better readability. Often, programmers work alongside people from other domains, such
as mathematicians, business partners, managers, and so on. Using pseudocode to explain
the mechanics of the code will make the communication between the different
backgrounds easier and more efficient.

2. Ease up code construction. When the programmer goes through the process of
developing and generating pseudocode, the process of converting that into real code
written in any programming language will become much easier and faster as well.

3. A good middle point between flowchart and code. Moving directly from the idea to the
flowchart to the code is not always a smooth ride. That’s where pseudocode presents a
way to make the transition between the different stages somewhat smoother.

4. Act as a start point for documentation. Documentation is an essential aspect of


building a good project. Often, starting documentation is the most difficult part. However,
pseudocode can represent a good starting point for what the documentation should
include. Sometimes, programmers include the pseudocode as a docstring at the beginning
of the code file.

5. Easier bug detection and fixing. Since pseudocode is written in a human-readable
format, it is easier to edit and discover bugs before actually writing a single line of code.
Editing pseudocode can be done more efficiently than testing, debugging, and fixing
actual code.

THE MAIN CONSTRUCTS OF PSEUDOCODE

The core of pseudocode is the ability to represent six programming constructs (always written in
uppercase): SEQUENCE, CASE, WHILE, REPEAT-UNTIL, FOR, and IF-THEN-ELSE. These
constructs, also called keywords, are used to describe the control flow of the algorithm.

1. SEQUENCE: linear tasks performed sequentially, one after the other.

2. WHILE: a loop with its condition tested at the beginning.

3. REPEAT-UNTIL: a loop with its condition tested at the bottom.

4. FOR: another way of looping.

5. IF-THEN-ELSE: a conditional statement changing the flow of the algorithm.

6. CASE: the generalized form of IF-THEN-ELSE.
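For example, the constructs above can be combined into a short language-independent fragment. The task here is hypothetical: totalling and averaging a list of exam scores.

```
SET total TO 0
FOR each score IN scores
    ADD score TO total
ENDFOR
IF scores is not empty THEN
    COMPUTE average AS total divided by the number of scores
    DISPLAY average
ELSE
    DISPLAY "no scores entered"
ENDIF
```

The fragment uses SEQUENCE (the statements run top to bottom), FOR, and IF-THEN-ELSE, with the keywords in uppercase and the loop and branch bodies indented.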


Although these six constructs are the ones used most often, they are theoretically enough to
implement any algorithm. Still, you might find yourself needing a few more, based on your
specific application. Perhaps the two most needed commands are:

1. Invoking classes or calling functions (using the CALL keyword).

2. Handling exceptions (using EXCEPTION, WHEN keywords).


Of course, based on the field you’re working in, you might add more constructs (keywords) to
your pseudocode glossary, as long as you never use these keywords as variable names and they
are well known within your field or company.

RULES OF WRITING PSEUDOCODE

When writing pseudocode, everyone has their own style of presenting things, since pseudocode
is read by humans and not by a computer; its rules are less rigorous than those of a programming
language. However, there are some simple rules that help make pseudocode more universally
understood.

1. Always capitalize the initial word (often one of the main 6 constructs).

2. Have only one statement per line.

3. Indent to show hierarchy, improve readability, and show nested constructs.

4. Always end multiline sections using any of the END keywords (ENDIF, ENDWHILE,
etc.).

5. Keep your statements programming language independent.

6. Use the naming domain of the problem, not that of the implementation. E.g., “Append
the last name to the first name” instead of “name = first+ last.”

7. Keep it simple, concise, and readable.

Following these rules helps you generate readable pseudocode and recognize pseudocode that is
not well written.

IDE

An integrated development environment (IDE) is an application that facilitates application
development. IDEs are designed to encompass all programming tasks in one application.
Therefore, IDEs offer a central interface featuring all the tools a developer needs, including the
following:

 Code editor: This feature is a text editor designed for writing and editing source code.
Source code editors are distinguished from text editors because they enhance or simplify
the writing and editing of code.

 Compiler: This tool transforms source code written in a human readable/writable
language into a form executable by a computer.

 Debugger: This tool is used during testing to help debug application programs.

 Build automation tools: These tools automate common developer tasks.

In addition, some IDEs might also include the following:

 Class browser: This tool is used to examine and reference the properties of an object-
oriented class hierarchy.

 Object browser: This feature is used to examine the objects instantiated in a running
application program.

 Class hierarchy diagram: This tool allows the programmer to visualize the structure of
object-oriented programming code.

The IDE may be a stand-alone application or may be included as part of one or more compatible
applications.

HISTORY OF IDE

Before IDEs, developers wrote their programs in text editors. They would write and save an
application in a text editor; then run the compiler, taking note of the error messages; then go back
to the text editor to revise the code.

In 1983, Borland Ltd. acquired a Pascal compiler and released it as Turbo Pascal, which featured,
for the first time, an integrated editor and compiler.

While Turbo Pascal launched the idea of an integrated development environment, many believe
Microsoft’s Visual Basic (VB), launched in 1991, was the first real IDE. Visual Basic was built
on the older BASIC language, which was a popular programming language throughout the 1980s.
With the emergence of Visual Basic, programming could be thought of in graphical terms, and
significant productivity benefits emerged.

WHY DO DEVELOPERS USE IDES?

An IDE allows developers to start programming new applications quickly because multiple
utilities don’t need to be manually configured and integrated as part of the setup process.
Developers also don’t need to spend hours individually learning how to use different tools when
every utility is represented in the same workbench. This can be especially useful for onboarding
new developers who can rely on an IDE to get up to speed on a team’s standard tools and
workflows. In fact, most features of IDEs are meant to save time, like intelligent code
completion and automated code generation, which removes the need to type out full character
sequences.

Other common IDE features are meant to help developers organize their workflow and solve
problems. IDEs parse code as it is written, so bugs caused by human error are identified in real-
time. Because utilities are represented by a single GUI, developers can execute actions without
switching between applications. Syntax highlighting is also common in most IDEs, which uses
visual cues to distinguish grammar in the text editor. Some IDEs additionally include class and
object browsers, as well as class hierarchy diagrams for certain languages.

It is possible to develop applications without an IDE, or for each developer to essentially build
their own IDE by manually integrating various utilities with a lightweight text editor like Vim or
Emacs. For some developers the benefit of this approach is the ultra-customization and control it
offers. In an enterprise context, though, the time saved, environment standardization,
and automation features of modern IDEs usually outweigh other considerations.

Today, most enterprise development teams opt for a pre-configured IDE that is best suited to
their specific use case, so the question is not whether to adopt an IDE, but rather which IDE to
select.

TYPES OF IDE

There are a variety of different IDEs, catering to the many different ways developers work and
the different types of code they produce. There are IDEs that are designed to work with one
specific language, cloud-based IDEs, IDEs customized for the development of mobile
applications or for HTML, and IDEs meant specifically for Apple development or Microsoft
development.

Multi-Language IDE

Multi-language IDEs, such as Eclipse, NetBeans, Komodo, Aptana and Geany, support multiple
programming languages.

 Eclipse: Supports C, C++, Python, Perl, PHP, Java, Ruby and more. This free and open
source editor is the model for many development frameworks. Eclipse began as a Java
development environment and has expanded through plugins. Eclipse is managed and
directed by the Eclipse.org Consortium.

 NetBeans: Supports Java, JavaScript, PHP, Python, Ruby, C, C++ and more. This option
is also free and open source. All the functions of the IDE are provided by modules that
each provide a well-defined function. Support for other programming languages can be
added by installing additional modules.

 Komodo IDE: Supports Perl, Python, Tcl, PHP, Ruby, Javascript and more. This
enterprise-level tool has a higher price point.

 Aptana: Supports HTML, CSS, JavaScript, AJAX and others via plugins. This is a
popular choice for web app development.

 Geany: Supports C, Java, PHP, HTML, Python, Perl, Pascal and many more. This is a
highly customizable environment with a large set of plugins

IDE for Mobile Development

There are IDEs specifically for mobile development, including PhoneGap and Appcelerator's
Titanium Mobile.

Many IDEs, especially those that are multi-language, have mobile-development plugins. For
instance, Eclipse has this functionality.

HTML IDE

Some of the most popular IDEs are those for developing HTML applications. For example, IDEs
such as HomeSite, DreamWeaver or FrontPage automate many tasks involved in web site
development.

Cloud-Based IDE

Cloud-based IDEs are starting to become mainstream. The capabilities of these web-based IDEs
are increasing rapidly, and most major vendors will likely need to offer one to be competitive.
Cloud IDEs give developers access to their code from anywhere. For example, Nitrous is a
cloud-based development environment platform that supports Ruby, Python, Node.js and more.
Cloud9 IDE supports more than 40 languages, including PHP, Ruby, Python, JavaScript with
Node.js, and Go. Heroku is a cloud-based development platform as a service (PaaS), supporting
several programming languages.

IDE Specific to Microsoft or Apple

These IDEs cater to those working in Microsoft or Apple environments:

 Visual Studio: Supports Visual C++, VB.NET, C#, F# and others. Visual Studio is
Microsoft's IDE and is designed to create applications for the Microsoft platform.

 MonoDevelop: Supports C/C++, Visual Basic, C# and other .NET languages.

 Xcode: Supports the Objective-C and Swift languages, and Cocoa and Cocoa Touch APIs.
This IDE is just for creating iOS and Mac applications and includes an iPhone/iPad
simulator and GUI builder.

 Espresso: Supports HTML, CSS, XML, JavaScript and PHP. This is a tool for Mac web
developers.

 Coda: Supports PHP, JavaScript, CSS, HTML, AppleScript and Cocoa API. Coda bills
itself as "one-window development" for the Mac user.

IDE for Specific Languages

Some IDEs cater to developers working in a single language. These include CodeLite and C-Free
for C/C++, Jikes and Jcreator for Java, Idle for Python, and RubyMine for Ruby/Rails.

Application security and the integrated development environment.

While application security is a critical priority for development teams, managing security testing
within an integrated development environment has often been a significant challenge.
Developers who are pressed to meet deadlines in agile or waterfall software development
processes are often already managing a variety of separate tools. New AppSec technology that
lacks flexible APIs and can’t easily be used within an integrated development environment will
often see low adoption, leading to greater security challenges and more difficulty meeting the
requirements of regulatory frameworks such as HIPAA and Sarbanes-Oxley (SOX).

To improve application security, Veracode offers a suite of desktop, web and mobile app security
testing solutions in a cloud-based service that can be seamlessly combined in an integrated
development environment to find and fix flaws at any point in the SDLC.

Veracode solutions for the integrated development environment

Veracode is a leading provider of application security testing technology that enables enterprises
and development teams to ensure the security of software that is built, bought and assembled. As
an easy-to-use, SaaS-based service, Veracode allows developers to test for vulnerabilities
throughout the development process without having to open a new environment or learn a new
tool. The Veracode Application Security Platform integrates with the developer’s integrated
development environment as well as the security and risk-tracking tools that developers already
use. Flexible APIs enable development teams to create custom integrations or use community
integrations built by the open source community and other technology partners.

Veracode integrates with Eclipse, IBM RAD and other Eclipse-based IDEs, IntelliJ, and Visual
Studio. Before checking in code, Veracode allows developers to start a scan, review findings and
triage results all from within their integrated development environment.

Veracode’s testing solutions for the integrated development environment include Static Analysis,
Web Application Scanning, Software Composition Analysis, Vendor Application Security
Testing and more.

Veracode Static Analysis IDE Scan: testing within the integrated development environment.

Veracode Static Analysis IDE Scan is a security testing solution that brings scanning right into
an integrated development environment to test for flaws as developers write code. Veracode
Static Analysis IDE Scan runs in the background of an integrated development environment and
provides immediate feedback on potential vulnerabilities, highlighting code that may be flawed
and providing contextual tips on how to fix it. Veracode Static Analysis IDE Scan provides
insight into the type of flaw, such as SQL injection or buffer overflow, as well as the severity of
the flaw and the exact line of code where the flaw is located.


BENEFITS OF IDE

The overall goal and main benefit of an integrated development environment is improved
developer productivity. IDEs boost productivity by reducing setup time, increasing the speed of
development tasks, keeping developers up to date and standardizing the development process.

 Faster setup: Without an IDE interface, developers would need to spend time configuring
multiple development tools. With the application integration of an IDE, developers have
the same set of capabilities in one place, without the need for constantly switching tools.

 Faster development tasks: Tighter integration of all development tasks improves
developer productivity. For example, code can be parsed and syntax checked while being
edited, providing instant feedback when syntax errors are introduced. Developers don’t
need to switch between applications to complete tasks. In addition, the IDE’s tools and
features help developers organize resources, prevent mistakes and take shortcuts.

Further, IDEs streamline development by encouraging holistic thinking. They force developers
to think of their actions in terms of the entire development lifecycle, rather than as a series of
discrete tasks.

 Continual learning: Staying up to date and educated is another benefit. For instance, the
IDE’s help topics are constantly being updated, as well as new samples, project templates,
etc. Programmers who are continually learning and current with best practices are more
likely to contribute value to the team and the enterprise, and to boost productivity.

 Standardization: The IDE interface standardizes the development process, which helps
developers work together more smoothly and helps new hires get up to speed more
quickly.

LANGUAGES SUPPORTED BY IDE

Some IDEs are dedicated to a specific programming language or set of languages, creating a
feature set that aligns with the particulars of that language. For instance, Xcode supports the
Objective-C and Swift languages and the Cocoa and Cocoa Touch APIs.

However, there are many multiple-language IDEs, such as Eclipse (C, C++, Python, Perl, PHP,
Java, Ruby and more), Komodo (Perl, Python, Tcl, PHP, Ruby, Javascript and more) and
NetBeans (Java, JavaScript, PHP, Python, Ruby, C, C++ and more).

Support for alternative languages is often provided by plugins. For example, Flycheck is a syntax
checking extension for GNU Emacs 24 with support for 39 languages.

ALGORITHM

An algorithm is a finite list of instructions, most often used in solving problems or performing
tasks.

An algorithm is a set of step-by-step procedures, or a set of rules to follow, for completing a
specific task or solving a particular problem. Algorithms are all around us. The recipe for baking
a cake, the method we use to solve a long division problem, and the process of doing laundry are
all examples of an algorithm. Here’s what baking a cake might look like, written out as a list of
instructions, just like an algorithm:

1. Preheat the oven

2. Gather the ingredients

3. Measure out the ingredients

4. Mix together the ingredients to make the batter

5. Grease a pan

6. Pour the batter into the pan

7. Put the pan in the oven

8. Set a timer

9. When the timer goes off, take the pan out of the oven

10. Enjoy!

Algorithmic programming is all about writing a set of rules that instruct the computer how to
perform a task. A computer program is essentially an algorithm that tells the computer what
specific steps to execute, in what specific order, in order to carry out a specific task. Algorithms
are written using particular syntax, depending on the programming language being used.

TYPES OF ALGORITHMS

Algorithms are classified based on the concepts that they use to accomplish a task. While there
are many types of algorithms, the most fundamental types of computer science algorithms are:

1. Divide and conquer algorithms – divide the problem into smaller subproblems of the
same type; solve those smaller problems, and combine those solutions to solve the
original problem.

2. Brute force algorithms – try all possible solutions until a satisfactory solution is found.

3. Randomized algorithms – use a random number at least once during the computation to
find a solution to the problem.

4. Greedy algorithms – find an optimal solution at the local level with the intent of finding
an optimal solution for the whole problem.

5. Recursive algorithms – solve the lowest and simplest version of a problem to then solve
increasingly larger versions of the problem until the solution to the original problem is
found.

6. Backtracking algorithms – divide the problem into subproblems, each which can be
attempted to be solved; however, if the desired solution is not reached, move backwards
in the problem until a path is found that moves it forward.

7. Dynamic programming algorithms – break a complex problem into a collection of
simpler subproblems, then solve each of those subproblems only once, storing their
solutions for future use instead of re-computing them.
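As one illustration, binary search is a classic divide-and-conquer algorithm: each step halves the portion of a sorted list that could still contain the target. Below is a minimal Python sketch; the function name and data are illustrative.

```python
def binary_search(items, target):
    """Divide-and-conquer search over a sorted list.

    Each step compares the middle element and keeps only the half
    that can still contain the target, halving the problem size.
    Returns the index of target, or -1 if it is absent.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1    # discard the left half
        else:
            high = mid - 1   # discard the right half
    return -1

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # 4
print(binary_search([2, 5, 8, 12, 16, 23], 7))   # -1
```

Because the search space halves on every comparison, the number of steps grows only logarithmically with the size of the list.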

WHY ARE ALGORITHMS IMPORTANT TO UNDERSTAND?

Algorithmic thinking, or the ability to define clear steps to solve a problem, is crucial in many
different fields. Even if we’re not conscious of it, we use algorithms and algorithmic thinking all
the time. Algorithmic thinking allows students to break down problems and conceptualize
solutions in terms of discrete steps. Being able to understand and implement an algorithm
requires students to practice structured thinking and reasoning abilities.

INTRODUCTION TO DATA STRUCTURES

A data structure is a way of collecting and organising data so that we can perform operations on
the data effectively. Data structures are about representing data elements in terms of some
relationship, for better organization and storage. For example, suppose we have some data about
a player: the name "Virat" and the age 26. Here "Virat" is of the String data type and 26 is
of the integer data type.

We can organize this data as a record, such as a Player record, which holds both the player's
name and age. Now we can collect and store players' records in a file or database as a data
structure. For example: "Dhoni" 30, "Gambhir" 31, "Sehwag" 33.
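A minimal sketch of the player records above in Python, using a small record type. The Player class and its field names are illustrative, not part of any standard library.

```python
from dataclasses import dataclass

@dataclass
class Player:
    """A simple record bundling a player's name and age."""
    name: str
    age: int

# The records mentioned in the text, stored in a list (itself a data structure).
players = [Player("Virat", 26), Player("Dhoni", 30),
           Player("Gambhir", 31), Player("Sehwag", 33)]

# Once the data is organized as records, operations become straightforward:
oldest = max(players, key=lambda p: p.age)
print(oldest.name)  # Sehwag
```

The record groups a String and an integer under one entity, exactly as the Player record described in the text does.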

If you are aware of object-oriented programming concepts, then a class does the same thing: it
collects different types of data under one single entity. The only difference is that data
structures also provide techniques to access and manipulate data efficiently.

In simple language, data structures are structures programmed to store ordered data, so that
various operations can be performed on them easily. A data structure represents how data is to be
organized in memory. It should be designed and implemented in such a way that it reduces
complexity and increases efficiency.

A data structure is a specialized format for organizing, processing, retrieving and storing data.
There are several basic and advanced types of data structures, all designed to arrange data to suit
a specific purpose. Data structures make it easy for users to access and work with the data they
need in appropriate ways. Most importantly, data structures frame the organization of
information so that machines and humans can better understand it.

In computer science and computer programming, a data structure may be selected or designed to
store data for the purpose of using it with various algorithms. In some cases, the algorithm's
basic operations are tightly coupled to the data structure's design. Each data structure contains
information about the data values, relationships between the data and -- in some cases --
functions that can be applied to the data.

For instance, in an object-oriented programming language, the data structure and its associated
methods are bound together as part of a class definition. In non-object-oriented languages, there
may be functions defined to work with the data structure, but they are not technically part of the
data structure.

WHY ARE DATA STRUCTURES IMPORTANT?

Typical base data types, such as integers or floating-point values, that are available in most
computer programming languages are generally insufficient to capture the logical intent for data
processing and use. Yet applications that ingest, manipulate and produce information must
understand how data should be organized to simplify processing. Data structures bring together
the data elements in a logical way and facilitate the effective use, persistence and sharing of data.
They provide a formal model that describes the way the data elements are organized.

Data structures are the building blocks for more sophisticated applications. They are designed by
composing data elements into a logical unit representing an abstract data type that has relevance
to the algorithm or application. An example of an abstract data type is a "customer name" that is
composed of the character strings for "first name," "middle name" and "last name."

It is not only important to use data structures, but it is also important to choose the proper data
structure for each task. Choosing an ill-suited data structure could result in slow runtimes or
unresponsive code. Five factors to consider when picking a data structure include the following:

1. What kind of information will be stored?

2. How will that information be used?

3. Where should data persist, or be kept, after it is created?

4. What is the best way to organize the data?

5. What aspects of memory and storage reservation management should be considered?

HOW ARE DATA STRUCTURES USED

In general, data structures are used to implement the physical forms of abstract data types. Data
structures are a crucial part of designing efficient software. They also play a critical role in
algorithm design and how those algorithms are used within computer programs.

Early programming languages -- such as Fortran, C and C++ -- enabled programmers to define
their own data structures. Today, many programming languages include an extensive collection
of built-in data structures to organize code and information. For example, Python lists and
dictionaries, and JavaScript arrays and objects are common coding structures used for storing
and retrieving information.

Software engineers use algorithms that are tightly coupled with data structures -- such as lists,
queues and mappings from one set of values to another. This approach can be used in a variety
of applications, including managing collections of records in a relational database and creating
an index of those records using a data structure called a binary tree.

EXAMPLES OF HOW DATA STRUCTURES ARE USED

Storing data. Data structures are used for efficient data persistence, such as specifying the
collection of attributes and corresponding structures used to store records in a database
management system.

Managing resources and services. Core operating system (OS) resources and services are
enabled through the use of data structures such as linked lists for memory allocation, file
directory management and file structure trees, as well as process scheduling queues.

Data exchange. Data structures define the organization of information shared between
applications, such as TCP/IP packets.

Ordering and sorting. Data structures such as binary search trees -- also known as an ordered or
sorted binary tree -- provide efficient methods of sorting objects, such as character strings used
as tags. With data structures such as priority queues, programmers can manage items organized
according to a specific priority.

Indexing. Even more sophisticated data structures such as B-trees are used to index objects, such
as those stored in a database.

Searching. Indexes created using binary search trees, B-trees or hash tables speed the ability to
find a specific sought-after item.

Scalability. Big data applications use data structures for allocating and managing data storage
across distributed storage locations, ensuring scalability and performance. Certain big data
programming environments -- such as Apache Spark -- provide data structures that mirror the
underlying structure of database records to simplify querying.
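The priority-queue idea mentioned under ordering and sorting can be sketched with Python's heapq module. The tasks and priority numbers below are made up for illustration; lower numbers mean higher priority.

```python
import heapq

# A priority queue: items come out in priority order (lowest number first),
# regardless of the order in which they were pushed.
tasks = []
heapq.heappush(tasks, (3, "write report"))
heapq.heappush(tasks, (1, "fix outage"))
heapq.heappush(tasks, (2, "review code"))

# Pop everything: the heap always yields the smallest priority first.
order = [heapq.heappop(tasks)[1] for _ in range(len(tasks))]
print(order)  # ['fix outage', 'review code', 'write report']
```

The heap keeps items partially ordered as they are inserted, so the highest-priority item is always available in constant time, which is what makes the structure suited to this task.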

CHARACTERISTICS OF DATA STRUCTURES


Data structures are often classified by their characteristics. The following three characteristics
are examples:

1. Linear or non-linear. This characteristic describes whether the data items are arranged
in sequential order, such as with an array, or in an unordered sequence, such as with a
graph.

2. Homogeneous or heterogeneous. This characteristic describes whether all data items in
a given repository are of the same type -- for example, a collection of elements in an
array -- or of various types, such as an abstract data type defined as a structure in C or a
class specification in Java.

3. Static or dynamic. This characteristic describes how the data structures are compiled.
Static data structures have fixed sizes, structures and memory locations at compile time.
Dynamic data structures have sizes, structures and memory locations that can shrink or
expand, depending on the use.

DATA TYPES

If data structures are the building blocks of algorithms and computer programs, the primitive --
or base -- data types are the building blocks of data structures. The typical base data types
include the following:

 Boolean, which stores logical values that are either true or false.

 Integer, which stores a range of mathematical integers -- or counting numbers. Different
sized integers hold a different range of values -- e.g., a signed 8-bit integer holds values
from -128 to 127, and an unsigned 32-bit integer holds values from 0 to 4,294,967,295.

 Floating-point numbers, which store a formulaic representation of real numbers.

 Fixed-point numbers, which are used in some programming languages and hold real
values but are managed as digits to the left and the right of the decimal point.

 Character, which uses symbols from a defined mapping of integer values to symbols.

 Pointers, which are reference values that point to other values.

 String, which is an array of characters followed by a stop code -- usually a "0" value -- or
is managed using a length field that is an integer value.
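The integer ranges quoted above follow directly from the bit width: a signed n-bit integer spans
-2^(n-1) to 2^(n-1) - 1, and an unsigned n-bit integer spans 0 to 2^n - 1. A quick Python sketch
(the helper names are illustrative):

```python
def signed_range(bits):
    # A signed n-bit integer reserves one bit for the sign.
    return -2 ** (bits - 1), 2 ** (bits - 1) - 1

def unsigned_range(bits):
    # An unsigned n-bit integer uses all bits for magnitude.
    return 0, 2 ** bits - 1

print(signed_range(8))     # (-128, 127)
print(unsigned_range(32))  # (0, 4294967295)
```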

BASIC TYPES OF DATA STRUCTURES

As we have discussed above, anything that can store data can be called a data structure; hence
Integer, Float, Boolean, Char etc., are all data structures. They are known as Primitive Data
Structures.

Then we also have some complex data structures, which are used to store large and connected
data. Some examples of abstract data structures are:

 Linked List

 Tree

 Graph

 Stack, Queue etc.

All these data structures allow us to perform different operations on data. We select these data
structures based on which type of operation is required. We will look into these data structures in
more detail in our later lessons.

The data structure type used in a particular situation is determined by the type of operations that
will be required or the kinds of algorithms that will be applied. The various data structure types
include the following:

 Array. An array stores a collection of items at adjoining memory locations. Items that are
the same type are stored together so the position of each element can be calculated or
retrieved easily by an index. Arrays can be fixed or flexible in length.

An array can hold a collection of integers, floating-point numbers, strings or even other arrays.

 Stack. A stack stores a collection of items in the linear order that operations are applied:
last in, first out (LIFO), so the most recently added item is always the first one removed.

 Queue. A queue stores a collection of items like a stack; however, the operation order
can only be first in, first out.

 Linked list. A linked list stores a collection of items in a linear order. Each element, or
node, in a linked list contains a data item, as well as a reference, or link, to the next item
in the list.

Linked list data structures are a set of nodes that contain data and the address or a pointer to the
next node.
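A minimal singly linked list can be sketched in Python as follows (the class and method names
are illustrative): each node holds a data item plus a reference to the next node.

```python
class Node:
    """A single linked-list node: a data item plus a link to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, data):
        """Insert at the front -- constant time, no shifting as in an array."""
        node = Node(data)
        node.next = self.head
        self.head = node

    def to_list(self):
        """Walk the chain of links from the head to the end."""
        items, current = [], self.head
        while current:
            items.append(current.data)
            current = current.next
        return items

lst = LinkedList()
for x in (3, 2, 1):
    lst.prepend(x)
print(lst.to_list())  # [1, 2, 3]
```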

 Tree. A tree stores a collection of items in an abstract, hierarchical way. Each node is
associated with a key value, with parent nodes linked to child nodes -- or subnodes. There
is one root node that is the ancestor of all the nodes in the tree.

A binary search tree is a set of nodes where each has a value and can point to two child nodes.

 Heap. A heap is a tree-based structure in which each parent node's associated key value
is greater than or equal to the key values of any of its children's key values.

 Graph. A graph stores a collection of items in a nonlinear fashion. Graphs are made up
of a finite set of nodes, also known as vertices, and lines that connect them, also known
as edges. These are useful for representing real-world systems such as computer networks.

 Trie. A trie, also known as a keyword tree, is a data structure that stores strings as data
items that can be organized in a visual graph.

 Hash table. A hash table -- also known as a hash map -- stores a collection of items in an
associative array that plots keys to values. A hash table uses a hash function to convert an
index into an array of buckets that contain the desired data item.

Hashing is a data structure technique where key values are converted into indexes of an array
where the data is stored.
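A toy hash table illustrating this technique can be sketched in Python (the bucket count and
names are illustrative; collisions are chained inside a bucket):

```python
class HashTable:
    """Toy hash map: a hash function converts each key to a bucket index."""
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _index(self, key):
        # The key value is converted into an index of the bucket array.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # replace an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # collisions chain in the same bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.put("apple", 3)
table.put("pear", 5)
print(table.get("apple"))  # 3
```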

These are considered complex data structures as they can store large amounts of interconnected
data.

How to choose a data structure

When choosing a data structure for a program or application, developers should consider the
answers to the following three questions:

1. Supported operations. What functions and operations does the program need?

2. Computational complexity. What level of computational performance is tolerable? For


speed, a data structure whose operations execute in time linear to the number of items
managed -- using Big O notation: O(n) -- will be faster than a data structure whose
operations execute in time proportional to the square of the number of items managed --
O(n^2).

3. Programming elegance. Are the organization of the data structure and its functional
interface easy to use?

Some real-world examples include:

 Linked lists are best if a program is managing a collection of items that don't need to be
ordered, constant time is required for adding or removing an item from the collection and
increased search time is OK.

 Stacks are best if the program is managing a collection that needs to support a LIFO
order.

 Queues should be used if the program is managing a collection that needs to support a
FIFO order.

 Binary trees are good for managing a collection of items with a parent-child relationship,
such as a family tree.

 Binary search trees are appropriate for managing a sorted collection where the goal is to
optimize the time it takes to find specific items in the collection.

 Graphs work best if the application will analyze connectivity and relationships among a
collection of individuals in a social media network.

What is Stack Data Structure?

Stack is an abstract data type with a bounded (predefined) capacity. It is a simple data structure
that allows adding and removing elements in a particular order. Every time an element is added,
it goes on the top of the stack and the only element that can be removed is the element that is at
the top of the stack, just like a pile of objects.

Basic features of Stack

1. Stack is an ordered list of elements of the same data type.

2. Stack is a LIFO (Last In, First Out) structure, or equivalently FILO (First In, Last Out).

3. The push() function is used to insert new elements into the Stack and the pop() function
is used to remove an element from the stack. Both insertion and removal are allowed at
only one end of the Stack, called the Top.

4. Stack is said to be in an Overflow state when it is completely full and is said to be in an
Underflow state if it is completely empty.

Applications of Stack

The simplest application of a stack is to reverse a word. You push a given word onto the stack --
letter by letter -- and then pop letters from the stack.

There are other uses also like:

1. Parsing

2. Expression conversion (infix to postfix, postfix to prefix, etc.)

Implementation of Stack Data Structure

Stack can be easily implemented using an Array or a Linked List. Arrays are quick, but are
limited in size; a Linked List requires overhead to allocate, link, unlink, and deallocate, but is
not limited in size. Here we will implement Stack using an array.

Algorithm for PUSH operation

1. Check if the stack is full or not.

2. If the stack is full, then print error of overflow and exit the program.

3. If the stack is not full, then increment the top and add the element.

Algorithm for POP operation

1. Check if the stack is empty or not.

2. If the stack is empty, then print error of underflow and exit the program.

3. If the stack is not empty, then print the element at the top and decrement the top.

Below is a simple program implementing the stack data structure while following
object-oriented programming concepts.
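A minimal array-backed sketch in Python of the push and pop algorithms above (the capacity
and the class and method names are illustrative; a C++ version would follow the same steps):

```python
class Stack:
    """Array-backed stack with a bounded (predefined) capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = [None] * capacity
        self.top = -1                        # -1 means the stack is empty

    def push(self, item):
        if self.top == self.capacity - 1:    # steps 1-2: full? report overflow
            raise OverflowError("stack overflow")
        self.top += 1                        # step 3: increment top, add element
        self.items[self.top] = item

    def pop(self):
        if self.top == -1:                   # steps 1-2: empty? report underflow
            raise IndexError("stack underflow")
        item = self.items[self.top]          # step 3: take the element at the top
        self.top -= 1
        return item

s = Stack(3)
s.push(10)
s.push(20)
print(s.pop())  # 20
```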


Position of Top Status of Stack

-1 Stack is Empty

0 Only one element in Stack

N-1 Stack is Full

N Overflow state of Stack

Analysis of Stack Operations

Below mentioned are the time complexities for various operations that can be performed on the
Stack data structure.

 Push Operation : O(1)

 Pop Operation : O(1)

 Top Operation : O(1)

 Search Operation : O(n)

The time complexities for push() and pop() functions are O(1) because we always have to insert
or remove the data from the top of the stack, which is a one step process.

INTRODUCTION TO ROBOTICS

WHAT IS ROBOTICS?

Robotics is the intersection of science, engineering and technology that produces machines,
called robots, that substitute for (or replicate) human actions. Pop culture has always been
fascinated with robots. R2-D2. Optimus Prime. WALL-E. These over-exaggerated, humanoid
concepts of robots usually seem like a caricature of the real thing...or are they more forward
thinking than we realize? Robots are gaining intellectual and mechanical capabilities that don’t
put the possibility of an R2-D2-like machine out of reach in the future.

WHAT IS A ROBOT?

A robot is the product of the robotics field, where programmable machines are built that can
assist humans or mimic human actions. Robots were originally built to handle monotonous tasks
(like building cars on an assembly line), but have since expanded well beyond their initial uses to
perform tasks like fighting fires, cleaning homes and assisting with incredibly intricate surgeries.
Each robot has a differing level of autonomy, ranging from human-controlled bots that carry out
tasks that a human has full control over to fully-autonomous bots that perform tasks without any
external influences.

As technology progresses, so too does the scope of what is considered robotics. In 2005, 90% of
all robots could be found assembling cars in automotive factories. These robots consist mainly of
mechanical arms tasked with welding or screwing on certain parts of a car. Today, we’re seeing
an evolved and expanded definition of robotics that includes the development, creation and use
of bots that explore Earth’s harshest conditions, robots that assist law-enforcement and
even robots that assist in almost every facet of healthcare.

While the overall world of robotics is expanding, a robot has some consistent characteristics:

1. Robots all consist of some sort of mechanical construction. The mechanical aspect of a
robot helps it complete tasks in the environment for which it’s designed. For example,
the Mars 2020 Rover’s wheels are individually motorized and made of titanium tubing
that help it firmly grip the harsh terrain of the red planet.

2. Robots need electrical components that control and power the machinery. Essentially, an
electric current (a battery, for example) is needed to power a large majority of robots.

3. Robots contain at least some level of computer programming. Without a set of code
telling it what to do, a robot would just be another piece of simple machinery. Inserting a
program into a robot gives it the ability to know when and how to carry out a task.

We’re really bound to see the promise of the robotics industry sooner, rather than later,
as artificial intelligence and software also continue to progress. In the near future, thanks to
advances in these technologies, robots will continue getting smarter, more flexible and more
energy efficient. They’ll also continue to be a main focal point in smart factories, where they’ll
take on more difficult challenges and help to secure global supply chains.

Though relatively young, the robotics industry is filled with an admirable promise of progress
that science fiction could once only dream about. From the deepest depths of our oceans to
thousands of miles in outer space, robots will be found performing tasks that humans couldn’t
dream of achieving alone.

TYPES OF ROBOTS

Mechanical bots come in all shapes and sizes to efficiently carry out the task for which they are
designed. All robots vary in design, functionality and degree of autonomy. From the 0.2
millimeter-long “RoboBee” to the 200 meter-long robotic shipping vessel “Vindskip,” robots are
emerging to carry out tasks that humans simply can’t. Generally, there are five types of robots:

1) Pre-Programmed Robots

Pre-programmed robots operate in a controlled environment where they do simple, monotonous
tasks. An example of a pre-programmed robot would be a mechanical arm on an automotive
assembly line. The arm serves one function -- to weld a door on, to insert a certain part into the
engine, etc. -- and its job is to perform that task longer, faster and more efficiently than a human.

2) Humanoid Robots

Humanoid robots are robots that look like and/or mimic human behavior. These robots usually
perform human-like activities (like running, jumping and carrying objects), and are sometimes
designed to look like us, even having human faces and expressions. Two of the most prominent
examples of humanoid robots are Hanson Robotics’ Sophia (in the video above) and Boston
Dynamics’ Atlas.

3) Autonomous Robots

Autonomous robots operate independently of human operators. These robots are usually
designed to carry out tasks in open environments that do not require human supervision. They
are quite unique because they use sensors to perceive the world around them, and then employ
decision-making structures (usually a computer) to take the optimal next step based on their data
and mission. An example of an autonomous robot would be the Roomba vacuum cleaner, which
uses sensors to roam freely throughout a home.

EXAMPLES OF AUTONOMOUS ROBOTS

 Cleaning Bots (for example, Roomba)


 Lawn Trimming Bots

 Hospitality Bots

 Autonomous Drones

 Medical Assistant Bots

4) Teleoperated Robots

Teleoperated robots are semi-autonomous bots that use a wireless network to enable human
control from a safe distance. These robots usually work in extreme geographical conditions,
weather, circumstances, etc. Examples of teleoperated robots are the human-controlled
submarines used to fix underwater pipe leaks during the BP oil spill or drones used to detect
landmines on a battlefield.

5) Augmenting Robots

Augmenting robots either enhance current human capabilities or replace the capabilities a human
may have lost. Robotics for human augmentation is a field where science fiction
could become reality very soon, with bots that have the ability to redefine what it means to be
human by making humans faster and stronger. Some examples of current augmenting robots
are robotic prosthetic limbs or exoskeletons used to lift hefty weights.

USES OF ROBOTS

Robots have a wide variety of use cases that make them the ideal technology for the future. Soon,
we will see robots almost everywhere. We'll see them in our hospitals, in our hotels and even on
our roads.

APPLICATIONS OF ROBOTICS

 Helping fight forest fires

 Working alongside humans in manufacturing plants (known as co-bots)

 Robots that offer companionship to elderly individuals

 Surgical assistants

 Last-mile package and food order delivery

 Autonomous household robots that carry out tasks like vacuuming and mowing the grass

 Assisting with finding items and carrying them throughout warehouses

 Used during search-and-rescue missions after natural disasters

 Landmine detectors in war zones

Manufacturing

The manufacturing industry is probably the oldest and most well-known user of robots. These
robots and co-bots (bots that work alongside humans) work to efficiently test and assemble
products, like cars and industrial equipment. It’s estimated that there are more than three million
industrial robots in use right now.

Logistics

Shipping, handling and quality control robots are becoming a must-have for most retailers and
logistics companies. Because we now expect our packages arriving at blazing speeds, logistics
companies employ robots in warehouses, and even on the road, to help maximize time efficiency.
Right now, there are robots taking your items off the shelves, transporting them across the
warehouse floor and packaging them. Additionally, a rise in last-mile robots (robots that will
autonomously deliver your package to your door) ensures that you'll have a face-to-metal-face
encounter with a logistics bot in the near future.

Home

It’s not science fiction anymore. Robots can be seen all over our homes, helping with chores,
reminding us of our schedules and even entertaining our kids. The most well-known example of
home robots is the autonomous vacuum cleaner Roomba. Additionally, robots have now evolved
to do everything from autonomously mowing grass to cleaning pools.

Travel

Is there anything more science fiction-like than autonomous vehicles? These self-driving cars are
no longer just imagination. A combination of data science and robotics, self-driving vehicles are
taking the world by storm. Automakers, like Tesla, Ford, Waymo, Volkswagen and BMW are all
working on the next wave of travel that will let us sit back, relax and enjoy the ride. Rideshare
companies Uber and Lyft are also developing autonomous rideshare vehicles that don’t require
humans to operate the vehicle.

Healthcare

Robots have made enormous strides in the healthcare industry. These mechanical marvels have
use in just about every aspect of healthcare, from robot-assisted surgeries to bots that help
humans recover from injury in physical therapy. Examples of robots at work in healthcare
are Toyota’s healthcare assistants, which help people regain the ability to walk, and “TUG,” a
robot designed to autonomously stroll throughout a hospital and deliver everything from
medicines to clean linens.

COMPONENTS OF ROBOTS

Control System

At the most basic level, human beings and other animals survive through a principle called
feedback. Human beings sense what is going on around them and react accordingly. The use of
feedback to control how a machine functions dates back to at least 1745, when English lumber
mill owner Edmund Lee used the principle to improve the function of his wind-powered mill.
Every time the wind changed direction, his workers had to move the windmill to compensate.
Lee added two smaller windmills to the larger one. These smaller windmills powered an axle that
automatically turned the larger one to face the wind.

A robot's control system uses feedback just as the human brain does. However, instead of a
collection of neurons, a robot's brain consists of a silicon chip called a central processing unit, or
CPU, that is similar to the chip that runs your computer. Our brains decide what to do and how to
react to the world based on feedback from our five senses. A robot's CPU does the same thing
based on data collected by devices called sensors.

Sensors

Robots receive feedback from sensors that mimic human senses such as video cameras or
devices called light-dependent resistors that function like eyes or microphones that act as ears.
Some robots even have touch, taste and smell. The robot's CPU interprets signals from these
sensors and adjusts its actions accordingly.

Actuators

To be considered a robot, a device must have a body that it can move in reaction to feedback
from its sensors. Robot bodies consist of metal, plastic and similar materials. Inside these bodies
are small motors called actuators. Actuators mimic the action of human muscle to move parts of
the robot's body. The simplest robots consist of an arm with a tool attached for a particular task.
More advanced robots may move around on wheels or treads. Humanoid robots have arms and
legs that mimic human movement.

Power Supply

In order to function, a robot must have power. Human beings get their energy from food. After
we eat, the food is broken down and converted into energy by our cells. Most robots get their
energy from electricity. Stationary robotic arms like the ones that work in car factories can be
plugged in like any other appliance. Robots that move around are usually powered by batteries.
Our robotic space probes and satellites are often designed to collect solar power.

End Effectors

In order to interact with the environment and carry out assigned tasks, robots are equipped with
tools called end effectors. These vary according to the tasks the robot has been designed to carry
out. For example, robotic factory workers have interchangeable tools such as paint sprayers or
welding torches. Mobile robots such as the probes sent to other planets or bomb disposal robots
often have universal grippers that mimic the function of the human hand.

SINGLE BOARD COMPUTERS

Single board computers are hand-sized computers kids (and anyone) can use to learn about
computer hardware and software. They began as custom boards engineers ordered from a factory,
tweaked to build and test their designs, then ordered in quantity. Some boards are complete
computers, for example, the Pi, Arduino, and BeagleBoard, while other boards are embedded as
part of a system of sensors or attached devices.

Single board computers are still used by engineers. But Raspberry Pi, Arduino, and other boards
have made it possible for anyone to learn how computers work, as well as create fun electronics
projects. These pages display many of the boards available. The magazine site URL below
includes links to these boards.

Raspberry Pi

The idea behind this tiny and affordable computer for kids came in 2006, when Eben Upton, Rob
Mullins, Jack Lang and Alan Mycroft, based at the University of Cambridge’s Computer
Laboratory, became concerned about the year-on-year decline in the numbers and skill levels of
the A Level students in the UK applying to read Computer Science at Cambridge. From a
situation in the 1990s where most of the kids applying were coming to interview as experienced
hobbyist programmers, the landscape in the 2000s was very different; a typical applicant might
only have done a little web design. Unlike game consoles, the technology most kids are familiar
with today, the Raspberry Pi is designed to be a flexible low cost computer with lots of power
and capabilities.

Arduino

Initially created by Massimo Banzi as a design tool for his students at the Interaction Design
Institute Ivrea in Italy, the project was formally started as Arduino by five friends and released in
2005. The board schematics and source code is open source hardware. Anyone can build their
own versions or buy off the shelf versions available online. Arduinos are perfect for all kinds of
electronics projects with sensors, lights, cameras, and other machines.

https://www.arduino.cc/
http://spectrum.ieee.org/geek-life/hands-on/the-making-of-arduino

BeagleBoard

The BeagleBoard was created by several people in the US, including a few employees from
Texas Instruments, a computer chip manufacturer. They wanted to create an open source board
kids and hobbyists could use to create electronics projects easily at a low cost. Another goal is to
reduce or eliminate the distance between computer software and electronics projects which often
require plugs to connect with sensors and other devices.

http://beagleboard.org/

C.H.I.P.

Begun as a Kickstarter project in 2015, this board has an incredible amount of capability packed
into a tiny low cost board. The basic board cost $9 on Kickstarter and included, among many
other things, a 1Ghz processor, 512 MB of RAM, 4GB of hard disk storage, bluetooth, and wifi.
The board also includes a pocket container and a separate video connector. Combined with the
open source software, this board provides lots of flexibility at a low cost.

http://nextthing.co/
https://www.kickstarter.com/projects/1598272670/chip-the-worlds-first-9-computer/description

Cubit

Also begun on Kickstarter in 2015, Cubit is a plug and play board with lots of extra kits. It’s a
mashup of littleBits and a single board computer. Extra kits include servos, sensors,
potentiometers, buzzers, LED strips, and more.

http://qfusionlabs.com/
https://www.kickstarter.com/projects/1762626887/cubit-the-make-anything-platform/description

Intel Edison

While created by Intel in 2014 as a development board for wearable technology, the board is
also compatible with Arduino and includes Linux. There is a development kit and modules, as
well as an online community.

https://www-ssl.intel.com/content/www/us/en/do-it-yourself/edison.html

Onion

Omega is an invention platform for the Internet of Things. It comes WiFi-enabled and supports
most of the popular languages such as Python and Node.

https://onion.io/omega
https://www.kickstarter.com/projects/onion/onion-omega-invention-platform-for-the-internet-of

micro:bit

Designed and distributed by the British Broadcasting Corporation, the micro:bit will be given
away to year 7 UK students in fall 2015. The board is a stepping stone to let kids experiment with
computing before moving to Raspberry Pi and other boards. The name is based on the much
loved BBC Micro computer from the 1980s.

http://www.bbc.co.uk/programmes/articles/4hVG2Br1W1LKCmw8nSm9WnQ/introducing-the-
bbc-micro-bit

Curiosity Development Platform

A highly flexible development board, the Curiosity board is for more advanced projects and
makes a good option when you want to graduate from a Pi, BeagleBoard, or Arduino.

http://www.microchip.com/pagehandler/en-us/family/8bit/devboards/curiosity.html
http://hackaday.com/2015/07/22/review-microchip-curiosity-is-a-gorgeous-new-8-bit-dev-board/

MinnowBoard

The MinnowBoard and MinnowBoard Max are Intel® Atom™ processor based boards which
introduce Intel® Architecture to the small and low cost embedded market for developer and
maker communities. The boards use open source hardware standards.

CONTROL FLOW

Programming languages allow us to express the way that execution components (statements,
expressions, and declarations) are wired together to effect a computation.

There are (at least) seven types of flow:

1. Sequencing (do this THEN this THEN this ...)

2. Selection (if, unless, switch, case, ...)

3. Iteration (for, while, repeat, until, ...)

4. Procedural Abstraction (subroutine call)

5. Recursion (you know what this is)

6. Nondeterminacy (do this OR this OR this... that is, choose one arbitrarily or randomly)

7. Concurrency (do multiple things at the same time)

Execution Components

The three important components are:

 Statements, which are executed

 Expressions, which are evaluated

 Declarations, which are elaborated

Sequencing

Sequencing is the most basic control flow... it refers to doing elaborations, evaluations, or
executions one after another. Not in parallel and not out of order (unless a compiler can
guarantee that such optimizations don’t change the meaning).

Selection

Selection means we do one of several alternatives. Often done with an "if" statement, for
example:
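A sketch in Python (the condition and branches are illustrative): exactly one of the alternatives
is chosen based on the tests.

```python
def classify(n):
    # Selection: exactly one alternative runs.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

print(classify(-5))  # negative
```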

Iteration

Iteration is executing code repeatedly, either for each value in an enumeration, or while some
condition holds. Iteration is normally modelled by a loop construct, or an iterator object, or by
functions with names like forEach.

Loops

There seem to be 5 kinds of loops:

 Loop forever

 Loop n times

 Loop while/until a condition is true

 Loop through a range of numbers, optionally with a step

 Loop through each item in a collection (or each char in a string, or each node in a linked
list...)
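The five kinds can be sketched in Python (the loop bodies are illustrative; Python spells both
"loop forever" and "loop n times" with while and for, since it has no dedicated statements for
them):

```python
total = 0

# Loop n times
for _ in range(3):
    total += 1

# Loop while a condition is true
while total < 10:
    total += 1

# Loop through a range of numbers, with a step
for i in range(0, 10, 2):
    total += i            # adds 0 + 2 + 4 + 6 + 8

# Loop through each item in a collection
for ch in "abc":
    total += 1

# Loop forever (broken out of immediately here, for illustration)
while True:
    break

print(total)  # 33
```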

Looping through a collection

Many languages provide loop constructs that loop through, or enumerate, items in a collection,
such as an array, list, set, string, lines in a file, files in a folder, and so on. (How the language
identifies what kinds of things are iterable is another matter; for now, we’re just looking at the
syntax for the iteration):
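In Python, for example, the same for-statement enumerates many kinds of iterables (the values
below are illustrative):

```python
seen = []
for x in [1, 2, 3]:            # items of a list
    seen.append(x)
for ch in "hi":                # characters of a string
    seen.append(ch)
for key in {"a": 1, "b": 2}:   # keys of a dictionary (insertion order)
    seen.append(key)
print(seen)  # [1, 2, 3, 'h', 'i', 'a', 'b']
```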

Looping through a collection with indexes

Sometimes you need the index, in addition to the value, within a collection. Some languages
offer a function or method that, when applied to a collection, yields index-value pairs:
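Python's enumerate is one such function (in Go, the analogous construct is the range clause,
for i, v := range xs):

```python
langs = ["Go", "Lua", "Ruby"]
pairs = []
for index, value in enumerate(langs):   # yields (index, value) pairs
    pairs.append((index, value))
print(pairs)  # [(0, 'Go'), (1, 'Lua'), (2, 'Ruby')]
```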

What’s interesting about Go and Lua is that there is no way to iterate only over the values. To get
just the values, you need to explicitly ignore the index! It’s common to do this by naming the
index variable _ and just not touching it.

For-loops are not the only way to do this.

You might wonder how to get the index and value together in, say, JavaScript and Ruby.
Generally, you actually don’t use for-loops for these kinds of things. Instead, iterating with
“each”-functions is the way to go. We’ll see them very soon.

Of course, there are languages in which you have to use the index only and then grab the value
by subscripting:
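A sketch of that index-only style in Python, grabbing each value by subscripting:

```python
xs = ["a", "b", "c"]
values = []
for i in range(len(xs)):   # iterate over indexes only...
    values.append(xs[i])   # ...and subscript to get the value
print(values)  # ['a', 'b', 'c']
```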

Iterator Objects

An iterator is an object that keeps track of where you are during an iteration. You don’t have to
use them in a loop! You just call methods such as hasNext and next (or in some
languages, begin, end, and ++).

The use of explicit iterators opens the possibility that your underlying collection changes while
your iterator is still active. In some languages, this may cause your program to just crash; in
others, doing this will trigger a "concurrent modification exception."
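A sketch using Python's iterator protocol, where iter obtains an iterator object and next
advances it (Python signals exhaustion with a StopIteration exception rather than offering a
hasNext method):

```python
it = iter([10, 20, 30])    # the iterator keeps track of where we are
print(next(it))  # 10
print(next(it))  # 20
print(next(it))  # 30
try:
    next(it)               # exhausted: StopIteration signals the end
except StopIteration:
    print("done")
```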

Recursion

A recursive subroutine calls itself. Recursion is

 As powerful as iteration

 Often more natural than iteration when doing functional programming

 Often less natural than iteration when doing imperative programming

 Not allowed in some languages (e.g., pre-1990 Fortran)

Recursive code is

 Usually shorter than its iterative equivalent

 Usually easier to prove correct than its iterative equivalent

 Sometimes easy to misuse, resulting in horrid code

In functional programming the programmer will sometimes have to turn a non-tail-recursive
function into a tail-recursive one. The idea is to "pass along" arguments (like a counter or partial
result) "into" the next call. So factorial can be written:
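A sketch in Python: the partial result is passed along into the next call, so the recursive call is
the last thing the function does. (Note that Python itself does not optimize tail calls; the shape
of the definition is what matters here.)

```python
def factorial(n, accumulator=1):
    # The partial result is "passed along" so the recursive call
    # is in tail position -- nothing remains to do after it returns.
    if n <= 1:
        return accumulator
    return factorial(n - 1, n * accumulator)

print(factorial(5))  # 120
```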

Nondeterminacy

Nondeterministic control flow occurs when the next computation step is chosen arbitrarily (not
necessarily randomly) from a set of alternatives.

Generally each of the arms is guarded. To execute a select statement, first the guards are
evaluated, and then a choice is made among the open alternatives (those with true guards). A
missing guard is assumed to be true.

What if all guards are false? In Ada, this raises an exception. In other languages, the statement
simply has no effect.
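A sketch of guarded alternatives in Python (illustrative, in the style of guarded commands:
each arm pairs a boolean guard with an action, one open alternative is chosen at random, and
an empty open set raises an exception, as in Ada):

```python
import random

def select(arms):
    """arms: list of (guard, action) pairs; guards are booleans."""
    open_arms = [action for guard, action in arms if guard]
    if not open_arms:
        raise RuntimeError("no open alternatives")  # Ada raises here
    return random.choice(open_arms)()  # choose one open arm and run it

x = 7
result = select([
    (x > 0, lambda: "positive arm"),
    (x < 0, lambda: "negative arm"),
    (True,  lambda: "always-open arm"),   # a missing guard counts as true
])
print(result)  # one of: "positive arm" or "always-open arm"
```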

Nondeterministic constructs turn out to be very useful in concurrent programming, because
arbitrary execution paths sometimes help to avoid deadlock.

Concurrency

Concurrency means multiple computations are happening at the same time. Concurrency is
everywhere in modern programming, whether we like it or not:

 Multiple computers in a network

 Multiple applications running on one computer

 Multiple processors in a computer (today, often multiple processor cores on a single chip)

In fact, concurrency is essential in modern programming:

 Web sites must handle multiple simultaneous users.

 Mobile apps need to do some of their processing on servers (“in the cloud”).

 Graphical user interfaces almost always require background work that does not interrupt
the user. For example, Eclipse compiles your Java code while you’re still editing it.

Being able to program with concurrency will still be important in the future. Processor clock
speeds are no longer increasing. Instead, we’re getting more cores with each new generation of
chips. So in the future, in order to get a computation to run faster, we’ll have to split up a
computation into concurrent pieces.

Two Models for Concurrent Programming

There are two common models for concurrent programming: shared memory and message
passing.

Shared memory. In the shared memory model of concurrency, concurrent modules interact by
reading and writing shared objects in memory.

Examples of the shared-memory model, where A and B are two concurrent modules:

 A and B might be two processors (or processor cores) in the same computer, sharing the
same physical memory.

 A and B might be two programs running on the same computer, sharing a common
filesystem with files they can read and write.

 A and B might be two threads in the same Java program (we’ll explain what a thread is
below), sharing the same Java objects.

Message passing. In the message-passing model, concurrent modules interact by sending messages to each other through a communication channel. Modules send off messages, and incoming messages to each module are queued up for handling. Examples include:

 A and B might be two computers in a network, communicating by network connections.

 A and B might be a web browser and a web server – A opens a connection to B, asks for
a web page, and B sends the web page data back to A.

 A and B might be an instant messaging client and server.

 A and B might be two programs running on the same computer whose input and output
have been connected by a pipe, like ls | grep typed into a command prompt.

Processes, Threads, Time-slicing

The message-passing and shared-memory models are about how concurrent modules
communicate. The concurrent modules themselves come in two different kinds: processes and
threads.

Process. A process is an instance of a running program that is isolated from other processes on
the same machine. In particular, it has its own private section of the machine’s memory.

The process abstraction is a virtual computer. It makes the program feel like it has the entire
machine to itself – like a fresh computer has been created, with fresh memory, just to run that
program.

Just like computers connected across a network, processes normally share no memory between
them. A process can’t access another process’s memory or objects at all. Sharing memory
between processes is possible on most operating systems, but it needs special effort. By contrast, a
new process is automatically ready for message passing, because it is created with standard input
& output streams, which are the System.out and System.in streams you’ve used in Java.
Thread. A thread is a locus of control inside a running program. Think of it as a place in the program that is being run, plus the stack of method calls that led to that place, through which the thread will need to return.

Just as a process represents a virtual computer, the thread abstraction represents a virtual
processor. Making a new thread simulates making a fresh processor inside the virtual computer
represented by the process. This new virtual processor runs the same program and shares the
same memory as other threads in the process.

Threads are automatically ready for shared memory, because threads share all the memory in the
process. It needs special effort to get “thread-local” memory that’s private to a single thread. It’s
also necessary to set up message-passing explicitly, by creating and using queue data structures.
We’ll talk about how to do that in a future reading.

How can I have many concurrent threads with only one or two processors in my computer?
When there are more threads than processors, concurrency is simulated by time slicing, which
means that the processor switches between threads. The figure on the right shows how three
threads T1, T2, and T3 might be time-sliced on a machine that has only two actual processors. In
the figure, time proceeds downward, so at first one processor is running thread T1 and the other
is running thread T2, and then the second processor switches to run thread T3. Thread T2 simply
pauses, until its next time slice on the same processor or another processor.

On most systems, time slicing happens unpredictably and nondeterministically, meaning that a
thread may be paused or resumed at any time.

In the Java Tutorials, read:

 Processes & Threads (just 1 page)

 Defining and Starting a Thread (just 1 page)
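As a minimal illustration of defining and starting a thread in Java, a Runnable is passed to the Thread constructor and the thread is launched with start() (this sketch is not from the original text):

```java
public class HelloThread {
    // Create a thread from a Runnable, start it, and wait for it to finish.
    static String runAndReport() throws InterruptedException {
        StringBuilder message = new StringBuilder();
        Thread t = new Thread(() ->
            message.append("hello from ").append(Thread.currentThread().getName()));
        t.start();   // start() spawns the new thread; calling run() directly would not
        t.join();    // block until the thread terminates
        return message.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndReport());
    }
}
```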


Shared Memory Example

Let’s look at an example of a shared memory system. The point of this example is to show that
concurrent programming is hard, because it can have subtle bugs.

Imagine that a bank has cash machines that use a shared memory model, so all the cash machines
can read and write the same account objects in memory.

Interleaving

Here’s one thing that can happen. Suppose two cash machines, A and B, are both working on a
deposit at the same time. Here’s how the deposit() step typically breaks down into low-level
processor instructions:

get balance (balance=0)

add 1

write back the result (balance=1)

When A and B are running concurrently, these low-level instructions interleave with each other (some might even be simultaneous in some sense, but let's just worry about interleaving for now). If all of A's steps run before any of B's (or vice versa), the interleaving is fine – we end up with balance 2, so both A and B successfully put in a dollar. But what if A and B each read the balance before either writes back?

The balance is now 1 – A's dollar was lost! A and B both read the balance at the same time, computed separate final balances, and then raced to store back the new balance – which failed to take the other's deposit into account.
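The bad interleaving can be simulated deterministically in Java by spelling out each machine's low-level steps against a shared balance (a sketch of the scenario, not the original example code):

```java
public class LostUpdate {
    // Simulates the bad interleaving step by step: both machines read the
    // balance before either writes back, so one deposit is lost.
    static int simulateBadInterleaving() {
        int balance = 0;
        int aRead = balance;      // A: get balance (0)
        int bRead = balance;      // B: get balance (0)
        int aResult = aRead + 1;  // A: add 1
        int bResult = bRead + 1;  // B: add 1
        balance = aResult;        // A: write back the result (balance = 1)
        balance = bResult;        // B: write back the result (balance = 1) -- A's dollar is lost
        return balance;
    }

    public static void main(String[] args) {
        System.out.println("final balance: " + simulateBadInterleaving()); // 1, not 2
    }
}
```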

Race Condition

This is an example of a race condition. A race condition means that the correctness of the
program (the satisfaction of postconditions and invariants) depends on the relative timing of
events in concurrent computations A and B. When this happens, we say “A is in a race with B.”

Some interleavings of events may be OK, in the sense that they are consistent with what a single,
nonconcurrent process would produce, but other interleavings produce wrong answers –
violating postconditions or invariants.

You can't tell just from looking at Java code how the processor is going to execute it. You can't tell what the indivisible operations – the atomic operations – will be. An operation isn't atomic just because it's one line of Java. It doesn't touch balance only once just because the balance identifier occurs only once in the line. The Java compiler, and in fact the processor itself, makes no commitments about what low-level operations it will generate from your code. In fact, a typical modern Java compiler produces exactly the same code for equivalent expressions such as balance = balance + 1, balance += 1, and ++balance!

The key lesson is that you can’t tell by looking at an expression whether it will be safe from race
conditions.

Reordering

It’s even worse than that, in fact. The race condition on the bank account balance can be
explained in terms of different interleavings of sequential operations on different processors. But
in fact, when you’re using multiple variables and multiple processors, you can’t even count on
changes to those variables appearing in the same order.

We have two methods that are being run in different threads. computeAnswer does a long
calculation, finally coming up with the answer 42, which it puts in the answer variable. Then it
sets the ready variable to true, in order to signal to the method running in the other
thread, useAnswer, that the answer is ready for it to use. Looking at the code, answer is set
before ready is set, so once useAnswer sees ready as true, then it seems reasonable that it can
assume that the answer will be 42, right? Not so.

The problem is that modern compilers and processors do a lot of things to make the code fast.
One of those things is making temporary copies of variables like answer and ready in faster
storage (registers or caches on a processor), and working with them temporarily before
eventually storing them back to their official location in memory. The storeback may occur in a
different order than the variables were manipulated in your code. Here’s what might be going on
under the covers (but expressed in Java syntax to make it clear). The processor is effectively
creating two temporary variables, tmpr and tmpa, to manipulate the fields ready and answer:
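The listing the passage refers to is missing from this copy; the following reconstruction follows the passage's description, using the tmpr and tmpa temporaries it names (the surrounding class is an assumption):

```java
public class Reordering {
    static boolean ready = false;
    static int answer = 0;

    // What the programmer wrote (in the computeAnswer thread):
    //     answer = 42;
    //     ready = true;
    //
    // What the processor may effectively do with temporary copies:
    static void computeAnswerUnderTheCovers() {
        boolean tmpr = ready;   // temporary copy of ready (a register or cache entry)
        int tmpa = answer;      // temporary copy of answer
        tmpa = 42;
        tmpr = true;
        ready = tmpr;           // the store-back may happen in this order...
        answer = tmpa;          // ...so another thread can see ready == true while answer is still 0
    }

    public static void main(String[] args) {
        computeAnswerUnderTheCovers();
        System.out.println("answer=" + answer + " ready=" + ready);
    }
}
```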


Message Passing Example

Now let’s look at the message-passing approach to our bank account example.

Now not only are the cash machines modules; the accounts are modules, too. Modules interact
by sending messages to each other. Incoming requests are placed in a queue to be handled one at
a time. The sender doesn’t stop working while waiting for an answer to its request. It handles
more requests from its own queue. The reply to its request eventually comes back as another
message.

Unfortunately, message passing doesn’t eliminate the possibility of race conditions. Suppose
each account supports get-balance and withdraw operations, with corresponding messages. Two
users, at cash machine A and B, are both trying to withdraw a dollar from the same account.
They check the balance first to make sure they never withdraw more than the account holds,
because overdrafts trigger big bank penalties:

get-balance

if balance >= 1 then withdraw 1

The problem is again interleaving, but this time interleaving of the messages sent to the bank
account, rather than the instructions executed by A and B. If the account starts with a dollar in it,
then what interleaving of messages will fool A and B into thinking they can both withdraw a
dollar, thereby overdrawing the account?

One lesson here is that you need to carefully choose the operations of a message-passing
model. withdraw-if-sufficient-funds would be a better operation than just withdraw.
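A sketch of the better operation in Java (a hypothetical Account class; a real message-passing design would handle withdraw-if-sufficient-funds as a single message on the account's queue, for which the synchronized method here is the simplest stand-in):

```java
public class Account {
    private int balance;

    public Account(int initialBalance) { balance = initialBalance; }

    public synchronized int getBalance() { return balance; }

    // Unsafe as two separate messages: a get-balance check followed by a
    // separate withdraw leaves a window for another machine's withdraw in between.
    public synchronized void withdraw(int amount) { balance -= amount; }

    // Safe as one message: the check and the withdrawal are handled together,
    // so no other request can interleave between them.
    public synchronized boolean withdrawIfSufficientFunds(int amount) {
        if (balance >= amount) {
            balance -= amount;
            return true;
        }
        return false;
    }
}
```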

Concurrency is Hard to Test and Debug

If we haven’t persuaded you that concurrency is tricky, here’s the worst of it. It’s very hard to
discover race conditions using testing. And even once a test has found a bug, it may be very hard
to localize it to the part of the program causing it.

Concurrency bugs exhibit very poor reproducibility. It’s hard to make them happen the same
way twice. Interleaving of instructions or messages depends on the relative timing of events that
are strongly influenced by the environment. Delays can be caused by other running programs,
other network traffic, operating system scheduling decisions, variations in processor clock speed,
etc. Each time you run a program containing a race condition, you may get different behavior.

These kinds of bugs are heisenbugs, which are nondeterministic and hard to reproduce, as
opposed to a “bohrbug”, which shows up repeatedly whenever you look at it. Almost all bugs in
sequential programming are bohrbugs.

A heisenbug may even disappear when you try to look at it with println or a debugger! The reason
is that printing and debugging are so much slower than other operations, often 100-1000x slower,
that they dramatically change the timing of operations, and the interleaving. So inserting a simple
print statement into the cashMachine():
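The code sample is missing from this copy; a runnable sketch of the scenario (the class, field, and method names are reconstructions following the passage, not the original code):

```java
public class CashMachineDemo {
    static final int TRANSACTIONS_PER_MACHINE = 1000;
    static int balance = 0;

    static void deposit()  { balance = balance + 1; }  // not atomic: read, add, write back
    static void withdraw() { balance = balance - 1; }

    // One cash machine: repeatedly deposit and withdraw a dollar.
    static Thread cashMachine() {
        Thread t = new Thread(() -> {
            for (int i = 0; i < TRANSACTIONS_PER_MACHINE; i++) {
                deposit();
                withdraw();
                // Uncommenting the next line often masks the race: printing is so
                // slow that the threads rarely interleave inside deposit/withdraw.
                // System.out.println(balance);
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = cashMachine(), b = cashMachine();
        a.join();
        b.join();
        System.out.println("final balance: " + balance); // should be 0, but often isn't
    }
}
```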

…and suddenly the balance is always 0, as desired, and the bug appears to disappear. But it’s
only masked, not truly fixed. A change in timing somewhere else in the program may suddenly
make the bug come back.

Concurrency is hard to get right. Part of the point of this reading is to scare you a bit. Over the
next several readings, we’ll see principled ways to design concurrent programs so that they are
safer from these kinds of bugs.

SYSTEMS DEPLOYMENT

What is Software Deployment?

Software deployment includes all of the steps, processes, and activities that are required to make
a software system or update available to its intended users. Today, most IT organizations and
software developers deploy software updates, patches and new applications with a combination
of manual and automated processes. Some of the most common activities of software
deployment include software release, installation, testing, deployment, and performance
monitoring.

Software Deployment Evolution

Software development teams have innovated heavily over the past two decades, creating new
paradigms and working methods for software delivery that are designed to meet the changing
demands of consumers in an increasingly connected world. In particular, software developers
have created workflows that enable faster and more frequent deployment of software updates to
the production environment where they can be accessed by users.

Cloud Software Deployment

While many development teams still choose to host applications using on-premises IT
infrastructure, cloud service providers like Amazon Web Services (AWS), Google Cloud
Platform and Microsoft Azure now offer IT Infrastructure-as-a-Service (IaaS) and Platform-as-a-
Service (PaaS) products that help developers deploy applications into live environments without
the additional financial and administrative burden of managing their own storage and
virtualization servers.

THE IMPORTANCE OF SOFTWARE DEPLOYMENT

Software deployment is one of the most important aspects of the software development process.
Deployment is the mechanism through which applications, modules, updates, and patches are
delivered from developers to users. The methods used by developers to build, test and deploy
new code will impact how fast a product can respond to changes in customer preferences or
requirements and the quality of each change.

Software development teams that streamline the process of building, testing and deploying new
code can respond more quickly to customer demand with new updates and deliver new features
more frequently to drive customer satisfaction, satisfy user needs and take advantage of
economic opportunities.

Software Deployment vs Software Release - What's the Difference?

For the uninitiated, software deployment and software release may sound like very much the
same thing. In fact, these terms describe two separate aspects of the overall software deployment
process that should be understood separately.

The Software Release Process Defined

The software release cycle refers to the stages of development for a piece of computer software,
whether it is released as a piece of physical media, online, or as a web-based application (SaaS).
When a software development team prepares a new software release, it typically includes a
specific version of the code and associated resources that have been assigned a version number.
When the code is updated or modified with bug fixes, a new version of the code may be
packaged with supporting resources and assigned a new release number. Versioning new
software releases in this way helps to differentiate between different versions and identify the
most up-to-date software release.

The Software Deployment Process Defined

Software deployment refers to the process of running an application on a server or device. A software update or application may be deployed to a test server, a testing machine, or into the
live environment, and it may be deployed several times during the development process to verify
its proper functioning and check for errors. Another example of software deployment could be
when a user downloads a mobile application from the App Store and installs it onto their mobile
device.

To summarize, a software release is a specific version of code and its dependencies that is made available for deployment. Software deployment refers to the process of making the
application work on a target device, whether it be a test server, production environment or a
user's computer or mobile device.

Software Deployment Methodologies

DevOps is a methodology and a set of best practices for software development whose primary
goals are to shorten delivery times for new software updates while maintaining high quality. In
the DevOps framework, there are seven steps in the software development process:

1. Coding

2. Building

3. Testing

4. Packaging

5. Releasing

6. Configuring

7. Monitoring

Software deployment falls into the software releasing step and includes activities such as release
coordination, deploying and promoting applications, back-ups & recovery and scheduled or
timed releases. DevOps especially emphasizes the use of automation to streamline the software
deployment process. DevOps usually incorporates a framework known as Continuous
Integration (CI) where new code is integrated into a shared repository by working teams on a
regular basis, sometimes even several times per day. Newly integrated code can be tested
through an automated build process to support early bug detection and removal, helping to
ensure that releases contain only quality code with few or no errors.

Continuous Deployment (CD) describes a software release strategy where new code passes
through a battery of automated tests before being automatically released into the production
environment where users can interact with it. Continuous deployment works best for software
development teams that have invested heavily in automated testing that helps ensure new code is
production-ready as it is developed.

Frequent integrations of new code and automated testing are crucial to effective continuous
deployment. Developers that use CD also depend on real-time monitoring to help detect
performance and operational issues once code has been deployed to the live environment.

What is the Software Deployment Process?

Every organization must develop its own process for software deployment, either basing it on an
existing framework of best practices or customizing a process that meets relevant business
objectives. Software deployment can be summarized in three general phases: preparation, testing
and the deployment itself.

Preparation

In the preparation stage, developers must gather all of the code that will be deployed along with
any other libraries, configuration files, or resources needed for the application to function.
Together, these items can be packaged as a single software release. Developers should also
verify that the host server is correctly configured and running smoothly.

Testing

Before an update can be pushed to the live environment, it should be deployed to a test server
where it can be subjected to a pre-configured set of automated tests. Developers should review
the results and correct any bugs or errors before deploying the update to the live environment.

Deployment

Once an update has been fully tested, it can be deployed to the live environment. Developers
may run a set of scripts to update relevant databases before changes can go live. The final step is
to check for bugs or errors that occur on the live server to ensure the best possible customer
experience for users interacting with the new update.

Monitor and Secure Your Software Deployment with Sumo Logic

Sumo Logic provides the network monitoring and security capabilities that software developers
and IT organizations need to verify and ensure the correct functioning of newly deployed
software applications. With Sumo Logic, developers can gather real-time operational and
performance data from new software deployments, streamlining the detection and correction of
errors before they negatively impact users.

Complete visibility for DevSecOps

Reduce downtime and move from reactive to proactive monitoring.

System Deployment

1. Develop Operational Support Plan

The purpose of an Operational Support Plan is to ensure that the developed system will be:

 Properly integrated into the IT Service Catalog and ITS Service Management process

 Adequately supported by ITS as a production system on an ongoing basis, in line with an agreed client Service Level Agreement (SLA) and Operating Level Agreements (OLA)

The activities in this stage assume that no Operational Support Plan exists and that it will have to
be created for the system. If the system under consideration is already well established as an ITS
service with an assigned Service Team then most of the Operational Support needs should
already be in place. In this case the Project Team simply needs to work with the Service Team to
consider any changes in these arrangements that need to be made due to the development project.

 Document System Operational Support Needs

The team should define the scope of the Operational Support Plan by identifying all key
resources, operating activities and tasks that will be required to support the system. Example
components are:

 Hours of operation and scheduled maintenance windows (This should be integrated with
the ITS Maintenance Calendar)
 System Backup schedule
 System environmental considerations (physical, hardware, operating environments,
facilities, networks and platforms)
 Relation to other systems and services (Is the system a standalone system, a subsystem of
a larger system or a replacement of an existing system?)
 System and network monitoring needs (performance measurements, statistics, and system
logs)
 Documentation of Operating Procedures
 Acquisition and storage of consumable supplies (e.g., paper, toner, tapes, removable disks)
 Physical security needs, designated personnel access
 Disaster recovery procedures
 Software and Hardware licensing and maintenance agreement information, client and
vendor contacts

 Identify Operational Support Team

The roles (responsibilities and competencies) and functional groups responsible for ongoing
support of the system should be identified.

 Assign Operational Support Roles & Duties

Specific personnel should be formally assigned to the identified Operational Support Team roles
and provided with the necessary orientation to perform their support role. The Service Team
roster should be updated to reflect this information.
The assigned Operational Support team will then begin to create the components identified
in Document System Operational Support Needs above.

 Finalize Service Level Agreements (SLA)

The Project Manager should work with the Account Manager, Project Sponsor and IT Service
Management to develop a formal Service Level Agreement or SLA for the system. The IT
Service Development process is a separate but related process. This activity represents a key
integration point between these processes but for more details of SLA standards and
development please refer to the ITS Web site.

 Document Operational Support Plan

The Project Manager should ensure that the various components of the Operational Support Plan
are documented, compiled and provided to the designated IT Service Manager.

 Prepare Help Desk for System Deployment

Final arrangement and preparations should be made with the IT Help Desk to prepare them for
System Deployment.

2. Develop Legacy Retirement Plan

 Assess Legacy Systems Impact

In most cases the impact of the new system on any legacy systems it will replace will have been
identified earlier in the design phase. This analysis needs to be documented in this activity in
preparation for the development of a legacy retirement plan.

 Build Legacy Retirement Plan

The team should build a legacy retirement plan that phases out the old system as the new
solution is implemented and adopted. In some cases this may include a “parallel running” stage
in which both systems, old and new, are operated concurrently until final user acceptance.

3. Develop System Deployment Plan

The System Deployment Plan is a holistic implementation plan that considers the people,
processes and technology that need to be in place for the system to be successfully installed,
adopted by the user community, and the benefits of the system to be realized.

 Assess Organizational Change Management (OCM) Impact

The implementation of every major system will require some change in the behavior and practice
of the organization to be successful.

Depending on the nature and scale of the system being implemented the Project Team may
encounter significant organizational resistance to change. If this “soft side” of implementation is
not properly addressed the project will fail to achieve its goals. It is beyond the scope of this
SDLC methodology to fully address the implications of Organizational Change Management
(OCM) but at the very least the team should assess impact of the new system in terms of new
tasks, changes in procedures, policies and processes, new skills and competencies required, and
new or modified job roles on the user organization. This analysis should then be used to shape
the development of the deployment approach, training and communications for the system
implementation and to work with the Project Sponsor in rolling out the system.

It should also be noted, that although this activity is positioned in the deployment phase of the
SDLC this OCM analysis and conversation with the Project Sponsor should be begun as early as
possible in the SDLC process.

 Define Training Approach

The training approach should be determined in terms of audiences (Users, IT Help Desk, etc.),
learning objectives, training modes (personal coaching, self-study, classroom, Computer based
Training, On-line, etc.), use of specialist instructional designers and trainers, and sources of
training e.g. vendor classes etc.

 Design Training Curriculum

The Project Manager or a specialist Trainer/Instructional Designer will identify, design and
source the training offerings required to support the various users of the system. These Users
may be End Users, Sponsors, ITS support staff etc.

 Develop Training Materials

Training materials are developed or acquired. It is recommended that the training materials also
be reviewed by the ITS Help Desk staff. This will enable the Help Desk staff to gain advance
knowledge of the capability and support needs of the system before the Go Live date.

 Develop Communication Plan

If the system to be implemented is complex, effective communication is a critical success factor for the deployment process. Using the insight gained from the OCM analysis, the Project Manager should work with the Project Sponsor to develop a communication plan that identifies and addresses:

 Target stakeholder groups and individuals and their communication needs (e.g. awareness,
sponsorship, etc.)
 Communication roles of the Project Sponsor and the project team
 Communication channels (Regular staff meetings, webinars, Town Halls, etc.)
 Messaging and development of communications collateral
 Timing and sequence of communication events
 Creation of any additional system user feedback process beyond the normal IT Incident
Management process

 Build System Deployment Plan

All the previously identified aspects of the System Deployment Plan should be integrated into a consolidated deployment project plan with appropriate tasks, roles and milestones. This plan should encompass
all the major activities of the deployment phase through to system “Go Live”.

4. Conduct Operational Readiness Review (ORR)

Operational Readiness refers to the explicit acknowledgement that all the necessary requirements
for production support are in place for the system and that the user system steward and the ITS
Operations team accept responsibility for the system in production. An Operational Readiness
Review must be initiated by the project manager to conform to the ITS Operational Readiness
Review process. The project manager is responsible to follow the process through to operational
readiness sign off by the external contributors and primary stakeholders. The primary work
products are the ORR Review Packet and the Summary Assessment.

 Collect ORR Prerequisites and Inputs

The Project Manager should consult with the IT Operations team, impacted IT Functional
managers, and the Project Sponsor to identify, assess and summarize all outstanding issues,
implementation concerns and unfinished project work that could impede the transition of the
system into production.

 Prepare ORR Review Packet

An ORR Review Packet is created for distribution to the IT Functional Group leads. This review
packet provides an overview of the system and its operating requirements.

 Review ORR Packet and Complete Checklist

Each Functional Group Lead uses the information in the ORR Review Packet to create a
specific ORR Checklist for their group area and develop any required risk assessment and
mitigation plans for system cutover.

 Develop Risk Assessment & Mitigation Plan

The Project Manager compiles the input from the Functional Group Leads into an overall Risk
Assessment and Mitigation Plan.

 Review Assessments & Checklist with Authors

The Project manager schedules the ORR meeting and reviews and compiles the ORR Checklists
from each Functional Group Leader.

 Conduct ORR Meeting

The ORR meeting is held.

 Prepare Summary Assessment

The Project Manager creates a Summary Assessment report as a result of the ORR Meeting. This
document is then shared with the external contributors and primary stakeholders to inform their
decision to proceed or reschedule based on the outcome of the ORR Meeting.

5. Deploy Solution

 Notify Change Management

The Change Coordinator must be notified of the Go Live timing at least two weeks in advance.

 Stage Solution into Production Environment

The system is migrated into production.

 Deliver End User Training

The training plan is executed.

 Go Live

The Project Manager should confirm the roles and responsibilities of all project team members,
Project Sponsor and Key Users for orchestrating the system go live event. The ITS Help Desk
should be also represented in this team. Team members are responsible for completing their
assigned tasks and reporting their status at the Go Live meeting.

A final Go Live checklist should be developed and a Go Live Checklist Review meeting
scheduled.

The Project Manager and Project Sponsor chair the Go Live meeting. The goal of the meeting is
to determine that:

o The Client is ready

o The Users are ready

o The Implementation and Operational technicians are ready

o The system is ready

o An outage schedule has been negotiated and/or an RFC has been submitted/approved through the Change Management Process

The Go Live checklist is reviewed and action items assigned for follow-up. The Project Manager
will oversee the completion of all outstanding Go Live Checklist issues.
After resolution of all checklist issues, the system is put into production. The Project Manager must then notify Change Management that the system is now live. This is the final notification
that deployment has been completed, or that the System (full or portions of) failed and was
backed out of the Production environment.

