Bus Reservation System Final Report
INTRODUCTION
The main objective of this project is to improve work efficiency, security, accuracy,
reliability, and feasibility. Errors can be reduced to nearly nil, working conditions can be
improved, and the system provides the following features:
Online Bus Ticket Booking Facility and Cancellation Facility.
CHAPTER 2
A feasibility study is an evaluation and analysis of the potential of the proposed project,
based on extensive investigation and research, to give full comfort to the decision
makers. Feasibility studies aim to objectively and rationally uncover the strengths and
weaknesses of an existing business or proposed venture, the opportunities and threats
presented by the environment, the resources required to carry it through, and ultimately the
prospects for success. In its simplest terms, the two criteria used to judge feasibility
are the cost required and the value to be attained.
A feasibility study typically covers:
A brief description of the business, to assess the factors that could affect the study
The part of the business being examined
Technical feasibility: at this level, the concern is whether the proposal is both technically
and legally feasible (assuming moderate cost).
Operational feasibility is a measure of how well a proposed system solves the problems and
takes advantage of the opportunities identified during scope definition, and how well it
satisfies the requirements identified in the requirements-analysis phase of system development.
The operational feasibility assessment focuses on the degree to which the proposed
development project fits in with the existing business environment and objectives with
regard to development schedule, delivery date, corporate culture, and existing business
processes.
The purpose of the economic feasibility assessment is to determine the positive economic
benefits to the organization that the proposed system will provide. It includes quantification
and identification of all the benefits expected. This assessment typically involves a
cost/benefit analysis.
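The cost/benefit analysis described above can be sketched as a simple payback-period calculation. This is an illustrative Python sketch; all cost and benefit figures below are hypothetical placeholders, not figures from this project:

```python
# Illustrative economic-feasibility sketch: how long until the quantified
# benefits cover the initial cost? All figures are hypothetical.

def payback_period(initial_cost, annual_benefit, annual_running_cost):
    """Return the years until cumulative net benefit covers the initial cost,
    or None if the project never pays for itself."""
    net_per_year = annual_benefit - annual_running_cost
    if net_per_year <= 0:
        return None  # benefits never exceed running costs
    return initial_cost / net_per_year

# Hypothetical figures for a small reservation system
years = payback_period(initial_cost=50000,
                       annual_benefit=30000,
                       annual_running_cost=10000)
print(f"Payback period: {years:.1f} years")  # 50000 / 20000 = 2.5
```

A full assessment would also discount future benefits, but the payback period is the simplest of the parameters on which financial viability is judged.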
In the case of a new project, financial viability can be judged on parameters such as:
Time delays.
The concept is further developed to describe how the business will operate once the
approved system is implemented, and to assess how the system will impact administrator
and user privacy. To ensure the products and/or services provide the required capability
on time and within budget, project resources, activities, schedules, tools, and reviews are
defined. Additionally, security certification and accreditation activities begin with the
identification of system security requirements and the completion of a high-level
vulnerability assessment.
PROJECT PLANNING
Establishing objectives
Defining the project
Determining resources for the project
Major tasks in the project
Activities
The Data Flow Diagram (DFD) is a graphical representation of the processes
and the flow of data among them. A data flow diagram illustrates the processes, data
stores, external entities and the connecting data flows in a system. The following
figure is an example of a typical DFD.
There are four components of a Data Flow Diagram:
Processes: they transform incoming data flows into outgoing data flows.
Data flows: they represent the movement of data between the other components.
External entities: they represent where information comes from and where it goes.
Data stores: a data store represents a place in the process where data rests.
[Figure: DFD: the passenger logs into the online system, the system verifies the passenger, booking information is stored in the passenger information table, and reports are generated from the customer account.]
[Figure: ER-diagram notation: entity, entity type, relationship, attributes, specialization, generalization.]
A use case diagram, at its simplest, is a representation of a user's interaction with the
system, depicting the specification of a use case. A use case diagram can portray the
different types of users of a system and the various ways in which they interact with the
system. This type of diagram is typically used in conjunction with the textual use case and
will often be accompanied by other types of diagrams as well.
[Figure: Use case: the agent checks the ticket mode and forwards a message to the customer's mobile number.]
The Internet revolution of the late 1990s represented a dramatic shift in the way individuals
and organizations communicate with each other. Traditional applications, such as word
processors and accounting packages, are modeled as stand-alone applications: they offer
users the capability to perform tasks using data stored on the system on which the application
resides and executes. Most new software, in contrast, is modeled on a distributed
computing model, where applications collaborate to provide services and expose functionality
to each other. As a result, the primary role of most new software is changing into supporting
information exchange (through Web servers and browsers), collaboration (through e-mail and
instant messaging), and individual expression (through Web logs, also known as blogs, and
e-zines, i.e., Web-based magazines). Essentially, the basic role of software is changing from
providing discrete functionality to providing services.
The .NET Framework represents a unified, object-oriented set of services and libraries that
embrace the changing role of new network-centric and network-aware software. In fact, the
.NET Framework is the first platform designed from the ground up with the Internet in mind.
The .NET Class Library is a key component of the .NET Framework; it is sometimes
referred to as the Base Class Library (BCL). The .NET Class Library contains hundreds of
classes you can use for tasks such as the following:
Processing XML
Working with data from multiple data sources
Debugging your code and working with event logs
Working with data streams and files
Managing the run-time environment
Developing Web services, components, and standard Windows applications
Working with application security
Working with directory services
The functionality that the .NET Class Library provides is available to all .NET languages,
resulting in a consistent object model regardless of the programming language developers
use.
Performance
ASP.NET aims for performance benefits over other script-based technologies (including
Classic ASP) by compiling the server-side code to one or more DLL files on the web server.
This compilation happens automatically the first time a page is requested (which means the
developer need not perform a separate compilation step for pages). This feature provides the
ease of development offered by scripting languages with the performance benefits of a
compiled binary.
Microsoft Access stores data in its own format based on the Access Jet Database Engine. It
can also import or link directly to data stored in other applications and databases.
Software developers and data architects can use Microsoft Access to develop application
software, and "power users" can use it to build software applications. Like other Office
applications, Access is supported by Visual Basic for Applications, an object-
oriented programming language that can reference a variety of objects including DAO (Data
Access Objects), ActiveX Data Objects, and many other ActiveX components. Visual objects
used in forms and reports expose their methods and properties in the VBA programming
environment, and VBA code modules may declare and call Windows operating-
system functions.
Users can create tables, queries, forms and reports, and connect them together with macros.
Advanced users can use VBA to write rich solutions with advanced data manipulation and
user control. Access also has report creation features that can work with any data source that
Access can "access".
The original concept of Access was for end users to be able to "access" data from any source.
Other features include: the import and export of data to many formats
including Excel, Outlook, ASCII, dBase, Paradox, FoxPro, SQL Server, Oracle, ODBC, etc.
It also has the ability to link to data in its existing location and use it for viewing, querying,
editing, and reporting. This allows the existing data to change while ensuring that Access uses
the latest data. It can perform heterogeneous joins between data sets stored across different
platforms. Access is often used by people downloading data from enterprise level
databases for manipulation, analysis, and reporting locally.
There is also the Jet Database format (MDB or ACCDB in Access 2007) which can contain
the application and data in one file. This makes it very convenient to distribute the entire
application to another user, who can run it in disconnected environments.
One of the benefits of Access from a programmer's perspective is its relative compatibility
with SQL (Structured Query Language): queries can be viewed graphically or edited as SQL
statements, and SQL statements can be used directly in macros and VBA modules to
manipulate Access tables. Users can mix and match VBA and macros for programming forms
and logic, which offers object-oriented possibilities. VBA can also be included in queries.
Microsoft Access offers parameterized queries. These queries and Access tables can be
referenced from other programs like VB6 and .NET through DAO or ADO. From Microsoft
Access, VBA can reference parameterized stored procedures via ADO.
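The idea behind parameterized queries is language-independent. A minimal sketch in Python follows, using the standard-library sqlite3 module purely as a stand-in for an Access/SQL Server connection via ADO (an assumption for illustration, not this project's actual data-access code):

```python
import sqlite3

# sqlite3 stands in for an Access/ADO connection here, only to show the
# parameterized-query pattern itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE passenger (pnr TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO passenger VALUES (?, ?)", ("PNR001", "Alice"))

# The ? placeholder keeps user input out of the SQL text itself, which
# prevents SQL injection and lets the engine reuse the query plan.
row = conn.execute("SELECT name FROM passenger WHERE pnr = ?",
                   ("PNR001",)).fetchone()
print(row[0])  # Alice
```

In Access the same idea appears as a named query parameter; in ADO it is a `Parameter` object attached to the command.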
The desktop editions of Microsoft SQL Server can be used with Access as an alternative to
the Jet Database Engine. This support started with MSDE (Microsoft SQL Server Desktop
Engine), a scaled down version of Microsoft SQL Server 2000, and continues with the SQL
Server Express versions of SQL Server 2005 and 2008.
All versions of IIS prior to 7.0 running on client operating systems supported only 10
simultaneous connections and a single web site.
FEATURES
Anonymous authentication
Basic access authentication
UNC authentication
.NET Passport Authentication (Removed in Windows Server 2008 and IIS 7.0)
Certificate authentication
IIS 7.0 has a modular architecture. Modules, also called extensions, can be added or removed
individually so that only modules required for specific functionality have to be installed. IIS 7
includes native modules as part of the full installation. These modules are individual features
that the server uses to process requests and include the following:
Security modules: used to perform many tasks related to security in the request-processing
pipeline, such as specifying authentication schemes, performing URL authorization, and
filtering requests.
Logging and Diagnostics modules: used to perform tasks related to logging and diagnostics
in the request-processing pipeline, such as passing information and processing status to
HTTP.sys for logging, reporting events, and tracking requests currently executing in worker
processes.
Request filtering
URL authorization
Authentication changed slightly between IIS 6.0 and IIS 7, most notably in that the
anonymous user which was named "IUSR_{machinename}" is a built-in account in Vista and
future operating systems and named "IUSR". Notably, in IIS 7, each authentication
mechanism is isolated into its own module and can be installed or uninstalled.
SECURITY
Earlier versions of IIS were hit with a number of vulnerabilities, especially the CA-
2001-13 which led to the infamous Code Red worm; however, both versions 6.0 and 7.0
currently have no reported issues with this specific vulnerability. In IIS 6.0 Microsoft opted to
change the behaviour of pre-installed ISAPI handlers, many of which were culprits in the
vulnerabilities of 4.0 and 5.0, thus reducing the attack surface of IIS. In addition, IIS 6.0
added a feature called "Web Service Extensions" that prevents IIS from launching any
program without explicit permission by an administrator.
By default, IIS 5.1 and lower run websites in-process under the SYSTEM account, a default
Windows account with 'superuser' rights. Under 6.0, all request-handling processes have been
brought under a Network Services account with significantly fewer privileges, so that a
vulnerability in one feature cannot compromise the entire system.
In June 2007, a Google study of 80 million domains concluded that while the IIS market
share was 23% at the time, IIS servers hosted 49% of the world's malware, the same
as Apache servers whose market share was 66%. The study also observed the geographical
location of these dirty servers and suggested that the cause of this could be the use of pirated
copies of Windows that could not obtain security updates from Microsoft. Microsoft has
since corrected this situation and now supplies security updates even to pirated copies of
Windows.
SYSTEM DESIGN
Logical design
The logical design of a system pertains to an abstract representation of the data flows, inputs,
and outputs of the system. This is often conducted via modeling, using an over-abstract (and
sometimes graphical) model of the actual system. Logical design includes ER diagrams, i.e.,
Entity Relationship Diagrams.
Physical design
The physical design relates to the actual input and output processes of the system. This is laid
down in terms of how data is input into a system, how it is verified/authenticated, how it is
processed, and how it is displayed as output. In Physical design, following requirements
about the system are decided.
1. Input requirements,
2. Output requirements,
3. Storage requirements,
4. Processing requirements.
Put another way, the physical portion of systems design can generally be broken down into
three sub-tasks:
1. User Interface Design
2. Data Design
3. Process Design
User Interface Design is concerned with how users add information to the system and with
how the system presents information back to them. Data Design is concerned with how the
data is represented and stored within the system. Finally, Process Design is concerned with
how data moves through the system, and with how and where it is validated, secured and/or
transformed as it flows into, through and out of the system. At the end of the systems design
phase, documentation describing the three sub-tasks is produced and made available for use
in the next phase.
Physical design, in this context, does not refer to the tangible physical design of an
information system. To use an analogy, a personal computer's physical design involves input
via a keyboard, processing within the CPU, and output via a monitor, printer, etc. It would not
concern the actual layout of the tangible hardware, which for a PC would be a monitor, CPU,
motherboard, hard drive, modems, video/graphics cards, USB slots, etc. It involves a detailed
design of the user and product database structures and of the processing and control logic.
The hardware/software specification is developed for the proposed system.
The booking module takes the following inputs:
1. Person name
2. PNR no
3. Source
4. Destination
5. Age
First, the passenger searches for a booking agency, website, or agent; then he can book his
travel ticket for a particular date and time.
This module is for canceling a booked ticket. It takes the following argument as input:
PNR no
If the corresponding ticket is booked, the module cancels that ticket; if no such ticket is
booked, it returns a false result.
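The cancellation logic above can be sketched as follows. This is an illustrative Python sketch; the in-memory booking store and the function name are assumptions for illustration, not the project's actual code:

```python
# Hypothetical in-memory store of booked tickets, keyed by PNR number.
bookings = {"PNR123": {"name": "Alice", "source": "Pune", "destination": "Mumbai"}}

def cancel_ticket(pnr):
    """Cancel the ticket for the given PNR; return False if no such booking."""
    if pnr in bookings:
        del bookings[pnr]
        return True
    return False  # no such ticket is booked: the false result described above

print(cancel_ticket("PNR123"))  # True: the ticket existed and was cancelled
print(cancel_ticket("PNR999"))  # False: no such booking
```

In the real system the lookup and deletion would run against the passenger table in the database rather than a dictionary.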
This module is for searching for buses between two desired stations. Three arguments are
needed:
Source
Destination
Date
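A minimal sketch of the search over those three arguments (Python used for illustration; the timetable data and field names are hypothetical, and the real system would query the database instead):

```python
# Hypothetical bus timetable; the real system reads this from the database.
buses = [
    {"bus_no": "B101", "source": "Pune", "destination": "Mumbai", "date": "2024-01-10"},
    {"bus_no": "B202", "source": "Pune", "destination": "Nashik", "date": "2024-01-10"},
]

def search_buses(source, destination, date):
    """Return all buses running between the two stations on the given date."""
    return [b for b in buses
            if b["source"] == source
            and b["destination"] == destination
            and b["date"] == date]

print(search_buses("Pune", "Mumbai", "2024-01-10"))  # only the B101 record
```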
This module enables the user to send feedback to the related bus agency so that possible
flaws can be removed and maximum customer satisfaction can be achieved. Users post their
views, with valuable suggestions, in a comment box.
If a customer wants to know about a particular bus, then he has to pass the following information:
Bus no
Date
And if he wants to know about a particular passenger, or himself, then he has to pass the
following input:
Pnr no or name
Date
The registration module takes the following inputs:
User name
Password
Confirm password
Security answer
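The registration inputs above can be validated along these lines. This is an illustrative Python sketch; the specific rules, such as a minimum password length, are assumptions and not stated requirements of the project:

```python
def validate_registration(user_name, password, confirm_password, security_answer):
    """Return a list of validation errors; an empty list means the input is valid."""
    errors = []
    if not user_name:
        errors.append("user name is required")
    if len(password) < 6:  # assumed minimum length, not a stated requirement
        errors.append("password must be at least 6 characters")
    if password != confirm_password:
        errors.append("password and confirm password do not match")
    if not security_answer:
        errors.append("security answer is required")
    return errors

print(validate_registration("alice", "s3cret!", "s3cret!", "blue"))  # []
print(validate_registration("alice", "abc", "abd", ""))  # three errors
```

In the ASP.NET front end the same checks would typically be expressed with validation controls, with the confirm-password comparison repeated on the server.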
DATA STRUCTURES:
This part of the design consists of the overall database schema, that is, the tables that
hold the various types of records. A database table consists of attributes and tuples for
storing and manipulating records.
This table is used for storing all details about agent of the bus agency.
This table is used for storing all information about feedback, viz. who posted the comment,
along with his mobile number, e-mail, etc.
This table is used for storing all information about the person who is going to travel.
CHAPTER 5
INTRODUCTION
System implementation is the stage when the user has thoroughly tested the
system and approves all the features provided by the system. The various tests are performed
and the system is approved only after all the requirements are met and the user is satisfied.
Implementation is the process of having systems personnel check out and put
new equipment into use, train users, install the new application and construct any files of data
needed to use it. This phase is less creative than system design. Depending on the size of the
organization that will be involved in using the application and the risk involved in its use,
systems developers may choose to test the operation in only one area of the firm with only
one or two persons. Sometimes, they will run both the old and the new system in parallel to
compare the results. In still other situations, system developers stop using the old system one
day and start using the new one the next.
The implementation of a web-based or LAN-based networked project involves some extra
steps at implementation time. We need to configure the system according to the requirements
of the software.
TRAINING
Even well-designed and technically elegant systems can succeed or fail because of the way
they are used, so user training is important. Since the Bus Reservation System is web-based
and user friendly, not much effort was required in the training process.
PARALLEL RUN
In this approach, the old system and the new system are used simultaneously
for some period of time so that the performance of the new system can be monitored and
compared with that of the old system. Also in case of failure of the new system, the user
can fall back on the old system. The risk of this approach is that the user may never want
to shift to new system.
IMMEDIATE CUT-OFF
In this method, the use of the old system ceases as soon as the new system is
implemented and brought into place. The old system becomes redundant from the day of
implementation of the new system. There is the high risk involved in this approach if the
new system is not tested rigorously. This is because of the fact that if the new system
fails, then there will not be anything to fall back upon. The advantage of this approach is
that both the systems need not be used simultaneously.
Implementation Tools
1) ASP.Net
2) IIS server
3) SQL server
Coding
This means that program construction against the procedural specifications has finished and
the coding of the program begins. The main emphasis while coding was on style, so that the
end result was optimized code.
Coding Style
The structured programming method was used in all the modules of the project. It
incorporated the following features:
The code has been written so that the definition and implementation of each
function is contained in one file.
Naming Convention
As the project size grows, so does the complexity of recognizing the purpose of the
variables. Thus the variables were given meaningful names, which help in understanding the
context and the purpose of each variable.
The function names are also given meaningful names that can be easily understood by
the user.
Indentation
Judicious use of indentation can make the task of reading and understanding a
program much simpler. Indentation is an essential part of a good program. If code is indented
without thought, it will seriously affect the readability of the program.
The higher-level statements, such as the definitions of variables, constants, and functions,
are indented, with each nested block indented further, stating their purpose in the code.
A blank line is also left between function definitions to make the code look neat.
A header for each source file stating the purpose of the file is also provided.
5.2 MAINTENANCE:
Corrective
Adaptive
Perfective.
CHAPTER 6
The system testing used here is the most widely used testing process, consisting of
five stages as shown in the figure. In general, the sequence of testing activities is component
testing, integration testing, and then user testing. However, as defects are discovered at any
one stage, they require program modifications to correct them, and this may require other
stages in the testing process to be repeated.
[Figure 7.1: Testing stages: unit testing, module testing, sub-system testing, system testing, acceptance testing.]
Testing is vital to the success of the system. System testing makes a logical assumption that if
all parts of the system are correct, the goal will be successfully achieved. Inadequate testing
or non-testing leads to errors that may not appear until months or even years later (remember
the New York three-day power failure due to a misplaced break statement).
1. The time lag between the cause and the appearance of the problem.
2. The effect of system errors on the files and records of the system.
A small error can conceivably explode into a much larger problem. Effective testing early in
the process translates directly into long term cost savings from a reduced number of errors.
Another reason for system testing is its utility as a user-oriented vehicle before
implementation. The best program is worthless if it does not meet the user's requirements.
Unfortunately, the user's demands are often compromised by efforts to facilitate program or
design efficiency in terms of processing time.
Thus, in this phase we set out to test the code we wrote. We needed to know: did the code
comply with the design? Did the code give the desired outputs for given inputs? Was it ready
to be installed on the user's computer, or were more modifications needed?
Though web applications are characteristically different from their software counterparts,
the basic approach for testing them is quite similar. These basic steps of testing have been
picked from software engineering practice. The following are the steps we undertook:
1. The content model of the web application is reviewed to uncover errors.
2. The design model of the web application is reviewed to uncover navigation errors.
Use cases, derived as part of the analysis activity, allow a web designer to exercise
each usage scenario against the architectural and navigational design. In essence, these
non-executable tests help to uncover errors in navigation.
3. When web applications are considered, the concept of a unit changes. Each web page
encapsulates content, navigation links, and processing elements (forms and scripts,
ASP.NET pages in our case). It is not always possible to test each of these individually.
Thus, for web applications, the unit to be considered is the web page. Unlike the testing
of the algorithmic details of a module and the data that flows across the module interface,
page-level testing for web applications is driven by the content, processing, and links
encapsulated by the web page.
4. The assembled web application is tested for overall functionality and content
delivery. The various use cases are used to test the system for errors and mistakes.
5. The Web application is tested for a variety of environmental settings and is tested for
various configurations and upon various platforms.
6. Thread-based testing is done to monitor the regression tests, so that the site does not
become very slow if a lot of users are simultaneously logged on.
7. A controlled and monitored population of end users tests the intranet application;
this comprises the User Acceptance Testing.
The aim of testing is often taken to be to demonstrate that a program works by showing that
it has no errors. However, the basic purpose of the testing phase is to detect the errors that
may be present in the program. Hence one should not start testing with the intent of showing
that a program works; the intent should be to show that a program does not work. Testing is
the process of executing a program with the intent of finding errors.
TESTING OBJECTIVES:
The main objective of testing is to uncover a host of errors, systematically and with
minimum effort and time. Stated formally:
A good test case is one that has a high probability of finding an error, if one exists.
The software more or less conforms to quality and reliability standards.
In order to uncover the errors present in different phases, we have the concept of
levels of testing. The basic levels of testing are
Client needs -> Acceptance testing
Requirements -> System testing
Design -> Integration testing
Code -> Unit testing
Unit Testing
Unit testing focuses verification effort on the smallest unit of software, i.e., the
module. Using the detailed design and the process specifications, testing is done to uncover
errors within the boundary of the module. All modules must pass the unit test before
integration testing begins.
Integration Testing
After unit testing, we have to perform integration testing. The goal here is to
see whether the modules can be integrated properly, the emphasis being on testing the
interfaces between modules. This testing activity can be considered as testing the design,
hence the emphasis on testing module interactions.
System Testing
Here the entire software system is tested. The reference document for this
process is the requirements document, and the goal is to see if software meets its
requirements.
Acceptance Testing
Acceptance Testing is performed with realistic data of the client to demonstrate that
the software is working satisfactorily. Testing here is focused on external behavior of the
system; the internal logic of program is not emphasized.
Test cases should be selected so that the largest number of attributes of an equivalence
class is exercised at once. The testing phase is an important part of software development. It
is the process of finding errors and missing operations and also a complete verification to
determine whether the objectives are met and the user requirements are satisfied.
White Box Testing
This is a unit-testing method where one unit is taken at a time and tested thoroughly at
statement level to find the maximum possible number of errors. I tested every piece of code
stepwise, taking care that every statement in the code is executed at least once. White box
testing is also called Glass Box Testing.
I have generated a list of test cases, sample data, which is used to check all possible
combinations of execution paths through the code at every module level.
White-box tests focus on the program control structure. Test cases are derived to ensure that
every statement in the program has been executed at least once during testing and that all
logical conditions have been exercised. Basis path testing, a white-box technique, makes use
of program graphs (or graph matrices) to derive a set of linearly independent tests that will
ensure coverage. Condition and data-flow testing exercise further degrees of complexity.
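As a concrete illustration of deriving white-box test cases, consider a small function with two decisions: covering the linearly independent paths means choosing inputs that drive every branch outcome at least once. This is an illustrative Python sketch with a hypothetical fare rule, not code from the project:

```python
def fare_category(age):
    """Classify a passenger for fare purposes (hypothetical rule)."""
    if age < 5:
        return "free"        # path 1: first condition true
    elif age < 18:
        return "child fare"  # path 2: first false, second true
    else:
        return "full fare"   # path 3: both conditions false

# Basis-path-style test set: one test case per independent path, so every
# statement and every branch outcome is executed at least once.
assert fare_category(3) == "free"
assert fare_category(10) == "child fare"
assert fare_category(30) == "full fare"
```

With three linearly independent paths, three well-chosen inputs achieve full statement and branch coverage of this function.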
Black Box Testing
This testing method considers a module as a single unit and checks the unit at its interface
and in its communication with other modules, rather than getting into details at statement
level. Here the module is treated as a black box that takes some input and generates output.
The output for a given set of input combinations is forwarded to other modules.
Black-box tests are designed to uncover errors in functional requirements without regard to
the internal workings of a program. Black-box testing techniques focus on the information
domain of the software, deriving test cases by partitioning the input and output domains of a
program in a manner that provides thorough test coverage. The black-box test is used to
demonstrate that software functions are operational, that input is properly accepted, and that
output is correctly produced. Graph-based testing methods explore the relationships between
and behavior of program objects. Equivalence partitioning divides the input domain into
classes of data that are likely to exercise specific software functions. Boundary value
analysis probes the program's ability to handle data at the limits of acceptability.
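Equivalence partitioning and boundary value analysis can be illustrated on a simple input rule, say a seat number that must lie between 1 and 40. The range and the function are hypothetical examples for illustration, not the project's actual limits:

```python
def is_valid_seat(seat_no):
    """Hypothetical rule: a seat number is valid if it lies in 1..40."""
    return 1 <= seat_no <= 40

# Equivalence partitioning: one representative per class is enough.
assert is_valid_seat(20) is True    # valid class (1..40)
assert is_valid_seat(-7) is False   # invalid class (below the range)
assert is_valid_seat(99) is False   # invalid class (above the range)

# Boundary value analysis: probe the limits of acceptability.
assert is_valid_seat(0) is False
assert is_valid_seat(1) is True
assert is_valid_seat(40) is True
assert is_valid_seat(41) is False
```

Seven test cases thus cover all three equivalence classes and both boundaries, without examining the function's internals.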
A strategy for software testing may also be viewed in the context of the spiral. Unit
testing begins at the vortex of the spiral and concentrates on each unit or component of the
software as implemented in source code. Testing progresses by moving outward along the
spiral to integration testing, where the focus is on design and the construction of the
software architecture. Taking another turn outward on the spiral, we encounter validation
testing, where the requirements established during analysis are validated against the
software that has been constructed.
Considering the process from a procedural point of view, testing within the context of
software engineering is actually a series of four steps that are implemented sequentially. The
steps are shown in the figure. Initially, tests focus on each component individually, ensuring
that it functions properly as a unit; hence the name unit testing. Unit testing makes heavy use
of white-box testing techniques, exercising specific paths in a module's control structure to
ensure complete coverage and maximum error detection.
[Figure: Testing information flow: the software configuration and test configuration feed the testing process; test results are evaluated, errors lead to correction, and error-rate data feeds a reliability model that yields predicted reliability.]
SYSTEM SECURITY
7.1 Introduction
One might think that there is little reason to be concerned about security in an
intranet. After all, by definition an intranet is internal to one's organization; outsiders cannot
access it. There are strong arguments for the position that an intranet should be completely
open to its users, with little or no security. One might not have considered one's intranet in
any other light.
On the other hand, implementing some simple, built-in security measures in one's intranet
can allow one to provide resources one might not have considered possible in such a context.
For example, one can give access to some Web pages to some people without making them
available to one's entire customer base, using several kinds of authentication.
Intranet security is, then, a multifaceted issue, with both opportunities and dangers,
especially if one's network is part of the Internet.
There are basically two types of security associated with this system:
1. Physical security: damage due to natural causes like earth tremors, flooding, water
logging, fire hazards, atmospheric or environmental conditions, etc. To overcome these
difficulties, replicas of the data are automatically stored on various networks, and an
air-conditioned environment guards against adverse environmental conditions.
2. Data security: for example, data not being available to the authorized person at the time
of need. To overcome such difficulties, the following access facilities have been provided:
i) Identification
ii) Authentication: the system checks the password under the particular user identification
and permits access to the various resources only for the authorized person.
iii) Authorization
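The identification/authentication step can be sketched as follows, using Python's standard-library hashlib for illustration. The report does not specify how credentials are stored, so the salted-hash scheme and the sample account below are assumptions:

```python
import hashlib
import os

# Passwords should never be stored in plain text; store a salt plus a
# derived hash instead, and recompute the hash to check a login attempt.
def make_record(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "hash": digest}

def authenticate(record, password):
    """Check the supplied password against the stored user record."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 record["salt"], 100_000)
    return digest == record["hash"]

users = {"agent1": make_record("s3cret!")}  # hypothetical agent account

print(authenticate(users["agent1"], "s3cret!"))  # True
print(authenticate(users["agent1"], "wrong"))    # False
```

Authorization would then be a separate check of which resources the identified, authenticated user is permitted to access.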
Many people view computer and network security in a negative light, thinking
of it only in terms of restricting access to services. One major view of network security is
that which is not expressly permitted is denied. Although this is a good way of thinking
about how to connect other organization to the internet, one can, and possibly should, view
intranet security from a more positive angle.
Properly set up, intranet security can be an enabler, enriching one's intranet
with services and resources one would not otherwise be able to provide. Such an overall
security policy might be described as 'that which is not expressly denied is permitted'.
The more defensive approach, preventing abuse of one's intranet, is also given play,
however. Organizations' needs for security in an intranet can vary widely. Businesses in
which confidentiality and discretion are the norm in handling proprietary information and
corporate intellectual property have different needs than a college or university, for example.
Academic institutions generally tilt toward making the free exchange of ideas a primary
interest. At the same time, though, the curiosity (to use a polite word) of undergraduates
creates strong needs for security. Keeping prying sophomores out of university
administration computing resources is a high priority; for example, students have been known
to try to access grade records (their own or those of others) for various reasons.
Processor: Pentium-IV
RAM: 512 MB
FUTURE ENHANCEMENT
More reports, such as station-wise reports and ticket information, can be integrated.
At the end of the report and of project development, we found that the overall performance
of this project is 80 to 90% correct with respect to its working functionality, since the
project documentation provides the means to evaluate the performance and accuracy of the
project.
In conclusion, this project provides accurate results in the areas of ticket booking,
cancellation, searching for information, and adding new users.