PROJECT REPORT
ON
“BUS RESERVATION SYSTEM”
(BCA Project)
Table of Contents
Chapter 1 Introduction
Chapter 2 Development model
Chapter 3 System Study
Chapter 4 Project Scheduling
Chapter 5 System Analysis
Chapter 6 Technology used
Chapter 7 System Design
Chapter 8 System Testing
Chapter 9 System Implementation
Chapter 10 Conclusion
Chapter 11 Future Scope
References
Chapter 1
Introduction
The bus reservation system deals with a collection of buses and the agents who book tickets for customers' journeys, recording the bus number and the departure time of each bus. As its name suggests, it manages the details of all agents, tickets, rentals, timings, and so on, and it also manages updates to these records.
The tour details hold information about the bus that carries customers to their destination, together with detailed information about each customer: which bus he or she travelled on and how many members accompanied him or her on the journey.
This section also contains the details of the booking time of the seat(s), the collection time of the tickets, the booking date, and (optionally) the name of the agent through whom the customer reserved the seats for the journey.
The main objective of this project is to provide better work efficiency, security, accuracy, reliability, and feasibility. Errors can be reduced to nil and working conditions can be improved.
Chapter 2
Development model
Our project life cycle uses the waterfall model, also known as classic life cycle
model or linear sequential model.
The model encompasses the following activities:
1. System/Information Engineering
2. Software Requirements Analysis
Requirements for both the system and the software are documented and reviewed with the customer.
3. Design
4. Code Generation
5. Testing
Once code has been generated, program testing begins. The testing focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and to ensure that defined input will produce actual results that agree with the required results.
6. Support
Chapter 3
System Study
Before the project can begin, it becomes necessary to estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish. While preparing such a plan we visited the site several times.
We started by asking context-free questions; that is, a set of questions that lead to a basic understanding of the problem. The first set of context-free questions was like this:
Can you show us (or describe) the environment in which the solution will
be used?
After the first round of questions, we revisited the site and asked many more questions leading to a final set of questions:
Are our questions relevant to the problem that you need to be solved?
Are we asking too many questions?
Should we be asking you anything else?
3.1 Feasibility
Software cost and effort estimation will never be an exact science. Too many variables (human, technical, environmental, political) can affect the ultimate cost of software and the effort applied to develop it. However, software project estimation can be turned into a series of systematic steps. To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project (since, we can achieve 100%
accurate estimates after the project is complete!)
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost
and effort estimates.
4. Use one or more empirical models for software cost and effort
estimation.
Unfortunately, the first option, however attractive, is not practical. Cost estimates
must be provided “Up front”. However, we should recognize that the longer we
wait, the more we know, and the more we know, the less likely we are to make
serious errors in our estimates.
The second option can work reasonably well, if the current project is quite
similar to past efforts and other project influences (e.g., the customer, business
conditions, the SEE, deadlines) are equivalent. Unfortunately past experience has
not always been a good indicator of future results.
The remaining options are viable approaches to software project estimation. Ideally, the techniques noted for each option should be applied in tandem, each used as a cross-check for the other. Decomposition techniques take a "divide and conquer" approach to software project estimation. By decomposing a project into major functions and related software engineering activities, cost and effort estimation can be performed in a stepwise fashion.
Empirical estimation models typically take the form D = f(vi), where D is one of a number of estimated values (such as effort, cost, or project duration) and vi are selected independent parameters (such as estimated lines of code or function points).
Each of the viable software cost estimation options is only as good as the
historical data used to seed the estimate. If no historical data exist, costing rests on a
very shaky foundation.
Chapter 4
Project Scheduling
Program evaluation and review technique (PERT) and critical path method
(CPM) are two project scheduling methods that can be applied to software
development. These techniques are driven by the following information:
Estimates of Effort
A decomposition of the product function
The selection of the appropriate process model and task set
Decomposition of tasks
The PERT chart for this application software is illustrated in figure 4.1. The critical path for this project is Design, Code Generation, and Integration and Testing.
Figure 4.1: PERT chart for the project. Milestones: Design (Aug 2008), Coding (Aug 15, 2008), Documentation and Report (Oct 30, 2008), Finish (Jan 3, 2009).
A Gantt chart, also known as a timeline chart, contains information such as effort, duration, start date, and completion date for each task. A timeline chart can be developed for the entire project.
Below in figure 4.2 we have shown the Gantt chart for the project. All project
tasks have been listed in the left-hand column.
Figure 4.2: Gantt chart for the project. Note: Wk1 = week 1, d1 = day 1.
Chapter 5
System Analysis
Over the past two decades, a large number of analysis modeling methods have been developed. Investigators have identified analysis problems and their causes and have developed a variety of modeling notations and corresponding sets of heuristics to overcome them. Each analysis method has a unique point of view.
However, all analysis methods are related by a set of operational principles:
Understand the problem before you begin to create the analysis model.
There is a tendency to rush to a solution, even before the problem is
understood. This often leads to elegant software that solves the wrong
problem! We always tried to avoid such a situation while making this project a success.
Develop prototypes that enable a user to understand how human/machine interaction will occur. Since the perception of the quality of software is often based on the perception of the "friendliness" of the interface, prototyping (and the iteration that results from it) is highly recommended.
Record the origin of and the reason for every requirement. This is the
first step in establishing traceability back to the customer.
Use multiple views of requirements. Building data, functional, and behavioral models provides the software developer with three views.
This reduces the likelihood that something will be missed and increases
the likelihood that inconsistency will be recognized.
Rank requirements. Tight deadlines may preclude the implementation of
every software requirement.
Work to eliminate ambiguity. Because most requirements are described
in a natural language, the opportunity for ambiguity abounds. The use of
formal technical reviews is one way to uncover and eliminate
ambiguity.
We have tried to take the above principles to heart so that we could provide an excellent foundation for design.
5.1.1 The Information Domain
All software is built to process data: to accept input, manipulate it in some way, and produce output. This fundamental statement of objective is true whether we build batch software for a payroll system or real-time embedded software to control fuel flow to an automobile engine.
The information domain contains three different views of the data and control as each is processed by a computer program: information content, information flow, and information structure. To fully understand the domain, each of these views should be considered.
Information content represents the individual data and control objects that constitute some larger collection of information transformed by the software. For example, the data object Status Declare is a composite of a number of important pieces of data: the aircraft's name, the aircraft's model, ground run, number of flying hours, and so forth. Therefore, the content of Status Declare is defined by the attributes that are needed to create it. Similarly, the content of a control object called System Status might be defined by a string of bits, where each bit represents a separate item of information indicating whether a particular device is on- or off-line.
Data and control objects can be related to other data and control objects. For example, the data object Status Declare has one or more relationships with objects such as total flying hours, period left until aircraft maintenance, and others.
Information flow represents the manner in which data and control change as each moves through a system. Referring to figure 5.1, input objects are transformed to intermediate information (data and/or control), which is further transformed to output; along the way, data and control may be read from or written to a data/control store.
5.1.2 Modeling
The second and third operational analysis principles require that we build models of function and behavior. Software transforms information, and to accomplish this it must perform at least three generic functions: input, processing, and output.
The functional model begins with a single context-level model (i.e., the name of the software to be built). Over a series of iterations, more and more functional detail is gathered, until a thorough delineation of all system functionality is represented.
A behavioral model creates a representation of the states of the software and the
events that cause software to change state.
Problems are often too large and complex to be understood as a whole. For this reason, we tend to partition (divide) such problems into parts that can be easily understood and establish interfaces between the parts so that the overall function can be accomplished.
Horizontal partitioning: [figure showing the problem partitioned horizontally into peer branches such as Acceptance and Rejection]
Prototyping is part of an analysis activity that is continued into design and construction; the prototype of the software is the first evolution of the finished system. The questions used to select the prototyping approach follow Andriole's suggestions.
E-R DIAGRAM:
[The E-R diagram shows the BUS RESERVATION SYSTEM entity giving service through BUSES, which are divided into different types (sleeper or without sleeper); related entities include DEPARTMENT, SEATS, and the work areas that take care of and examine the buses.]
The following DFD shows how the working of the reservation system is managed:
[The data flow diagram relates the work areas, buses, reserved-seat records, agent details, and report tables.]
We have STARBUS as our database, and some of our tables (relations) are:
TIME_LIST
AGENT_BASIC_INFO
FEEDBACK
PASSANGER_INFO
STATUS
The AGENT_BASIC_INFO table has the following fields:
AGENT_ID
AGENT_NAME
AGENT_FNAME
AGENT_SHOP_NAME
AGENT_SHOP_ADDRESS
AGENT_SHOP_CITY
AGENT_PHON_NUMBER
AGENT_MOBIL_NUMBER
AGENT_CURRENT_BAL
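For illustration, the following C# data class mirrors the AGENT_BASIC_INFO relation as a plain object the application could pass around; the class name and the property types are assumptions, since the report does not state the column types.

public class AgentBasicInfo
{
    // Each property maps to the column named in the comment.
    public string AgentId { get; set; }          // AGENT_ID
    public string AgentName { get; set; }        // AGENT_NAME
    public string AgentFatherName { get; set; }  // AGENT_FNAME
    public string ShopName { get; set; }         // AGENT_SHOP_NAME
    public string ShopAddress { get; set; }      // AGENT_SHOP_ADDRESS
    public string ShopCity { get; set; }         // AGENT_SHOP_CITY
    public string PhoneNumber { get; set; }      // AGENT_PHON_NUMBER
    public string MobileNumber { get; set; }     // AGENT_MOBIL_NUMBER
    public decimal CurrentBalance { get; set; }  // AGENT_CURRENT_BAL
}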
In our FEEDBACK table we have fields such as Name, Email, Phone, Subject, Comment, and User_type.
The PASSANGER_INFO table has the following fields:
Sno
Bill_no
C_name
C_phon
C_from
C_to
C_time
Agent_id
Amount
Seat_no
Total_seat
Station_name
Rate_perSeat
Status
The TIME_LIST table has the following fields:
Bus_number
Time
Reach_time
PROCESS LOGIC:
The more comfortable customers are with us, the more customers visit our reservation unit. The tables and modules described above support the processing logic of the reservation system, such as the checks carried out before a booking is confirmed; a sketch of one such check follows.
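As a hedged sketch of one such rule, the code below checks whether a requested seat already appears in PASSANGER_INFO before a booking is accepted. The connection string, the BookingRules class, and the exact WHERE conditions are assumptions made for illustration; they are not taken from the project's source code.

using System;
using System.Data.SqlClient;

public static class BookingRules
{
    // Illustrative connection string; the real application would read it
    // from its configuration.
    private const string ConnectionString =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=STARBUS;Integrated Security=True";

    // Returns true when no existing booking holds the same seat for the
    // same station and departure time.
    public static bool IsSeatAvailable(string stationName, string departureTime, string seatNo)
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM PASSANGER_INFO " +
            "WHERE Station_name = @station AND C_time = @time AND Seat_no = @seat",
            connection))
        {
            command.Parameters.AddWithValue("@station", stationName);
            command.Parameters.AddWithValue("@time", departureTime);
            command.Parameters.AddWithValue("@seat", seatNo);
            connection.Open();
            int alreadyBooked = (int)command.ExecuteScalar();
            return alreadyBooked == 0;
        }
    }
}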
Chapter 6
Technology used
The Internet revolution of the late 1990s represented a dramatic shift in the way
individuals and organizations communicate with each other. Traditional
applications, such as word processors and accounting packages, are modeled as
stand-alone applications: they offer users the capability to perform tasks using data
stored on the system the application resides and executes on. Most new software,
in contrast, is modeled based on a distributed computing model where applications
collaborate to provide services and expose functionality to each other. As a result,
the primary role of most new software is changing into supporting information
exchange (through Web servers and browsers), collaboration (through e-mail and
instant messaging), and individual expression (through Web logs, also known as
Blogs, and e-zines — Web based magazines). Essentially, the basic role of
software is changing from providing discrete functionality to providing services.
The .NET Framework represents a unified, object-oriented set of services and
libraries that embrace the changing role of new network-centric and network-
aware software. In fact, the .NET Framework is the first platform designed from
the ground up with the Internet in mind.
Processing XML
Working with data from multiple data sources
Debugging your code and working with event logs
Working with data streams and files
Managing the run-time environment
Developing Web services, components, and standard Windows applications
Working with application security
Working with directory services
The functionality that the .NET Class Library provides is available to all .NET languages, resulting in a consistent object model regardless of the programming language developers use.
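As a small illustration of one capability listed above (processing XML), the snippet below loads an XML fragment with the System.Xml classes; the fragment and the class name are invented for this example.

using System;
using System.Xml;

class XmlDemo
{
    static void Main()
    {
        // Load a small XML fragment describing a bus.
        XmlDocument document = new XmlDocument();
        document.LoadXml("<bus number='UP-32-1234' seats='40' />");

        // Read attributes from the root element.
        XmlElement root = document.DocumentElement;
        Console.WriteLine("Bus {0} has {1} seats.",
            root.GetAttribute("number"), root.GetAttribute("seats"));
    }
}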
The .NET Framework consists of three key elements, as shown in the diagram below:
1. The Common Language Runtime (CLR)
2. The .NET Class Library
3. Unifying components: ASP.NET (Web Forms), Windows Forms, and Visual Studio .NET
These sit above the web server and the operating system.
The CLR is also responsible for compiling code just before it executes. Instead of producing a binary representation of your code, as traditional compilers do, .NET compilers produce a representation of your code in a language common to the .NET Framework: Microsoft Intermediate Language, often referred to as IL. When your code executes for the first time, the CLR invokes a special compiler called a Just In Time (JIT) compiler, which converts the IL into executable code specific to the machine the program runs on. Because all .NET languages have the same compiled representation, they all have similar performance characteristics. This means that a program written in Visual Basic .NET can perform as well as the same program written in Visual C++ .NET.
The benefits of using the .NET Class Library include a consistent set of services
available to all .NET languages and simplified deployment, because the .NET
Class Library is available on all implementations of the .NET Framework.
3. Unifying components
Until this point, this chapter has covered the low-level components of the .NET
Framework. The unifying components, listed next, are the means by which you
can access the services the .NET Framework provides:
ASP.NET
Windows Forms
Visual Studio .NET
ASP.NET
After the release of Internet Information Services 4.0 in 1997, Microsoft began
researching possibilities for a new web application model that would solve
common complaints about ASP.
. ASP.NET introduces two major features: Web Forms and Web Services.
1. Web Forms
Developers not familiar with Web development can spend a great deal of time, for
example, figuring out how to validate the e-mail address on a form. You can
validate the information on a form by using a client-side script or a server-side
script. Deciding which kind of script to use is complicated by the fact that each
approach has its benefits and drawbacks, some of which aren't apparent unless
you've done substantial design work. If you validate the form on the client by
using client-side JScript code, you need to take into consideration the browser that
your users may use to access the form. Not all browsers expose exactly the same
representation of the document to programmatic interfaces. If you validate the
form on the server, you need to be aware of the load that users might place on the
server. The server has to validate the data and send the result back to the client.
Web Forms simplify Web development to the point that it becomes as easy as
dragging and dropping controls onto a designer (the surface that you use to edit a
page) to design interactive Web applications that span from client to server.
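As a hedged sketch of the server-side approach described above, the code-behind fragment below validates an e-mail address when a button is clicked. The page class, control names, and the regular expression are illustrative assumptions, not part of the project's source.

using System;
using System.Text.RegularExpressions;
using System.Web.UI;
using System.Web.UI.WebControls;

public class FeedbackPage : Page
{
    // These controls would be declared in the matching ASPX markup.
    protected TextBox txtEmail;
    protected Label lblMessage;

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        string email = txtEmail.Text.Trim();

        // A simple pattern; a RequiredFieldValidator or
        // RegularExpressionValidator control could do this declaratively.
        if (!Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
        {
            lblMessage.Text = "Please enter a valid e-mail address.";
            return;
        }

        lblMessage.Text = "Thank you for your feedback.";
    }
}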
2. Web Services
A Web service is an application that exposes a programmatic interface through
standard access methods. Web Services are designed to be used by other
applications and components and are not intended to be useful directly to human
end users. Web Services make it easy to build applications that integrate features
from remote sources. For example, you can write a Web Service that provides
weather information for subscribers of your service instead of having subscribers
link to a page or parse through a file they download from your site. Clients can
simply call a method on your Web Service as if they are calling a method on a
component installed on their system — and have the weather information
available in an easy-to-use format that they can integrate into their own
applications or Web sites with no trouble.
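To make the weather example concrete, here is a minimal ASMX Web Service sketch; the class name, namespace URI, and method are assumptions used only to show the general shape of such a service.

using System.Web.Services;

[WebService(Namespace = "http://example.org/weather")]
public class WeatherService : WebService
{
    // Exposed through standard access methods; a subscriber calls this as
    // if it were a local method and receives the result directly.
    [WebMethod]
    public string GetForecast(string city)
    {
        // A real service would look the forecast up in a data store.
        return "Sunny in " + city;
    }
}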
Introducing ASP.NET
Unlike the ASP runtime, ASP.NET uses the Common Language Runtime (CLR)
provided by the .NET Framework. The CLR is the .NET runtime, which manages
the execution of code. The CLR allows the objects, which are created in different
languages, to interact with each other and hence removes the language barrier.
CLR thus makes Web application development more efficient.
In addition to simplifying the designing of Web applications, the .NET CLR offers
many advantages.
The ASP.NET code is a compiled CLR code instead of an interpreted code. The
CLR provides just-in-time compilation, native optimization, and caching. Here, it
is important to note that compilation is a two-stage process in the .NET
Framework. First, the code is compiled into the Microsoft Intermediate Language
(MSIL). Then, at the execution time, the MSIL is compiled into native code. Only
the portions of the code that are actually needed will be compiled into native code.
This is called Just In Time compilation. These features lead to an overall improved
performance of ASP.NET applications.
Flexibility:
The entire .NET class library can be accessed by ASP.NET applications. You can
use the language that best applies to the type of functionality you want to
implement, because ASP.NET is language independent.
Configuration settings:
The application-level configuration settings are stored in XML-based files, so the settings are easy to read and change.
Security:
ASP.NET applications are secure and use a set of default authorization and
authentication schemes. However, you can modify these schemes according to the
security needs of an application. In addition to this list of advantages, the
ASP.NET framework makes it easy to migrate from ASP applications.
After you've set up the development environment for ASP.NET, you can create
your first ASP.NET Web application. You can create an ASP.NET Web
application in one of the following ways:
Use a text editor:
In this method, you can write the code in a text editor, such as Notepad, and save the code as an ASPX file. You can save the ASPX file in the directory C:\inetpub\wwwroot. Then, to display the output of the Web page in Internet Explorer, you simply need to type http://localhost/<filename>.aspx in the Address box. If the IIS server is installed on some other machine on the network, replace "localhost" with the name of the server. If you save the file in some other directory, you need to add the file to a virtual directory in the Default Web Site directory on the IIS server. You can also create your own virtual directory and add the file to it.
Characteristics
Pages
ASP.NET pages, known officially as "web forms", are the main building block for
application development. Web forms are contained in files with an ASPX
extension; in programming jargon, these files typically contain static (X)HTML
markup, as well as markup defining server-side Web Controls and User Controls
where the developers place all the required static and dynamic content for the web
page. Additionally, dynamic code which runs on the server can be placed in a page
within a block <% -- dynamic code -- %> which is similar to other web
development technologies such as PHP, JSP, and ASP, but this practice is
generally discouraged except for the purposes of data binding since it requires
more calls when rendering the page.
Note that this sample uses code "inline", as opposed to code behind.
<%@ Page Language="C#" %>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        // Display the current server time in the label below.
        Label1.Text = DateTime.Now.ToString();
    }
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Sample page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
The current time is: <asp:Label runat="server"
id="Label1" />
</div>
</form>
</body>
</html>
Code-behind model
Microsoft recommends that dynamic program code be handled using the code-behind model, which places this code in a separate file or in a specially designated script tag. Code-behind files typically have names like MyPage.aspx.cs or MyPage.aspx.vb based on the ASPX file name (this practice is automatic in Microsoft Visual Studio and other IDEs). When using this style of programming, the developer writes code to respond to different events, like the page being loaded or a control being clicked, rather than a procedural walkthrough of the document.
Example
<%@ Page Language="C#" CodeFile="SampleCodeBehind.aspx.cs" Inherits="Website.SampleCodeBehind" %>
The above tag is placed at the beginning of the ASPX file. The CodeFile property of the @ Page directive specifies the file (.cs or .vb) acting as the code-behind, while the Inherits property specifies the class the page derives from. In this example, the @ Page directive is included in SamplePage.aspx, and SampleCodeBehind.aspx.cs acts as the code-behind for this page:
using System;

namespace Website
{
    public partial class SampleCodeBehind : System.Web.UI.Page
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
        }
    }
}
In this case, the OnLoad() method is called every time the ASPX page is requested. The programmer can implement event handlers at several stages of the page execution process to perform processing.
User controls
A user control has the same structure as a Web Form, except that it derives from System.Web.UI.UserControl and is stored in a file with an ASCX extension. Programmers can add their own properties, methods, and event handlers. An event-bubbling mechanism provides the ability to pass an event fired by a user control up to its containing page.
State management
ASP.NET applications are hosted in a web server and are accessed over the
stateless HTTP protocol. As such, if the application uses stateful interaction, it has
to implement state management on its own. ASP.NET provides various
functionality for state management in ASP.NET applications.
Application state
Application state is held by a collection of shared user-defined variables, set and initialized when the application starts and available until it ends.
Session state
Session state is server-side data relating to a single user session; it is typically tracked via a session ID carried in a cookie and can be maintained in one of several modes:
ASPState Mode
In this mode, ASP.NET runs a separate Windows service that maintains the
state variables. Because the state management happens outside the
ASP.NET process, this has a negative impact on performance, but it allows
multiple ASP.NET instances to share the same state server, thus allowing
an ASP.NET application to be load-balanced and scaled out on multiple
servers. Also, since the state management service runs independent of
ASP.NET, variables can persist across ASP.NET process shutdowns.
SqlServer Mode
In this mode, the state variables are stored in a database server, accessible
using SQL. Session variables can be persisted across ASP.NET process
shutdowns in this mode as well. The main advantage of this mode is it
would allow the application to balance load on a server cluster while
sharing sessions between servers.
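A brief sketch of session state in use follows: a code-behind stores the logged-in agent's ID in the session and reads it back on a later request. The key name and page class are illustrative, not taken from the project.

using System;
using System.Web.UI;

public class AgentAreaPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // In the real application this would be set after a successful login.
        if (Session["AgentId"] == null)
        {
            Session["AgentId"] = "AG-001";
        }

        // The value survives across requests for the same session, whether
        // it is stored in-process, in the state service, or in SQL Server.
        string agentId = (string)Session["AgentId"];
        Response.Write("Current agent: " + agentId);
    }
}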
View state
View state refers to the page-level state management mechanism: the state of server-side controls is serialized into a hidden field on the page and restored on postback, so values persist across round trips without consuming server memory.
Template engine
When first released, ASP.NET lacked a template engine. Because the .NET
framework is object-oriented and allows for inheritance, many developers would
define a new base class that inherits from "System.Web.UI.Page", write methods
here that render HTML, and then make the pages in their application inherit from
this new class. While this allows for common elements to be reused across a site,
it adds complexity and mixes source code with markup. Furthermore, this method
can only be visually tested by running the application - not while designing it.
Other developers have used include files and other tricks to avoid having to
implement the same navigation and other elements in every page.
ASP.NET 2.0 introduced the concept of "master pages", which allow for template-
based page development. A web application can have one or more master pages,
which can be nested. Master templates have place-holder controls, called
ContentPlaceHolders to denote where the dynamic content goes, as well as HTML
and JavaScript shared across child pages.
Child pages use those ContentPlaceHolder controls, which must be mapped to the
place-holder of the master page that the content page is populating. The rest of the
page is defined by the shared parts of the master page, much like a mail merge in a
word processor. All markup and server controls in the content page must be placed
within the ContentPlaceHolder control.
When a request is made for a content page, ASP.NET merges the output of the
content page with the output of the master page, and sends the output to the user.
The master page remains fully accessible to the content page. This means that the
content page may still manipulate headers, change title, configure caching etc. If
the master page exposes public properties or methods (e.g. for setting copyright
notices) the content page can use these as well.
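As a hedged sketch of that last point, the fragment below shows a master page exposing a public property and a content page setting it from its code-behind; SiteMaster, CopyrightNotice, and the control names are hypothetical.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class SiteMaster : MasterPage
{
    // Declared in the master page markup.
    protected Label lblCopyright;

    public string CopyrightNotice
    {
        get { return lblCopyright.Text; }
        set { lblCopyright.Text = value; }
    }
}

public class NewsPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Page.Master returns the MasterPage instance; casting to the
        // concrete type gives access to its public members.
        SiteMaster master = (SiteMaster)Master;
        master.CopyrightNotice = "© 2008 Bus Reservation System";
    }
}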
Performance
ASP.NET aims for better performance than script-based technologies by compiling the server-side code into one or more DLL files on the web server. This compilation happens automatically the first time a page is requested, so the developer need not perform a separate compilation step. The compilation might cause a noticeable but short delay when the newly edited page is first requested from the web server, but won't again unless the page requested is updated further.
The ASPX and other resource files are placed in a virtual host on an Internet
Information Services server (or other compatible ASP.NET servers; see Other
Implementations, below). The first time a client requests a page, the .NET
framework parses and compiles the file(s) into a .NET assembly and sends the
response; subsequent requests are served from the DLL files. By default ASP.NET
will compile the entire site in batches of 1000 files upon first request. If the
compilation delay is causing problems, the batch size or the compilation strategy
may be tweaked.
Development tools
Delphi 2006
Macromedia Dreamweaver MX, Macromedia Dreamweaver MX 2004, or
Macromedia Dreamweaver 8 (doesn't support ASP.NET 2.0 features, and
produces very inefficient code for ASP.NET 1.x: also, code generation and
ASP.NET features support through version 8.0.1 was little if any changed
from version MX: version 8.0.2 does add changes to improve security
against SQL injection attacks)
Macromedia HomeSite 5.5 (For ASP Tags)
Microsoft Expression Web, part of the Microsoft Expression Studio
application suite.
Microsoft SharePoint Designer
MonoDevelop (Free/Open Source)
SharpDevelop (Free/Open Source)
Visual Studio .NET (for ASP.NET 1.x)
Visual Web Developer 2005 Express Edition (free) or Visual Studio 2005
(for ASP.NET 2.0)
Visual Web Developer 2008 Express Edition (free) or Visual Studio 2008
(for ASP.NET 2.0/3.5)
Eiffel for ASP.NET
What is SQL?
SQL stands for Structured Query Language. SQL is used to communicate with a
database. According to ANSI (American National Standards Institute), it is the
standard language for relational database management systems. SQL statements
are used to perform tasks such as update data on a database, or retrieve data from a
database. Some common relational database management systems that use SQL
are: Oracle, Sybase, Microsoft SQL Server, Access, Ingres, etc. Although most
database systems use SQL, most of them also have their own additional
proprietary extensions that are usually only used on their system. However, the
standard SQL commands such as "Select", "Insert", "Update", "Delete", "Create",
and "Drop" can be used to accomplish almost everything that one needs to do with
a database. The following sections cover the basics of each of these commands.
The first version of SQL was developed at IBM by Donald D. Chamberlin and
Raymond F. Boyce in the early 1970s. This version, initially called SEQUEL, was
designed to manipulate and retrieve data stored in IBM's original relational
database product, System R. The SQL language was later formally standardized
by the American National Standards Institute (ANSI) in 1986. Subsequent
versions of the SQL standard have been released as International Organization for
Standardization (ISO) standards.
During the 1970s, a group at IBM's San Jose research center developed the System
R relational database management system, based on the model introduced by
Edgar F. Codd in his influential paper, A Relational Model of Data for Large
Shared Data Banks. Donald D. Chamberlin and Raymond F. Boyce of IBM
subsequently created the Structured English Query Language (SEQUEL) to
manipulate and manage data stored in System R. The acronym SEQUEL was later
changed to SQL because "SEQUEL" was a trademark of the UK-based Hawker
Siddeley aircraft company.
The first non-commercial non-SQL RDBMS, Ingres, was developed in 1974 at the
U.C. Berkeley. Ingres implemented a query language known as QUEL, which was
later supplanted in the marketplace by SQL.
In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the
potential of the concepts described by Codd, Chamberlin, and Boyce and
developed their own SQL-based RDBMS with aspirations of selling it to the U.S.
Navy, CIA, and other government agencies. In the summer of 1979, Relational
Software, Inc. introduced the first commercially available implementation of SQL,
Oracle V2 (Version2) for VAX computers. Oracle V2 beat IBM's release of the
System/38 RDBMS to market by a few weeks.
After testing SQL at customer test sites to determine the usefulness and
practicality of the system, IBM began developing commercial products based on
their System R prototype including System/38, SQL/DS, and DB2, which were
commercially available in 1979, 1981, and 1983, respectively.
Standardization
SQL was adopted as a standard by ANSI in 1986 and by ISO in 1987. In the original SQL standard, ANSI declared that the official pronunciation for SQL is "es queue el". Until 1996, the National Institute of Standards and Technology
(NIST) data management standards program was tasked with certifying SQL
DBMS compliance with the SQL standard. In 1996, however, the NIST data
management standards program was dissolved, and vendors are now relied upon to
self-certify their products for compliance.
The SQL standard has gone through a number of revisions. For example, the 1992 revision, SQL-92 (also known as SQL2 and FIPS 127-2), was a major revision (ISO 9075), and Entry Level SQL-92 was adopted as FIPS 127-2.
The SQL standard is not freely available. SQL: 2003 and SQL: 2006 may be
purchased from ISO or ANSI. A late draft of SQL: 2003 is freely available as a zip
archive, however, from Whitemarsh Information Systems Corporation. The zip
archive contains a number of PDF files that define the parts of the SQL: 2003
specification.
Procedural extensions
SQL is designed for a specific purpose: to query data contained in a relational
database. SQL is a set-based, declarative query language, not an imperative
language such as C or BASIC. However, there are extensions to Standard SQL
which add procedural programming language functionality, such as control-of-
flow constructs. These are:
Common Name   Source              Full Name
SQL/PSM       ANSI/ISO standard   SQL/Persistent Stored Modules
T-SQL         Microsoft/Sybase    Transact-SQL
Additional extensions
SQL: 2003 also defines several additional extensions to the standard to increase
SQL functionality overall. These extensions include:
The SQL/MED, or Management of External Data, extension defines foreign-data wrappers and datalink types that allow SQL to manage external data. External data is data that is accessible to, but not managed by, an SQL-based DBMS.
The SQL/JRT, or SQL Routines and Types for the Java Programming Language,
extension is defined by ISO/IEC 9075-13:2003. SQL/JRT specifies the ability to
invoke static Java methods as routines from within SQL applications. It also calls
for the ability to use Java classes as SQL structured user-defined types.
SQL statements also include the semicolon (";") statement terminator. Though not
required on every platform, it is defined as a standard part of the SQL grammar.
Queries
The most common operation in SQL databases is the query, which is performed
with the declarative SELECT keyword. SELECT retrieves data from a specified
table, or multiple related tables, in a database. While often grouped with Data
Manipulation Language (DML) statements, the standard SELECT query is
considered separate from SQL DML, as it has no persistent effects on the data
stored in a database. Note that there are some platform-specific variations of
SELECT that can persist their effects in a database, such as the SELECT INTO
syntax that exists in some databases.
SQL queries allow the user to specify a description of the desired result set, but it
is left to the devices of the database management system (DBMS) to plan,
optimize, and perform the physical operations necessary to produce that result set
in as efficient a manner as possible. An SQL query includes a list of columns to be
included in the final result immediately following the SELECT keyword. An
asterisk ("*") can also be used as a "wildcard" indicator to specify that all
available columns of a table (or multiple tables) are to be returned. SELECT is the
most complex statement in SQL, with several optional keywords and clauses,
including:
The FROM clause which indicates the source table or tables from which the data
is to be retrieved. The FROM clause can include optional JOIN clauses to join
related tables to one another based on user-specified criteria.
The WHERE clause includes a comparison predicate, which is used to restrict the
number of rows returned by the query. The WHERE clause is applied before the
GROUP BY clause. The WHERE clause eliminates all rows from the result set
where the comparison predicate does not evaluate to True.
The GROUP BY clause is used to combine, or group, rows with related values
into elements of a smaller set of rows. GROUP BY is often used in conjunction
with SQL aggregate functions or to eliminate duplicate rows from a result set.
The HAVING clause includes a comparison predicate used to eliminate rows after
the GROUP BY clause is applied to the result set. Because it acts on the results of
the GROUP BY clause, aggregate functions can be used in the HAVING clause
predicate.
The ORDER BY clause is used to identify which columns are used to sort the
resulting data, and in which order they should be sorted (options are ascending or
descending). The order of rows returned by an SQL query is never guaranteed
unless an ORDER BY clause is specified.
For example, the following query retrieves all rows from the Book table, sorted by title:
SELECT *
FROM Book
ORDER BY title;
The example below demonstrates the use of multiple tables in a join, grouping,
and aggregation in an SQL query, by returning a list of books and the number of
authors associated with each book.
SELECT Book.title, count(*) AS Authors
FROM Book
JOIN Book_author ON Book.isbn = Book_author.isbn
GROUP BY Book.title;
Example output:
Title              Authors
----------------   -------
Pitfalls of SQL    1
(The underscore character "_" is often used as part of table and column names to
separate descriptive words because other punctuation tends to conflict with SQL
syntax. For example, a dash "-" would be interpreted as a minus sign.)
Under the precondition that isbn is the only common column name of the two
tables and that a column named title only exists in the Books table, the above
query could be rewritten in the following form:
SELECT title, count(*) AS Authors
FROM Book NATURAL JOIN Book_author
GROUP BY title;
However, many vendors either do not support this approach, or it requires certain
column naming conventions. Thus, it is less common in practice.
Data retrieval is very often combined with data projection when the user is looking
for calculated values and not just the verbatim data stored in primitive data types,
or when the data needs to be expressed in a form that is different from how it's
stored. SQL allows the use of expressions in the select list to project data, as in the
following example which returns a list of books that cost more than 100.00 with
an additional sales_tax column containing a sales tax figure calculated at 6% of
the price.
SELECT isbn, title, price, price * 0.06 AS sales_tax
FROM Book
WHERE price > 100.00
ORDER BY title;
Some modern day SQL queries may include extra WHERE statements that are
conditional to each other. They may look like this example:
SELECT isbn, title, price
FROM Book
WHERE price > 100.00 AND price < 150.00
ORDER BY title;
Hardware Requirements
Server Side
Intel Core 2 Duo 2.4 GHz and above
2 GB of Random Access Memory and above
Client Side
Pentium IV 1.5 GHz and above
512 MB of Random Access Memory and above
80 GB Hard Disk
Chapter 7
System Design
1. Index page
This web page is the starting page of the website. It provides the following:
2. Status
Accessed by anyone.
Information about bookings: which seats are booked and which are empty.
3. Agent name.
Accessed by anyone.
Contains information about name, address and phone number
of the agent.
4. Feedback
5. FAQ
Accessed by anyone.
Useful for customers.
Contains information on when to reach the starting point and what to do in case a ticket is lost.
8. Login page
This page lets a user who has forgotten the password enter the user name and click the Next button to recover it, and it also provides links to the administration area and other pages. An illustrative sketch of a username/password check for the administration area follows.
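The following is only a sketch of such a login check; the USERS table, control names, page class, and connection string are assumptions made for illustration and are not taken from the project's source code.

using System;
using System.Data.SqlClient;
using System.Web.UI;
using System.Web.UI.WebControls;

public class LoginPage : Page
{
    // These controls would be declared in the matching ASPX markup.
    protected TextBox txtUserName;
    protected TextBox txtPassword;
    protected Label lblMessage;

    private const string ConnectionString =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=STARBUS;Integrated Security=True";

    protected void btnLogin_Click(object sender, EventArgs e)
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM USERS WHERE USER_NAME = @u AND PASSWORD = @p",
            connection))
        {
            // In a real system the password would be stored and compared as a hash.
            command.Parameters.AddWithValue("@u", txtUserName.Text.Trim());
            command.Parameters.AddWithValue("@p", txtPassword.Text);
            connection.Open();

            if ((int)command.ExecuteScalar() > 0)
            {
                Session["UserName"] = txtUserName.Text.Trim();
                Response.Redirect("AdminHome.aspx");   // hypothetical landing page
            }
            else
            {
                lblMessage.Text = "Invalid username or password.";
            }
        }
    }
}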
Administrator Area
Username
Password
Email
Security Question.
Security Answer.
As shown in the image above, the Create Agent page displays the following fields:
Name
Father’s Name
Shop Name
Shop City
Shop phone number
Mobile Number
Deposit amount
As shown in the image above, the agent's Deposit Amount page displays:
Agent ID
Name
Shop Name
Shop City
Current Balance
Mobile Number
Chapter 8
System Testing
Once source code has been generated, the software must be tested to uncover (and correct) as many errors as possible before delivery to the customer. Our goal is to design a series of test cases that have a high likelihood of finding errors. To uncover the errors, software testing techniques are used. These techniques provide systematic guidance for designing tests in which:
(1) Internal program logic is exercised using "white box" test case design techniques.
(2) Software requirements are exercised using "black box" test case design techniques.
In both cases, the intent is to find the maximum number of errors with the
minimum amount of effort and time.
8.2 Strategies
A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements. A strategy must provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.
The following testing techniques are well known, and the same strategy was adopted while testing this project.
8.2.1 Unit testing: Unit testing focuses verification effort on the smallest unit of software design: the software component or module. The unit test is white-box oriented. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. An illustrative unit test is sketched below.
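The sketch below illustrates the idea of a unit test, using NUnit against a hypothetical fare-calculation helper; FareCalculator and its rule are invented purely to demonstrate boundary-condition testing and are not part of the project's source.

using System;
using NUnit.Framework;

public static class FareCalculator
{
    // Hypothetical module under test: total fare = rate per seat x seats.
    public static decimal TotalFare(decimal ratePerSeat, int seats)
    {
        if (seats <= 0)
            throw new ArgumentOutOfRangeException("seats");
        return ratePerSeat * seats;
    }
}

[TestFixture]
public class FareCalculatorTests
{
    [Test]
    public void TotalFare_MultipliesRateBySeats()
    {
        Assert.AreEqual(600m, FareCalculator.TotalFare(300m, 2));
    }

    [Test]
    public void TotalFare_RejectsZeroSeats()
    {
        // Boundary condition: a booking with zero seats is invalid.
        Assert.Throws<ArgumentOutOfRangeException>(
            delegate { FareCalculator.TotalFare(300m, 0); });
    }
}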
8.2.4 System testing: System testing is actually a series of different tests whose
primary purpose is to fully exercise the computer-based system. Below we have
described the two types of testing which have been taken for this project.
For security purposes, anyone who is not an authorized user cannot penetrate this system. When the program first loads, it checks for the correct username and password; any attempt that does not supply them correctly is simply rejected by the system.
Every run of the project is itself a form of testing, as Musa and Ackerman said. They have suggested a response that is based on statistical criteria: "No, we
cannot be absolutely certain that the software will never fail, but relative to a
theoretically sound and experimentally validated statistical model, we have done
sufficient testing to say with 95 percent confidence that the probability of 1000
CPU hours of failure free operation in a probabilistically defined environment is at
least 0.995.”
Validation checks are useful when we specify the nature of the data input. Let us elaborate. In this project, while entering data into many of the text boxes you will find validation checks in use: when you try to input wrong data, your entry is automatically rejected.
At the very beginning of the project, when the user wishes to enter the system, he has to supply the password. This password is validated against a certain string; until the user supplies the correct password string, he cannot proceed. When you try to edit a record in the Operation division you will find similar validation checks: if you supply digits in the name text box, the entry is not accepted; similarly, if you enter a code in text (string) format where a number is expected, it is simply rejected.
Chapter 9
System Implementation
9.1.2 Representation
Chapter 10
Conclusion
To conclude, the Project Grid works like a component which can access all the databases and pick up different functions. It overcomes many limitations found in the .NET Framework. Among the many features provided by the project, the main ones are:
Simple editing
Insertion of individual images on each cell
Insertion of individual colors on each cell
Flicker free scrolling
Drop-down grid effect
Placing of any type of control anywhere in the grid
Chapter 11
Future Scope
The number of levels that the software handles, currently up to N levels, can be made unlimited in the future. Efficiency can be further enhanced and boosted to a great extent by normalizing and de-normalizing the database tables used in the project, as well as by adopting alternative data structures and advanced calculation algorithms where available.
In the future we can generalize the application from its current customized status, so that other vendors developing and working on similar applications can utilize this software and make changes to it according to their business needs.
A further strength of this system lies in the fact that the proposed system would remain relevant in the future: any addition or deletion of services, addition or deletion of a reseller, or any other type of modification can be implemented easily. The data collected by the system will also be useful for other purposes.
All of this results in high client satisfaction and, hence, more business for the company, scaling the company's business to new heights in the future.
References
Complete Reference of C#
Programming in C# – Deitel & Deitel
www.w3schools.com
http://en.wikipedia.org
The Principles of Software Engineering – Roger S. Pressman
Software Engineering – Hudson
MSDN help provided by Microsoft .NET
Object Oriented Programming – Deitel & Deitel