1.1 About The Project

The document provides an overview of the Sunder Deep Recruitment System project. The project aims to automate the recruitment process for colleges and companies. It allows students to register, view job opportunities from registered companies, and apply for jobs online. Companies can register, post job listings, view applicant details, and select candidates. The system reduces workload for administrators by automatically informing students of interviews. It has modules for administration, student, and company functions. Requirements analysis and system feasibility were conducted to ensure the project meets needs.

Uploaded by Abhishek

CHAPTER 1

INTRODUCTION
1.1 About the Project
The project Sunder Deep Recruitment System helps students keep up to date with information about all the companies linked to the recruitment system. Its working is quite simple to understand: all students of the college can register themselves on the recruitment system portal, and they can enter the system only after their basic information has been approved by the administrators of the system (i.e. the college). It provides students the opportunity to apply online for any job offered by the companies. Similarly, any company interested in recruiting candidates from the college can register itself on the Sunder Deep Recruitment System and post job offers. These job offers are shown on the students' Job Opportunities page, from where students can easily apply for the corresponding jobs. On the other hand, companies can see the data of the students who applied for the jobs they offered. Companies can see the basic information of the students as well as academic details such as the marks obtained in each semester and the total aggregate percentage. On the basis of the interview process, companies can select or reject candidates for the offered job and can keep the information of the selected students. The selected students are added to the List of Placed Students by the system automatically; this list is viewable by all companies, students, the admin, and also guest users. The Sunder Deep Recruitment System is thus able to reduce the workload of the recruitment cell members: they do not need to send mail to all the students to inform them about any interview process organized by the college, nor do they need to send the job description and company profile to the students, as all of this is performed automatically by the system itself. The single online task for the administrator is to approve the registered students and companies, in order to keep the system secure and to bar impostors, fraud, and fake accounts.
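The approve-before-login rule described above can be sketched in Java as follows. All class and method names here are hypothetical illustrations, not the project's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the approval workflow: a newly registered user
// (student or company) starts as PENDING and can log in only after
// the admin approves the account.
public class ApprovalSketch {
    enum Status { PENDING, APPROVED, REJECTED }

    private final Map<String, Status> users = new HashMap<>();

    public void register(String userId) {
        users.put(userId, Status.PENDING);          // awaiting admin review
    }

    public void adminApprove(String userId) {
        users.replace(userId, Status.APPROVED);     // admin clears the account
    }

    public void adminReject(String userId) {
        users.replace(userId, Status.REJECTED);     // e.g. a suspected fake account
    }

    public boolean canLogin(String userId) {
        return users.get(userId) == Status.APPROVED;
    }
}
```

A freshly registered user cannot log in until adminApprove runs, which mirrors the "approve the registered students and companies" step performed by the placement cell.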

1.2 Purpose
The main purpose of this project is to automate the recruitment process for the college, the companies, and the available professionals. It is designed to reduce the work done by the administrators of the college placement cell and also to automatically inform the students about interviews. At present, the placement cell members of the college send e-mails to students to inform them about the interviews they organize; however, some students still remain uninformed because they do not receive the mail from the placement cell, for reasons such as forgetting to add a student's mail address to the mailing list, not having the mail address of some students, or some other technical fault in the network. So this project may be a great option to remove these kinds of problems and to fill the information gap between the placement cell and the students. This project is a web-based application to simplify the procedure of recruitment in the college.

1.3 Scope
1. Admin panel for approval of newly registered users (student or company).
2. Only authenticated users are allowed to enter the system.
3. Only authenticated users can perform their respective actions.
4. Automation of the recruitment process.

1.4 Functions to be provided

Online recruitment.
Creation of new job posts.
Search for a particular job.
Browse job opportunities.
Deletion of a particular job.
Admin panel for approval of new user registrations.
List of registered companies.
List of registered students.
Admin notification page informing about upcoming interviews organized by companies.
List of placed students.
Online application for a job.
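To make the functions above concrete, the core job operations can be sketched as a small in-memory model in Java. This is only an illustrative sketch under assumed names (JobBoardSketch, postJob, and so on are not from the project's actual code, which stores these records in MySQL):

```java
import java.util.ArrayList;
import java.util.List;

// In-memory sketch of the job functions listed above:
// post, search, apply, browse applied, and delete.
public class JobBoardSketch {
    record Job(String company, String title) {}

    private final List<Job> jobs = new ArrayList<>();
    private final List<Job> applied = new ArrayList<>();

    public void postJob(String company, String title) {   // "Create new job posts"
        jobs.add(new Job(company, title));
    }

    public void deleteJob(String title) {                 // "Deletion of particular job"
        jobs.removeIf(j -> j.title().equals(title));
    }

    public List<Job> search(String keyword) {             // "Search of particular job"
        return jobs.stream().filter(j -> j.title().contains(keyword)).toList();
    }

    public void apply(Job job) {                          // "Online apply for a job"
        applied.add(job);
    }

    public List<Job> browseApplied() {                    // student's applied-jobs page
        return applied;
    }
}
```

The company names used when posting are arbitrary sample data; the real system would draw them from the registered-companies table.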

1.5 Constraints, Assumptions & Dependencies


Constraints

The admin panel is shown only to the administrator, not to other users.

Only a company can create a job post.

Students are not allowed to create or post anything; they can only use the available options, such as viewing jobs and applying for jobs.

Assumptions & Dependencies
Proper backup is available for maintaining the server and database, so that the website remains active and storage space does not run short.

The end user should have a basic knowledge of English and computer usage.

1.6 Feasibility Analysis
The concept of feasibility is to determine whether or not a project is worth doing. The key considerations involved in feasibility analysis are:

Technical feasibility: This is concerned with specifying the equipment and software that will successfully satisfy the user requirements. The technology needed to carry out this project includes JAVA as the front end (with PHP in an additional version) and MySQL as the back end. The technology required to carry out this project is easily available and affordable; hence the project is technically feasible.

Behavioral feasibility: An online recruitment system helps students to view and apply online for a job even when they are at home. Companies can also post jobs and select candidates online. On the other hand, the admin can control the full system securely. Hence the project is behaviorally feasible.

CHAPTER 2
SYSTEM REQUIREMENTS

2.1 HARDWARE REQUIREMENTS

The project will run on any machine meeting the following minimum requirements:

Processor: Intel Pentium III or higher
Main Memory: 512 MB or more
Hard Disk Capacity: 80 GB or more
Monitor: Color display
Keyboard: Any Windows-supported keyboard
Mouse: Any standard mouse

2.2 SOFTWARE REQUIREMENTS
The project has the following software requirements:

Operating System: Windows XP / 7 / 8
Tools Used: NetBeans IDE, Notepad++, XAMPP
Database Used: MySQL
Programming/Scripting Languages Used: JAVA (JSP), HTML
Server Used: Apache Tomcat / GlassFish

2.3 INPUT/OUTPUT REQUIREMENTS

Input Devices: Keyboard and Mouse
Output Device: Color Monitor

CHAPTER 3
MODULE DESCRIPTION

3.1 MODULE DESCRIPTION

There are basically three main modules in the Sunder Deep Recruitment System:

Administration Module
Companies Module
Student Module

Description of modules:
1. ADMIN MODULE
The admin can approve users to access the system, browse all the registered students and companies, browse the jobs posted by companies, and delete old job posts. In short, the admin panel can control the full system and the activities that go on within it.
2. COMPANIES MODULE
Companies can create a new job post and browse the basic and academic information of the students who applied. A company can also recruit candidates on the basis of the interview and can browse the list of selected candidates for the corresponding job post.
3. STUDENTS MODULE
Students can view their profile, basic information, and marks details, browse the jobs posted by the companies, apply for a job, and browse all the jobs they have previously applied for at the corresponding companies.

CHAPTER 4
PROJECT PHASES

4.1 SOFTWARE REQUIREMENT SPECIFICATION

In software engineering, requirement analysis encompasses those tasks that go into determining the needs or conditions to be met for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. Systematic requirement analysis is also known as Requirements Engineering. It is sometimes referred to loosely by names such as requirement gathering, requirement capture, or requirement specification. Requirement analysis is critical to the success of the development of the project. Requirements must be actionable, measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design.
Requirement analysis is done in order to understand the problem the software system is to solve. The problem could be automating an existing manual process, developing a new automated system, or a combination of the two. The emphasis in requirement analysis is on identifying what is needed from the system, not how the system will achieve its goals. There are at least two parties involved in software development: a client and a developer. The developer has to develop a system to satisfy the client's needs. The developer does not understand the client's problem domain, and the client does not understand the issues involved in the software system. This causes a communication gap, which has to be adequately bridged during requirement analysis.

4.2 System Analysis:

The primary goal of the system analyst is to improve the efficiency of the existing system. For that, the study and specification of the requirements is very essential. For the development of the new system, a preliminary survey of the existing system will be conducted. An investigation is done to determine whether upgrading the system into an application program could solve the problems and eradicate the inefficiency of the existing system.
Feasibility Study:
The initial investigation points to the question of whether the project is feasible. A feasibility study is conducted to identify the best system that meets all the requirements. This includes an identification description, an evaluation of the proposed systems, and selection of the best system for the job. The requirements of the system are specified with a set of constraints such as the system objectives and the description of the outputs. It is then the duty of the analyst to evaluate the feasibility of the proposed system to generate the above results. Three key factors are to be considered during the feasibility study.
Operational Feasibility:
An estimate should be made to determine how much effort and care will go into developing the system, including the training to be given to the users. Usually, people are reluctant to changes that affect their routine. Computerization will certainly affect turnover, transfers, and employee job status. Hence an additional effort is to be made to train and educate the users on the new way of working with the system.
Technical Feasibility:
The main consideration is the study of the available resources of the organization where the software is to be implemented. Here the system analyst evaluates the technical merits of the system, giving emphasis to performance, reliability, maintainability, and productivity.
Taking this into consideration before developing the proposed system, the resource availability of the organization was studied. The organization has immense computer facilities equipped with sophisticated machines and software; hence the project is technically feasible.
Economic Feasibility:
Economic feasibility is the most important and frequently used method for evaluating the effectiveness of the proposed system. It is very essential because the main goal of the proposed system is to achieve an economically better result along with increased efficiency. A cost-benefit analysis is usually performed for this purpose. It is the comparative study of the cost versus the benefits and savings that are expected from the proposed system. Since the organization is well equipped with the required hardware, the project was found to be economically feasible.

4.3 System Design:

System design is the solution to the creation of a new system. This phase is composed of several steps. It focuses on the detailed implementation of the feasible system and emphasizes translating design specifications into performance specifications. System design has two phases of development: logical and physical design.

During the logical design phase the analyst describes inputs (sources), outputs (destinations), databases (data stores), and procedures (data flows), all in a format that meets the user's requirements. The analyst also specifies the user's needs at a level that virtually determines the information flow into and out of the system and the data resources. Here the logical design is done through data flow diagrams and database design.
The logical design is followed by the physical design, or coding. Physical design produces the working system by defining the design specifications, which tell the programmers exactly what the candidate system must do. The programmers write the necessary programs that accept input from the user, perform the necessary processing on the accepted data, and produce the required report on hard copy or display it on the screen.
Logical Design:
The logical design of an information system shows the major features and how they are related to one another. The first step of system design is to design the logical design elements. This is the most creative and challenging phase, and an important one too. The design of the proposed system produces the details of how the system will meet the requirements identified during system analysis; that is, in the design phase we have to find out how to solve the difficulties faced by the existing system. The logical design of the proposed system should include the details of how the solutions can be implemented. It also specifies how the database is to be built for storing and retrieving data, what kind of reports are to be created, and what inputs are to be given to the system. The logical design includes input design, output design, database design, and physical design.
Input Design:
The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, and those steps necessary to put transaction data into a usable form for processing data entry. The activity of putting data into the computer for processing can be achieved by instructing the computer to read data from a written or printed document, or it can occur by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple.
The system needs data regarding the asset items, depreciation rates, asset transfer, and physical verification for various validation, checking, calculation, and report-generation tasks. An error-raising method is also included in the software, which helps to raise an error message when a wrong entry is made. So in input design the following things are considered:
What data should be given as input?
How should the data be arranged or coded?
The dialogue to guide the operating personnel in providing input.
Methods for preparing input validations and steps to follow when errors occur.
The samples of screen layout are given in the appendix.
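Following the input-design considerations above, a small validation routine can reject bad entries before they reach the database, in the spirit of the error-raising method described. This is only an illustrative sketch: the field rules and method names are assumptions, not the project's actual form fields.

```java
// Illustrative input validation: wrong entries produce an error
// message instead of being accepted. The rules below are assumed.
public class InputValidation {
    // e.g. a student e-mail must look like name@domain
    public static boolean isValidEmail(String email) {
        return email != null && email.matches("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    }

    // e.g. semester marks must lie between 0 and 100
    public static String checkMarks(int marks) {
        if (marks < 0 || marks > 100) {
            return "Error: marks must be between 0 and 100";  // raised error message
        }
        return "OK";
    }
}
```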
Output Design:
Computer output is the most important and direct information source for the user. Output design is a process that involves designing the necessary outputs, in the form of reports, that should be given to the users according to their requirements. Efficient, intelligible output design improves the system's relationship with the user and helps in decision making. Since the reports are directly referred to by the management for taking decisions and drawing conclusions, they must be designed with utmost care, and the details in the reports must be simple, descriptive, and clear to the user. So while designing output the following things are to be considered:
Determine what information to present.
Arrange the presentation of information in an acceptable format.
Decide how to distribute the output to the intended recipients.
Depending on the nature and future use of the output required, it can be displayed on the monitor for immediate needs or obtained as hardcopy. The options for the output reports are given in the appendix.

Physical Design:
The process of developing the program software is referred to as physical design. We have to design the process by identifying the reports and the other outputs the system will produce. Coding the program for each module with its logic is performed in this step. Proper software specification is also done in this step.
Modular Design:
A software system is always divided into several subsystems, which makes it easier to develop. A software system that is structured into several subsystems is easier to develop and test. The different subsystems are known as modules, and the process of dividing an entire system into subsystems is known as modularization or decomposition.
A system cannot be decomposed into subsystems in an arbitrary way. There must be some logical boundary that facilitates the separation of each module. The separation must be simple and yet effective, so that development is not affected.
The system under consideration has been divided into several modules taking the above-mentioned criteria into consideration. The different modules are:
1. ADMIN MODULE
2. COMPANY MODULE

3. STUDENT MODULE

CHAPTER 5
SYSTEM DESIGN AND DEVELOPMENT
5.1 DESIGN DOCUMENT

The entire system is projected with a physical diagram which specifies the actual storage parameters that are physically necessary to be stored on the disk. The overall system's existential idea is derived from this diagram.

The context-level DFD is provided to give an idea of the functional inputs and outputs that are achieved through the system. It depicts the input and output standards at the high level of the system's existence.
5.1.1 DATA FLOW DIAGRAMS
Data flows are data structures in motion, while data stores are data structures at rest. Data flows are paths or pipelines along which data structures travel, whereas data stores are places where data structures are kept until needed. Hence it is possible that a data flow and a data store would be made up of the same data structure.
The data flow diagram is a very handy tool for the system analyst because it gives the analyst an overall picture of the system; it is a diagrammatic approach.
A DFD is a pictorial representation of the path which data takes, from its initial interaction with the existing system until it completes any interaction. The diagram describes the logical data flows, dealing with the movement of any physical items. The DFD also gives insight into the data that is used in the system, i.e., who actually uses it and where it is temporarily stored.
A DFD does not show a sequence of steps. A DFD only shows what the different processes in a system are and what data flows between them.
The following are some DFD symbols used in the project:
The following are some DFD symbols used in the project

External entity: A source or destination of data outside the boundary of the system.

Process: A transformation of information that resides within the bounds of the system to be modeled.

Data store: A repository of data that is stored for use by one or more processes; it may be as simple as a buffer or a queue, or as complex as a relational database.

Fig No 5.1 DFD symbols
Steps to Construct Data Flow Diagrams:
Four steps are commonly used to construct a DFD:
Processes should be named and numbered for easy reference. Each name should be representative of the process.
The direction of flow is from top to bottom and from left to right.
When a process is exploded into lower-level details, the details are numbered.
The names of data stores, sources, and destinations are written in capital letters.

Rules for Constructing a Data Flow Diagram:
Arrows should not cross each other.
Squares, circles, and files must bear names.
Decomposed data flow squares and circles can have the same names.
Choose meaningful names for data flows.
Draw all data flows around the outside of the diagram.


Fig 5.2 0-level DFD

Fig 5.3 1-level DFD for user (student or company) registration

Fig 5.4 DFD for student selection
[Figure: the Recruitment Process takes a student's job application as input and adds the student to Selected_cand if selected; otherwise the student is marked not selected.]

5.1.2 ER DIAGRAMS

In software engineering, an entity-relationship model (ER model) is a data model for describing the data or information aspects of a business domain or its process requirements, in an abstract way that lends itself to ultimately being implemented in a database, such as a relational database. The main components of ER models are entities (things) and the relationships that can exist among them.
Entity-relationship modeling was developed by Peter Chen and published in a 1976 paper.
However, variants of the idea existed previously, and have been devised subsequently such as
super type and subtype data entities and commonality relationships.
An entity may be defined as a thing which is recognized as being capable of an independent
existence and which can be uniquely identified. An entity is an abstraction from the complexities
of a domain. When we speak of an entity, we normally speak of some aspect of the real world
which can be distinguished from other aspects of the real world.
An entity may be a physical object such as a house or a car, an event such as a house sale or a car
service, or a concept such as a customer transaction or order. Although the term entity is the one
most commonly used, following Chen we should really distinguish between an entity and an
entity-type. An entity-type is a category. An entity, strictly speaking, is an instance of a given
entity-type. There are usually many instances of an entity-type. Because the term entity-type is
somewhat cumbersome, most people tend to use the term entity as a synonym for this term.
Entities can be thought of as nouns. Examples: a computer, an employee, a song, a mathematical
theorem.
A relationship captures how entities are related to one another. Relationships can be thought of as
verbs, linking two or more nouns. Examples: an owns relationship between a company and a
computer, a supervises relationship between an employee and a department, a performs
relationship between an artist and a song, a proved relationship between a mathematician and a
theorem.
Every entity (unless it is a weak entity) must have a minimal set of uniquely identifying attributes, which is called the entity's primary key. Certain cardinality constraints on relationship sets may be indicated as well.
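To tie these ER concepts back to the project, two entity-types and one relationship can be sketched as plain Java types. The field names are illustrative assumptions; the actual attributes are those shown in the ER diagram:

```java
// Sketch of two entity-types and a relationship from the ER model:
// each entity has a primary-key attribute that uniquely identifies it,
// and Applies links a Student instance to a Company instance.
public class ErSketch {
    record Student(String rollNo, String name) {}        // rollNo acts as primary key
    record Company(String id, String name) {}            // id acts as primary key
    record Applies(Student student, Company company) {}  // relationship instance

    public static Applies apply(Student s, Company c) {
        return new Applies(s, c);
    }
}
```

Each record instance corresponds to one instance of its entity-type; the Applies record plays the role of the verb linking the two nouns, as described above.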

Fig 5.5 ER Diagram
[Figure: ER diagram of the Sunder Deep Recruitment System. The Student entity (Roll no, Sname, DOB, department, email, contact, address) is linked to the Company entity (ID, Cname, website, contact, address) by an apply/recruit relationship; the Admin entity is linked to the system through approval of users, browse users, and delete jobs relationships.]
5.2 USE CASE DIAGRAM
Fig 5.6 Use Case Diagram
[Figure: use case diagram with actors Admin, Student, Company, and Guest, and use cases View Placed Students, New Job Post, Recruitment, Apply for Job, and Browse Applied Jobs.]

CHAPTER 6
TABLES USED

6.1 There are 10 tables created in the database (DB name: campus), namely:
1. admin
2. candidate_info
3. s_register
4. c_register
5. s_info
6. c_info
7. selected_cand
8. marks
9. c_jobpost
10. notification

Detailed description of tables:

1. admin

2. candidate_info


3. s_register

4. c_register

5. s_info


6. c_info

7. selected_cand

8. marks


9. c_jobpost

10. notification
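As an illustration of the kind of computation the marks table supports (the aggregate percentage that companies see for each student), here is a minimal sketch. The column semantics are assumed, since only the table names are given above:

```java
// Sketch of the aggregate-percentage computation backed by the marks
// table: average the percentage obtained in each semester.
public class MarksSketch {
    public static double aggregatePercentage(double[] semesterPercentages) {
        double sum = 0;
        for (double p : semesterPercentages) {
            sum += p;  // accumulate each semester's percentage
        }
        return sum / semesterPercentages.length;
    }
}
```

In the real system this value would be computed from rows stored in the marks table and shown alongside the student's basic information.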


CHAPTER 7
OVERVIEW OF TECHNOLOGY USED

7.1 About J2EE & MySQL


JAVA
Java is a small, simple, safe, object-oriented, interpreted or dynamically optimized, byte-coded, architecture-neutral, garbage-collected, multithreaded programming language with strongly typed exception handling, for writing distributed and dynamically extensible programs.
Java is an object-oriented programming language. Java is a high-level, third-generation language like C, FORTRAN, Smalltalk, Perl, and many others. You can use Java to write computer applications that crunch numbers, process words, play games, store data, or do any of the thousands of other things computer software can do.
Special programs called applets can be downloaded from the internet and played safely within a web browser. Java supports this kind of application, and the following features make it one of the best programming languages:
It is simple and object-oriented.
It helps to create user-friendly interfaces.
It is very dynamic.
It supports multithreading.
It is platform independent.
It is highly secure and robust.
It supports internet programming.
Java is a programming language originally developed by Sun Microsystems and released in
1995 as a core component of Sun's Java platform. The language derives much of its syntax from
C and C++ but has a simpler object model and fewer low-level facilities. Java applications are
typically compiled to byte code which can run on any Java virtual machine (JVM) regardless of
computer architecture.
The original and reference implementation Java compilers, virtual machines, and class libraries
were developed by Sun from 1995. As of May 2007, in compliance with the specifications of the
Java Community Process, Sun made available most of their Java technologies as free software
under the GNU General Public License. Others have also developed alternative implementations
of these Sun technologies, such as the GNU Compiler for Java and GNU Classpath.
The Java language was created by James Gosling in June 1991 for use in a set top box project.
The language was initially called Oak, after an oak tree that stood outside Gosling's office - and also went by the
name Green - and ended up later being renamed to Java, from a list of random words. Gosling's
goals were to implement a virtual machine and a language that had a familiar C/C++ style of
notation.
Primary goals
There were five primary goals in the creation of the Java language:
1. It should use the object-oriented programming methodology.
2. It should allow the same program to be executed on multiple operating systems.
3. It should contain built-in support for using computer networks.
4. It should be designed to execute code from remote sources securely.
5. It should be easy to use by selecting what were considered the good parts of other object-oriented languages.
The Java platform is the name for a bundle of related programs, or platform, from Sun which
allow for developing and running programs written in the Java programming language. The
platform is not specific to any one processor or operating system, but rather an execution engine
(called a virtual machine) and a compiler with a set of standard libraries which are implemented
for various hardware and operating systems so that Java programs can run identically on all of
them.
Different "editions" of the platform are available, including:
Java ME (Micro Edition): Specifies several different sets of libraries (known as profiles) for
devices which are sufficiently limited that supplying the full set of Java libraries would
take up unacceptably large amounts of storage.
Java SE (Standard Edition): For general purpose use on desktop PCs, servers and similar
devices.
Java EE (Enterprise Edition): Java SE plus various APIs useful for multi-tier client-server
enterprise applications.
The Java Platform consists of several programs, each of which provides a distinct portion of
its overall capabilities. For example, the Java compiler, which converts Java source code into
Java bytecode (an intermediate language for the Java Virtual Machine (JVM)), is provided as
part of the Java Development Kit (JDK). The sophisticated Java Runtime Environment
(JRE), complementing the JVM with a just-in-time (JIT) compiler, converts intermediate bytecode into native machine code on the fly. Also supplied
are extensive libraries (pre-compiled into Java bytecode) containing reusable code, as well as
numerous ways for Java applications to be deployed, including being embedded in a web
page as an applet. There are several other components, some available only in certain
editions.
The essential components in the platform are the Java language compiler, the libraries, and
the runtime environment in which Java intermediate bytecode "executes" according to the
rules laid out in the virtual machine specification.
Java Virtual Machine
The heart of the Java Platform is the concept of a "virtual machine" that executes Java bytecode
programs. This bytecode is the same no matter what hardware or operating system the program is
running under. There is a JIT compiler within the Java Virtual Machine, or JVM. The JIT
compiler translates the Java bytecode into native processor instructions at run-time and caches
the native code in memory during execution.
The use of bytecode as an intermediate language permits Java programs to run on any platform
that has a virtual machine available. The use of a JIT compiler means that Java applications, after
a short delay during loading and once they have "warmed up" by being all or mostly JIT-compiled, tend to run about as fast as native programs. Since JRE version 1.2, Sun's JVM
implementation has included a just-in-time compiler instead of an interpreter.
Although Java programs are platform independent, the code of the Java Virtual Machine (JVM) that executes these programs is not. Every operating system has its own JVM.
Class libraries
In most modern operating systems, a large body of reusable code is provided to simplify the
programmer's job. This code is typically provided as a set of dynamically loadable libraries that
applications can call at runtime. Because the Java Platform is not dependent on any specific
operating system, applications cannot rely on any of the existing libraries. Instead, the Java
Platform provides a comprehensive set of standard class libraries, containing much of the same
reusable functions commonly found in modern operating systems.
The Java class libraries serve three purposes within the Java Platform. Like other standard code
libraries, they provide the programmer a well-known set of functions to perform common tasks,
such as maintaining lists of items or performing complex string parsing. In addition, the class
libraries provide an abstract interface to tasks that would normally depend heavily on the
hardware and operating system. Tasks such as network access and file access are often heavily
dependent on the native capabilities of the platform. The Java java.net and java.io libraries
implement the required native code internally, then provide a standard interface for the Java
applications to perform those tasks. Finally, when some underlying platform does not support all of the features a Java application expects, the class libraries can either emulate those features
using whatever is available, or at least provide a consistent way to check for the presence of a
specific feature.
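As a small concrete illustration of these platform-abstracting libraries, the standard file APIs (java.nio.file, a newer sibling of java.io) behave identically on every operating system. The class name and temp-file prefix below are arbitrary:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The standard library hides platform differences: this same code
// writes and reads a file identically on Windows, Linux, or macOS,
// with no OS-specific calls.
public class FileDemo {
    public static String roundTrip(String text) {
        try {
            Path tmp = Files.createTempFile("demo", ".txt");  // portable temp file
            Files.writeString(tmp, text);
            String back = Files.readString(tmp);
            Files.delete(tmp);
            return back;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The application never needs to know where the platform keeps temporary files or how its native file-system calls work; the class library provides the consistent interface described above.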
Platform independence
One characteristic, platform independence, means that programs written in the Java language
must run similarly on any supported hardware/operating-system platform. One should be able to
write a program once, compile it once, and run it anywhere.
This is achieved by most Java compilers by compiling the Java language code halfway, to Java
bytecode: simplified machine instructions specific to the Java platform. The code is then run
on a virtual machine (VM), a program written in native code on the host hardware that interprets
and executes generic Java bytecode. (In some JVM versions, bytecode can also be compiled to
native code, either before or during program execution, resulting in faster execution.) Further,
standardized libraries are provided to allow access to features of the host machines (such as
graphics, threading and networking) in unified ways. Note that, although there is an explicit
compiling stage, at some point, the Java bytecode is interpreted or converted to native machine
code by the JIT compiler.
The first implementations of the language used an interpreted virtual machine to achieve
portability. These implementations produced programs that ran more slowly than programs
compiled to native executable, for instance written in C or C++, so the language suffered a
reputation for poor performance. More recent JVM implementations produce programs that run
significantly faster than before, using multiple techniques.
One technique, known as just-in-time compilation (JIT), translates the Java bytecode into native
code at the time that the program is run, which results in a program that executes faster than
interpreted code but also incurs compilation overhead during execution. More sophisticated VMs
use dynamic recompilation, in which the VM can analyze the behavior of the running program
and selectively recompile and optimize critical parts of the program. Dynamic recompilation can
achieve optimizations superior to static compilation because the dynamic compiler can base
optimizations on knowledge about the runtime environment and the set of loaded classes, and
can identify the hot spots (parts of the program, often inner loops, that take up the most
execution time). JIT compilation and dynamic recompilation allow Java programs to take
advantage of the speed of native code without losing portability.
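The warm-up effect described above can be observed with a small, self-contained sketch: the same hot loop is timed repeatedly, and later rounds typically run faster once the JIT compiler has translated it to native code. The loop body and iteration counts are arbitrary, and actual timings vary by JVM and machine:

```java
public class WarmUpDemo {
    // A deterministic hot loop: the JIT will typically compile this
    // after it has been executed enough times.
    static long hotLoop(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * 31L % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 5; round++) {
            long t0 = System.nanoTime();
            long result = hotLoop(5_000_000);
            long t1 = System.nanoTime();
            // Later rounds are usually faster than round 0 on a JIT-enabled JVM.
            System.out.println("round " + round + ": "
                    + (t1 - t0) / 1_000_000.0 + " ms (result " + result + ")");
        }
    }
}
```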
Another technique, commonly known as static compilation, is to compile directly into native
code like a more traditional compiler. Static Java compilers, such as GCJ, translate the Java
language code to native object code, removing the intermediate bytecode stage. This achieves
good performance compared to interpretation, but at the expense of portability; the output of
these compilers can only be run on a single architecture. Some see avoiding the VM in this

manner as defeating the point of developing in Java; however it can be useful to provide both a
generic bytecode version, as well as an optimized native code version of an application.
Automatic memory management
One of the ideas behind Java's automatic memory management model is that programmers be
spared the burden of having to perform manual memory management. In some languages the
programmer allocates memory for the creation of objects stored on the heap and the
responsibility of later deallocating that memory also resides with the programmer. If the
programmer forgets to deallocate memory or writes code that fails to do so, a memory leak
occurs and the program can consume an arbitrarily large amount of memory. Additionally, if the
program attempts to deallocate the region of memory more than once, the result is undefined and
the program may become unstable and may crash. Finally, in non-garbage collected
environments, there is a certain degree of overhead and complexity of user-code to track and
finalize allocations. Often developers may box themselves into certain designs to provide
reasonable assurances that memory leaks will not occur.
In Java, this potential problem is avoided by automatic garbage collection. The programmer
determines when objects are created, and the Java runtime is responsible for managing the
object's lifecycle. The program or other objects can reference an object by holding a reference to
it (which, from a low-level point of view, is its address on the heap). When no references to an
object remain, the Java garbage collector automatically deletes the unreachable object, freeing
memory and preventing a memory leak. Memory leaks may still occur if a programmer's code
holds a reference to an object that is no longer needed; in other words, they can still occur but at
higher conceptual levels.
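The reachability rule above can be sketched with a WeakReference, which lets code observe whether the collector has reclaimed an object. Note that System.gc() is only a hint, so the final outcome is not guaranteed on every JVM:

```java
import java.lang.ref.WeakReference;

public class GcDemo {
    public static void main(String[] args) {
        Object obj = new Object();                     // strongly reachable
        WeakReference<Object> ref = new WeakReference<>(obj);
        System.out.println(ref.get() != null);         // prints: true (still reachable)

        obj = null;                                    // drop the last strong reference
        System.gc();                                   // request (not command) a collection

        // After a collection runs, the weakly reachable object is typically
        // reclaimed and the weak reference is cleared.
        System.out.println(ref.get() == null);
    }
}
```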
The use of garbage collection in a language can also affect programming paradigms. If, for
example, the developer assumes that the cost of memory allocation/recollection is low, they may
choose to more freely construct objects instead of pre-initializing, holding and reusing them.
With the small cost of potential performance penalties (inner-loop construction of large/complex
objects), this facilitates thread-isolation (no need to synchronize as different threads work on
different object instances) and data-hiding. The use of transient immutable value-objects
minimizes side-effect programming.
Comparing Java and C++, it is possible in C++ to implement similar functionality (for example,
a memory management model for specific classes can be designed in C++ to improve speed and
lower memory fragmentation considerably), with the possible cost of adding comparable runtime
overhead to that of Java's garbage collector, and of added development time and application
complexity if one favors manual implementation over using an existing third-party library. In
Java, garbage collection is built-in and virtually invisible to the developer. That is, developers
may have no notion of when garbage collection will take place as it may not necessarily correlate
with any actions being explicitly performed by the code they write. Depending on intended
application, this can be beneficial or disadvantageous: the programmer is freed from performing
low-level tasks, but at the same time loses
the option of writing lower level code. Additionally, the garbage collection capability demands
some attention to tuning the JVM, as large heaps will cause apparently random stalls in
performance.
Java does not support pointer arithmetic as is supported in, for example, C++. This is because the
garbage collector may relocate referenced objects, invalidating such pointers. Another reason
that Java forbids this is that type safety and security can no longer be guaranteed if arbitrary
manipulation of pointers is allowed.
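In place of pointer arithmetic, every Java array access is bounds-checked at runtime, so an invalid access raises an exception instead of reading arbitrary memory. A minimal sketch:

```java
public class BoundsDemo {
    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        try {
            // No pointer arithmetic: the runtime checks every index.
            int x = a[3];
            System.out.println(x);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("out of bounds access rejected"); // prints this line
        }
    }
}
```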
Performance
Java's performance has improved substantially since the early versions, and performance of JIT
compilers relative to native compilers has in some tests been shown to be quite similar. The
performance of the compilers does not necessarily indicate the performance of the compiled
code; only careful testing can reveal the true performance issues in any system.
Java Runtime Environment
The Java Runtime Environment, or JRE, is the software required to run any application deployed
on the Java Platform. End-users commonly use a JRE in software packages and Web browser
plugins. Sun also distributes a superset of the JRE called the Java 2 SDK (more commonly
known as the JDK), which includes development tools such as the Java compiler, Javadoc, Jar
and debugger.
One of the unique advantages of the concept of a runtime engine is that errors (exceptions)
should not 'crash' the system. Moreover, in runtime engine environments such as Java there exist
tools that attach to the runtime engine and every time that an exception of interest occurs they
record debugging information that existed in memory at the time the exception was thrown
(stack and heap values). These Automated Exception Handling tools provide 'root-cause'
information for exceptions in Java programs that run in production, testing or development
environments.
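As a small illustration of the point above, a runtime error in Java surfaces as a catchable exception rather than crashing the process, and the stack trace records where it was thrown:

```java
public class ExceptionDemo {
    // Integer division throws ArithmeticException when the divisor is zero.
    static int divide(int a, int b) {
        return a / b;
    }

    public static void main(String[] args) {
        try {
            divide(1, 0);
        } catch (ArithmeticException e) {
            // The runtime raises an exception instead of crashing the process;
            // the stack trace captures where it was thrown.
            System.out.println("caught: " + e.getMessage());
            e.printStackTrace();
        }
    }
}
```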

7.2 OVERVIEW OF J2EE


Today more and more developers want to write distributed transactional applications for the
enterprise and leverage the speed, security and reliability of server-side technology. J2EE is a
platform-independent, Java-centric environment from Sun for developing, building and deploying
web-based enterprise applications online. The J2EE platform consists of a set of services, APIs
and protocols that provide functionality for developing multitier web-based applications.
At the client side tier, J2EE supports pure HTML as well as Java applets or applications. It relies
on JSP and servlet code to create HTML or other formatted data for the client. EJBs provide
another layer where the platform's logic is stored. An EJB server provides functions such as
threading, concurrency, security and memory management. To reduce costs and fast-track
enterprise application design and development, the Java 2 Platform, Enterprise Edition (J2EE)
technology provides a component-based approach to the design, development, assembly and
deployment of enterprise applications. The J2EE platform offers a multitiered
distributed application model, the ability to reuse components, integrated Extensible Markup
Language (XML) based data interchange, a unified security model, and flexible transaction
control.
DISTRIBUTED MULTI TIERED APPLICATIONS
The J2EE platform uses a multi-tiered distributed application model. Application logic is divided
into components according to function, and the various application components that make up a
J2EE application are installed on different machines depending on the tier in the multi-tiered
J2EE environment to which the application component belongs. Fig 7.1 shows two multi-tiered
J2EE applications divided into the tiers described in the following list.
J2EE COMPONENTS:
J2EE applications are made up of components. A J2EE component is a self-contained functional
software unit that is assembled into a J2EE application with its related classes and files and that
communicates with other components. The J2EE specification defines the following J2EE
components: Application clients and applets are components that run on the client.
Java Servlet and Java Server Pages (JSP) technology components are Web components that
run on the server.
Enterprise Java Beans (EJB) components are business components that run on the server.
J2EE components are written in the Java programming language and are compiled in the
same way as any program in the language. The difference between J2EE components and
standard Java classes is that J2EE components are assembled into a J2EE application, verified
to be well formed and in compliance with the J2EE specification, and managed by the J2EE server.

Fig 7.1 J2EE MODEL


J2EE CONTAINERS
Normally, thin-client multi-tiered applications are hard to write because they involve many lines
of intricate code to handle transaction and state management, multithreading, resource pooling,
and other complex low-level details. The component-based and platform-independent J2EE
architecture makes J2EE applications easy to write because business logic is organized into
reusable components. In addition, the J2EE server provides underlying services in the form of a
container for every component type. Because you do not have to develop these services yourself,
you are free to concentrate on solving the business problem at hand.
Containers provide the runtime support for J2EE application components.
Containers provide a federated view of the underlying J2EE APIs to the application components.
J2EE application components never interact directly with other J2EE application components.
They use the protocols and methods of the container for interacting with each other and with
platform services. Interposing a container between the application components and the J2EE
services allows the container to transparently inject the services defined by the components
deployment descriptors, such as declarative transaction management, security checks, resource
29

pooling, and state management. A typical J2EE product will provide a container for each
application component type: application client container, applet container, web component
container, and enterprise bean container.

Fig 7.2

A J2EE server is the runtime portion of a J2EE product; it provides EJB and Web containers. The
component-based and platform-independent J2EE architecture makes J2EE applications easy to
write because business logic is organized into reusable components and the J2EE server provides
underlying services in the form of a container for every component type.
CONTAINERS AND SERVICES
Components are installed in their containers during deployment. A container is the interface
between a component and the low-level platform-specific functionality that supports the
component. Before a web, enterprise bean, or application client component can be executed, it
must be assembled into a J2EE application and deployed into its container. The assembly process
involves specifying container settings for each component in the J2EE application and for the
J2EE application itself. Container settings customize the underlying support provided by the
J2EE server, which includes services such as security, transaction management, Java Naming and
Directory Interface (JNDI) lookups, and remote connectivity. Here are some of the highlights:
The J2EE security model lets you configure a web component or enterprise bean so system
resources are accessed only by authorized users.

The J2EE transaction model lets you specify relationships among methods that make up a
single transaction so all methods in one transaction are treated as a single unit.
JNDI lookup services provide a unified interface to multiple naming and directory services
in the enterprise so application components can access naming and directory services.
The J2EE remote connectivity model manages low-level communications between clients
and enterprise beans. After an enterprise bean is created, a client invokes methods on it as if it
were in the same virtual machine.
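Declarative services such as the security model above are configured in a component's deployment descriptor rather than in code. The fragment below is a hypothetical excerpt from a web.xml descriptor (the role name, resource name, and URL pattern are invented for illustration) that restricts a set of URLs to an authorized role:

```xml
<!-- Hypothetical web.xml fragment: only users in the "admin" role
     may reach URLs under /admin/. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Admin pages</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
</login-config>
```

The container enforces this constraint itself; the web component contains no authentication code.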
J2EE PLATFORM ROLES
The J2EE platform also defines a number of distinct roles that are performed during the
application development and deployment life cycle:
The product provider designs and offers the J2EE platform, APIs, and other features that are
defined in the J2EE specification for purchase.
The tool provider offers tools that are used for the development and packaging of
application components as part of the J2EE specifications.
The application component provider creates Web components, enterprise beans, applets, or
application clients to use in J2EE applications.
The application assembler takes a set of components that are developed by component
providers and assembles them in the form of an enterprise archive (EAR) file.
The deployer is responsible for deploying an enterprise application into a specific
operational environment that corresponds to a J2EE platform product.
The system administrator is responsible for the operational environment in which the
application runs.

7.3 J2EE BENEFITS


The J2EE specification provides customers a standard which can be used to ensure investment
protection when purchasing or developing applications. Comprehensive, independent
Compatibility Test Suites ensure vendor compliance with J2EE standards.
Some benefits of deploying to J2EE architecture include:
A simplified architecture that is based on standard components, services, and clients. The
architecture maximizes the write-once, run-anywhere Java technology.
Services providing integration with existing systems, including Java Database
Connectivity (JDBC); Java Message Service (JMS); Java Connector Architecture (JCA); Java
Interface Definition Language (Java IDL); the Java Mail API; and Java Transaction API (JTA
and JTS) for reliable business transactions.
Scalability to meet demand, by distributing containers across multiple systems and using
database connection pooling, for example.

A better choice of application development tools and components from vendors providing
standard solutions.

7.4 NetBeans IDE


Setting up the Project
To create an IDE project:
1. Start NetBeans IDE.
2. In the IDE, choose File > New Project (Ctrl-Shift-N), as shown in the figure below.

Fig : 7.3

3. In the New Project wizard, expand the Java category and select Java Application as
shown in the figure below. Then click Next.

Fig 7.4

4. In the Name and Location page of the wizard, do the following (as shown in the figure
below):
In the Project Name field, type HelloWorldApp.
Leave the Use Dedicated Folder for Storing Libraries checkbox unselected. (If you
are using NetBeans IDE 6.0, this option is not available.)
In the Create Main Class field, type helloworldapp.HelloWorldApp.
Leave the Set as Main Project checkbox selected.

Fig 7.5
5. Click Finish.
The project is created and opened in the IDE. You should see the following components:
The Projects window, which contains a tree view of the components of the project, including
source files, libraries that your code depends on, and so on.
The Source Editor window with a file called HelloWorldApp open.
The Navigator window, which you can use to quickly navigate between elements within the
selected class.
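The generated HelloWorldApp source is a minimal class like the following sketch. (The wizard places it in package helloworldapp, which is omitted here; the greeting() helper is added only so the behavior is easy to check and is not part of the wizard's output.)

```java
public class HelloWorldApp {
    // Returns the message the program prints.
    static String greeting() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        System.out.println(greeting()); // prints: Hello World!
    }
}
```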

Fig 7.6

Compiling the Source File


To compile your source file, choose Build > Build Main Project (F11) from the IDE's main
menu.
You can view the output of the build process by choosing Window > Output > Output.
The Output window opens and displays output similar to what you see in the following figure.

Fig 7.7

7.5 MYSQL
MySQL ("My Sequel") is the world's second most widely used open-source relational database
management system (RDBMS). It is named after co-founder Michael Widenius's daughter. The
phrase SQL stands for Structured Query Language.
The MySQL development project has made its source code available under the terms of the
GNU General Public License, as well as under a variety of proprietary agreements. MySQL was
owned and sponsored by a single for-profit firm, the Swedish company MySQL AB, now owned
by Oracle Corporation.
MySQL Workbench, the official MySQL GUI, is a free integrated environment developed by
MySQL AB that enables users to graphically administer MySQL databases and visually
design database structures. MySQL Workbench replaces the previous package of software,
MySQL GUI Tools. Similar to other third-party packages, but still considered the authoritative
MySQL front end, MySQL Workbench lets users manage database design & modeling, SQL
development (replacing MySQL Query Browser) and database administration (replacing MySQL
Administrator).

Features
As of April 2009, MySQL offered MySQL 5.1 in two different variants: the open source MySQL
Community Server and the commercial Enterprise Server. MySQL 5.5 is offered under the same
licenses. They have a common code base and include the following features:
A broad subset of ANSI SQL 99, as well as extensions
Cross-platform support
Stored procedures, using a procedural language that closely adheres to SQL/PSM
Updatable views
Information schema
Strict mode (ensures MySQL does not truncate or otherwise modify data to conform to an
underlying data type, when an incompatible value is inserted into that type)
SSL support
Query caching
Sub-SELECTs (i.e. nested SELECTs)
Replication support (i.e. master-master replication and master-slave replication) with one
master per slave and many slaves per master; multi-master replication is provided in MySQL
Cluster, and multi-master support can be added to unclustered configurations using Galera
Cluster
Full-text indexing and searching (initially a MyISAM-only feature; supported by InnoDB since
the release of MySQL 5.6)
Embedded database library
Partitioned tables with pruning of partitions in optimizer
Shared-nothing clustering through MySQL Cluster
Commit grouping, gathering multiple transactions from multiple connections together to
increase the number of commits per second (PostgreSQL has an advanced form of this
functionality)
The developers release monthly versions of the MySQL Server. The sources can be obtained
from MySQL's website or from MySQL's Bazaar repository, both under the GPL license.
Limitations
Like other SQL databases, MySQL does not currently comply with the full SQL standard for
some of the implemented functionality, including foreign key references when using some
storage engines other than the 'standard' InnoDB (or third-party engines which support foreign
keys).

Up until MySQL 5.7, triggers are limited to one per action / timing, meaning that at most one
trigger can be defined to be executed after an INSERT operation, and one before INSERT on the
same table. No triggers can be defined on views.
MySQL, like most other transactional relational databases, is strongly limited by hard disk
performance. This is especially true in terms of write latency. Given the recent appearance of
very affordable consumer-grade solid-state drives with SATA interfaces that offer zero
mechanical latency, a fivefold speedup over even an eight-drive RAID array can be had for a
smaller investment.
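Since the project pairs Java with MySQL, the sketch below shows a typical JDBC access pattern. The database name testdb, the students table, and the credentials are hypothetical; running it against a real server requires the MySQL Connector/J driver on the classpath, and without a reachable server the connection attempt simply falls into the catch block:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MySqlDemo {
    // Hypothetical connection settings; adjust for a real MySQL server.
    static final String URL = "jdbc:mysql://localhost:3306/testdb";

    public static void main(String[] args) {
        try (Connection con = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM students WHERE id = ?")) { // hypothetical table
            ps.setInt(1, 1); // bind the id parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        } catch (SQLException e) {
            // Without a running MySQL server and its JDBC driver on the
            // classpath, the connection attempt fails here instead of crashing.
            System.out.println("could not connect: " + e.getMessage());
        }
    }
}
```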

Fig 7.8
Java Virtual Machine
The heart of the Java Platform is the concept of a "virtual machine" that executes Java bytecode
programs. This bytecode is the same no matter what hardware or operating system the program is
running under. There is a JIT compiler within the Java Virtual Machine, or JVM. The JIT
compiler translates the Java bytecode into native processor instructions at run-time and caches
the native code in memory during execution.
The use of bytecode as an intermediate language permits Java programs to run on any platform
that has a virtual machine available. The use of a JIT compiler means that Java applications, after

38

a short delay during loading and once they have "warmed up" by being all or mostly JITcompiled, tend to run about as fast as native programs. Since JRE version 1.2, Sun's JVM
implementation has included a just-in-time compiler instead of an interpreter.
Although Java programs are Platform Independent, the code of the Java Virtual Machine (JVM)
that execute these programs are not. Every Operating System has its own JVM.
Class libraries
In most modern operating systems, a large body of reusable code is provided to simplify the
programmer's job. This code is typically provided as a set of dynamically loadable libraries that
applications can call at runtime. Because the Java Platform is not dependent on any specific
operating system, applications cannot rely on any of the existing libraries. Instead, the Java
Platform provides a comprehensive set of standard class libraries, containing much of the same
reusable functions commonly found in modern operating systems.
The Java class libraries serve three purposes within the Java Platform. Like other standard code
libraries, they provide the programmer a well-known set of functions to perform common tasks,
such as maintaining lists of items or performing complex string parsing. In addition, the class
libraries provide an abstract interface to tasks that would normally depend heavily on the
hardware and operating system. Tasks such as network access and file access are often heavily
dependent on the native capabilities of the platform. The Java java.net and java.io libraries
implement the required native code internally, then provide a standard interface for the Java
applications to perform those tasks. Finally, when some underlying platform does not support all
of the features a Java application expects, the class libraries can either emulate those features
using whatever is available, or at least provide a consistent way to check for the presence of a
specific feature.
Platform independence
One characteristic, platform independence, means that programs written in the Java language
must run similarly on any supported hardware/operating-system platform. One should be able to
write a program once, compile it once, and run it anywhere.
This is achieved by most Java compilers by compiling the Java language code halfway (to Java
bytecode) simplified machine instructions specific to the Java platform. The code is then run
on a virtual machine (VM), a program written in native code on the host hardware that interprets
and executes generic Java bytecode. (In some JVM versions, bytecode can also be compiled to
native code, either before or during program execution, resulting in faster execution.) Further,
standardized libraries are provided to allow access to features of the host machines (such as
graphics, threading and networking) in unified ways. Note that, although there is an explicit
compiling stage, at some point, the Java bytecode is interpreted or converted to native machine
code by the JIT compiler.
39

The first implementations of the language used an interpreted virtual machine to achieve
portability. These implementations produced programs that ran more slowly than programs
compiled to native executable, for instance written in C or C++, so the language suffered a
reputation for poor performance. More recent JVM implementations produce programs that run
significantly faster than before, using multiple techniques.
One technique, known as just-in-time compilation (JIT), translates the Java bytecode into native
code at the time that the program is run, which results in a program that executes faster than
interpreted code but also incurs compilation overhead during execution. More sophisticated VMs
use dynamic recompilation, in which the VM can analyze the behavior of the running program
and selectively recompile and optimize critical parts of the program. Dynamic recompilation can
achieve optimizations superior to static compilation because the dynamic compiler can base
optimizations on knowledge about the runtime environment and the set of loaded classes, and
can identify the hot spots (parts of the program, often inner loops, that take up the most
execution time). JIT compilation and dynamic recompilation allow Java programs to take
advantage of the speed of native code without losing portability.
Another technique, commonly known as static compilation, is to compile directly into native
code like a more traditional compiler. Static Java compilers, such as GCJ, translate the Java
language code to native object code, removing the intermediate bytecode stage. This achieves
good performance compared to interpretation, but at the expense of portability; the output of
these compilers can only be run on a single architecture. Some see avoiding the VM in this
manner as defeating the point of developing in Java; however it can be useful to provide both a
generic bytecode version, as well as an optimized native code version of an application.
Automatic memory management
One of the ideas behind Java's automatic memory management model is that programmers be
spared the burden of having to perform manual memory management. In some languages the
programmer allocates memory for the creation of objects stored on the heap and the
responsibility of later deal locating that memory also resides with the programmer. If the
programmer forgets to deallocate memory or writes code that fails to do so, a memory leak
occurs and the program can consume an arbitrarily large amount of memory. Additionally, if the
program attempts to deallocate the region of memory more than once, the result is undefined and
the program may become unstable and may crash. Finally, in non-garbage collected
environments, there is a certain degree of overhead and complexity of user-code to track and
finalize allocations. Often developers may box themselves into certain designs to provide
reasonable assurances that memory leaks will not occur.
In Java, this potential problem is avoided by automatic garbage collection. The programmer
determines when objects are created, and the Java runtime is responsible for managing the
object's lifecycle. The program or other objects can reference an object by holding a reference to
it (which, from a low-level point of view, is its address on the heap). When no references to an
40

object remain, the Java garbage collector automatically deletes the unreachable object, freeing
memory and preventing a memory leak. Memory leaks may still occur if a programmer's code
holds a reference to an object that is no longer neededin other words, they can still occur but at
higher conceptual levels.
The use of garbage collection in a language can also affect programming paradigms. If, for
example, the developer assumes that the cost of memory allocation/recollection is low, they may
choose to more freely construct objects instead of pre-initializing, holding and reusing them.
With the small cost of potential performance penalties (inner-loop construction of large/complex
objects), this facilitates thread-isolation (no need to synchronize as different threads work on
different object instances) and data-hiding. The use of transient immutable value-objects
minimizes side-effect programming.
Comparing Java and C++, it is possible in C++ to implement similar functionality (for example,
a memory management model for specific classes can be designed in C++ to improve speed and
lower memory fragmentation considerably), with the possible cost of adding comparable runtime
overhead to that of Java's garbage collector, and of added development time and application
complexity if one favors manual implementation over using an existing third-party library. In
Java, garbage collection is built-in and virtually invisible to the developer. That is, developers
may have no notion of when garbage collection will take place as it may not necessarily correlate
with any actions being explicitly performed by the code they write. Depending on intended
application, this can be beneficial or disadvantageous: the programmer is freed from performing
low-level tasks, but at the same time loses the option of writing lower level code. Additionally,
the garbage collection capability demands some attention to tuning the JVM, as large heaps will
cause apparently random stalls in performance.
Java does not support pointer arithmetic as is supported in, for example, C++. This is because the
garbage collector may relocate referenced objects, invalidating such pointers. Another reason
that Java forbids this is that type safety and security can no longer be guaranteed if arbitrary
manipulation of pointers is allowed.
Performance
Java's performance has improved substantially since the early versions, and performance of JIT
compilers relative to native compilers has in some tests been shown to be quite similar. The
performance of the compilers does not necessarily indicate the performance of the compiled
code; only careful testing can reveal the true performance issues in any system.
Java Runtime Environment
The Java Runtime Environment, or JRE, is the software required to run any application deployed
on the Java Platform. End-users commonly use a JRE in software packages and Web browser
plugins. Sun also distributes a superset of the JRE called the Java 2 SDK (more commonly
41

known as the JDK), which includes development tools such as the Java compiler, Javadoc, Jar
and debugger.
One of the unique advantages of the concept of a runtime engine is that errors (exceptions)
should not 'crash' the system. Moreover, in runtime engine environments such as Java there exist
tools that attach to the runtime engine and every time that an exception of interest occurs they
record debugging information that existed in memory at the time the exception was thrown
(stack and heap values). These Automated Exception Handling tools provide 'root-cause'
information for exceptions in Java programs that run in production, testing or development
environments.
Today more and more developers want to write distributed transactional applications for the enterprise and leverage the speed, security and reliability of server-side technology. J2EE is a platform-independent, Java-centric environment from Sun for developing, building and deploying web-based enterprise applications online. The J2EE platform consists of a set of services, APIs and protocols that provide functionality for developing multitier web-based applications.
At the client tier, J2EE supports pure HTML as well as Java applets or applications. It relies on JSP and servlet code to create HTML or other formatted data for the client. EJBs provide another layer where the platform's logic is stored. An EJB server provides functions such as threading, concurrency, security and memory management. To reduce costs and fast-track enterprise application design and development, Java 2 Platform, Enterprise Edition (J2EE) technology provides a component-based approach to the design, development, assembly and deployment of enterprise applications. It offers a multitiered distributed application model, the ability to reuse components, integrated Extensible Markup Language (XML) based data interchange, a unified security model, and flexible transaction control.
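The servlet API is not part of the plain JDK, so the sketch below uses the JDK's built-in com.sun.net.httpserver instead, purely to illustrate the same idea: server-side Java code rendering HTML for the client tier. The page title and port handling are invented for illustration, not taken from the project.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HtmlServerSketch {
    // Server-side rendering: builds the HTML that a servlet or JSP
    // would normally send back to the browser.
    static String renderPage(String title) {
        return "<html><head><title>" + title + "</title></head>"
             + "<body><h1>" + title + "</h1></body></html>";
    }

    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port; a real J2EE deployment would run
        // inside a web container rather than a hand-rolled server.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = renderPage("Job Opportunities").getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("serving HTML on port " + server.getAddress().getPort());
        server.stop(0); // demo only: shut down immediately
    }
}
```

In a real J2EE application the container supplies the threading, security and lifecycle management that this sketch omits.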

CHAPTER 8
SYSTEM TESTING & IMPLEMENTATION
8.1 SYSTEM TESTING
System testing is a critical aspect of Software Quality Assurance and represents the ultimate review of specification, design and coding. Testing is the process of executing a program with the intent of finding an error. A good test is one that has a high probability of finding an as-yet undiscovered error. The purpose of testing is to identify and correct bugs in the developed system. Nothing is complete without testing; testing is vital to the success of the system.
In code testing, the logic of the developed system is tested. For this, every module of the program is executed to find errors. A specification test examines the specifications stating what the program should do and how it should perform under various conditions.
Unit testing focuses first on the modules of the proposed system to locate errors. This enables the detection of errors in the coding and logic contained within each module alone; errors resulting from interaction between modules are initially avoided. In the unit testing step, each module is checked separately.
System testing does not test the software as a whole, but rather the integration of each module into the system. The primary concern is the compatibility of individual modules. One has to find areas where modules have been designed with different specifications for data lengths, types and data element names.
Testing and validation are the most important steps after the implementation of the developed system. System testing is performed to ensure that there are no errors in the implemented system; the software must be executed several times in order to find errors in the different modules of the system.
Validation refers to the process of using the new software in a live environment, i.e. inside the organization, in order to find errors. The validation phase reveals the failures and bugs in the developed system, and brings out the practical difficulties the system faces when operated in the true environment. By testing the code of the implemented software, the logic of the program can be examined. A specification test is conducted to check whether the program performs as specified under various conditions. Apart from these tests, some special tests are conducted, which are given below:

Peak Load Tests: This determines whether the new system will handle the volume of activities
when the system is at the peak of its processing demand. The test has revealed that the new
software for the agency is capable of handling the demands at the peak time.
Storage Testing: This determines the capacity of the new system to store transaction data on a
disk or on other files. The proposed software has the required storage space available, because of
the use of a number of hard disks.
Performance Time Testing: This test determines the length of the time used by the system to
process transaction data.
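A performance-time test of this kind can be sketched as below; processTransaction is a hypothetical stand-in for the system's real transaction-processing step, not code from the project:

```java
public class PerformanceTimeTest {
    // Hypothetical transaction-processing step standing in for the real system.
    static int processTransaction(int amount) {
        return amount * 2; // placeholder work
    }

    public static void main(String[] args) {
        int n = 100_000;
        long start = System.nanoTime();
        long checksum = 0;
        // Time how long the system takes to process n transactions.
        for (int i = 0; i < n; i++) {
            checksum += processTransaction(i);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(n + " transactions in " + elapsedMs
                + " ms (checksum " + checksum + ")");
    }
}
```

The checksum is printed only to stop the JIT compiler from optimizing the loop away, which would make the measured time meaningless.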
In this phase the developed software is tested. Testing exercises the software to uncover errors and to ensure that the system meets the defined requirements. Testing may be done at four levels:
Unit Level
Module Level
Integration & System
Regression
Unit Testing
A Unit corresponds to a screen /form in the package. Unit testing focuses on verification of the
corresponding class or Screen. This testing includes testing of control paths, interfaces, local data
structures, logical decisions, boundary conditions, and error handling. Unit testing may use Test
Drivers, which are control programs to co-ordinate test case inputs and outputs, and Test stubs,
which replace low-level modules. A stub is a dummy subprogram.
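A minimal sketch of this driver/stub arrangement is shown below; the MarksRepository and EligibilityChecker names are assumed for illustration and are not taken from the project:

```java
// Hypothetical interface to a low-level module (replaced here by a stub).
interface MarksRepository {
    int marksFor(String studentId);
}

// Unit under test: a small piece of placement logic.
class EligibilityChecker {
    private final MarksRepository repo;
    EligibilityChecker(MarksRepository repo) { this.repo = repo; }
    boolean isEligible(String studentId, int cutoff) {
        return repo.marksFor(studentId) >= cutoff;
    }
}

// Test driver: coordinates test-case inputs and checks the outputs.
public class EligibilityCheckerTest {
    public static void main(String[] args) {
        // Stub: a dummy subprogram standing in for the real database module.
        MarksRepository stub = id -> "S1".equals(id) ? 72 : 40;
        EligibilityChecker checker = new EligibilityChecker(stub);
        if (!checker.isEligible("S1", 60)) throw new AssertionError("S1 should pass");
        if (checker.isEligible("S2", 60)) throw new AssertionError("S2 should fail");
        System.out.println("unit tests passed");
    }
}
```

Because the stub replaces the low-level module, the unit can be tested in isolation, exactly as the unit-level description above requires.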
Module Testing
Module testing is done using the test cases prepared earlier; modules are defined during the design phase.

Integration Testing
Integration testing is used to verify the combining of the software modules. It addresses the issues associated with the dual problems of verification and program construction. System testing is then used to verify whether the developed system meets the requirements.
Regression Testing
Each modification to the software impacts unmodified areas, which can introduce serious defects into the software. The process of re-testing after modification, to catch such errors, is known as regression testing.
Installation and Delivery:
Installation and delivery is the process of delivering the developed and tested software to the customer (refer to the support procedures).
Acceptance and Project Closure:
Acceptance is the part of the project by which the customer accepts the product. Once the customer accepts the product, closure of the project is started; this includes metrics collection, PCD, etc.

CHAPTER 9
SCREENSHOTS
HOME PAGE
ADMIN LOGIN
ADMIN HOME/NOTIFICATION
APPROVE STUDENT
APPROVE COMPANY
USER LIST
STUDENTS LIST
COMPANIES LIST
STUDENTS CORNER
STUDENT REGISTRATION
STUDENT LOGIN
STUDENT HOME
MARKS
JOB OPPORTUNITIES
JOB NOTIFICATION
COMPANIES CORNER
COMPANY HOME
NEW JOB POSTING
RECRUITMENT
SELECTED STUDENTS
LIST OF PLACED STUDENTS
CHAPTER 10
LIMITATIONS OF THE PROJECT
The project entitled Sunder Deep Recruitment System is a web-based application for simplifying and automating the recruitment process and for minimizing the work of the college placement cell. Nothing in this world is completely ideal, and no project can solve every issue in its domain. Our project performs some general tasks of the recruitment procedure, but it too is not perfect. Some limitations of the project are:
The project is a web-based application, so it requires a server working 24x7 to perform its functions normally, and it cannot be accessed without an internet connection.
It is designed for a single college and hence cannot be used by other colleges; however, it is possible to extend it to make it usable by other colleges as well.
Only registered students are informed about interviews; others may not be.

BIBLIOGRAPHY
1) Java 7 Black Book.
2) www.w3school.com
3) PHP for the Web, Larry Ullman (for additional reference).
4) Online tutorials from YouTube.
