
CHAPTER-1

INTRODUCTION

1.1 PROJECT DESCRIPTION:

Credit and debit card data theft is one of the earliest forms of cybercrime, and it
remains one of the most common today. Attackers often aim at stealing such customer
data by targeting the Point of Sale (for short, PoS) system, i.e. the point at which a
retailer first acquires customer data. Modern PoS systems are powerful computers
equipped with a card reader and running specialized software. Increasingly often,
user devices are leveraged as input to the PoS. In these scenarios, malware that can
steal card data as soon as it is read by the device has flourished. As such, in
cases where customer and vendor are persistently or intermittently disconnected
from the network, no secure on-line payment is possible. This paper describes
FRoDO, a secure off-line micro-payment solution that is resilient to PoS data
breaches. Our solution improves over up-to-date approaches in terms of flexibility
and security. To the best of our knowledge, FRoDO is the first solution that can
provide secure fully off-line payments while being resilient to all currently known
PoS breaches. In particular, we detail the FRoDO architecture, components, and
protocols. Further, a thorough analysis of FRoDO's functional and security
properties is provided, showing its effectiveness and viability.

Objectives
The objective of the web development is to handle the entire activity of
a website. The software keeps track of all the information about the
entire website. The system contains a database where all the information
is stored safely.

• To save time and resources

The website system will take less time in entering the data,
processing it, and producing its output. Fewer resources will be used, as
no large registers, files, ledgers, pens, or correction fluid will be needed.

• To reduce the number of workers

After the system is computerized, only a single computer
operator will be needed to operate it, whereas currently more than
one worker is required.
• To reduce the space being used
All data will be stored in the computer memory, whereas it is currently
stored in registers and files that are kept in bookshelves or cupboards and
need a large amount of space.
• To reduce the work load
As the new system will be computerized, the database will be
automatically updated at the time of entry. Everything will be
done automatically just by clicking a few buttons. There will be no
need to maintain any files or registers.
• To make it easy to search any record
It will be much easier to find a particular record than to open
huge files and search for a single record in them.
• To edit records and update the database easily
Records will be easily edited and the database will easily be
updated at the time of entering a record.
• To make the system user friendly
The system will be much easier to use and the operator will face no
difficulty.

1.2 COMPANY PROFILE:

About the company:

Company: mPowerGlobal

Formation: 2005

Purpose / Speciality: “We delight customers through better, faster and cost
effective IT solutions.”

Location: India

Origin: Bangalore

Class of company: Private (Non-Government)

1.2.1. Profile of the Company

mPower is a leading software services company established in 2005 by a group of
like-minded technology professionals. Headquartered in Bangalore with offices in
the USA and Malaysia, mPower specializes in Enterprise Web Portal Technology.
With a healthy mix of senior management and technology professionals, mPower
has carved a niche for itself as a leading player in the Open Source
and Web Portal Technology space.

Website: http://www.mpowerglobal.com
Industry: Computer Software
Headquarters: USA
Type: Privately Held
Founded: 2005
Specialties: mPowerGlobal
focuses on your current business workflows, identifies areas for improvement,
and provides consultancy integrating the right combination of software, technology,
training, customization and hand-holding to enable the transformation of your
business processes to achieve your goals faster and better.

LABS PRODUCTS:

We offer a number of platforms and kits to help you speed up and increase
confidence in your product development cycles.

 WEB DEVELOPMENT
Web development is a broad term for the work involved in
developing a web site for the Internet (World Wide Web).
This can include web content development, web design, client-
side/server-side scripting, web server and network security
configuration, and eCommerce development. Web development can
range from developing a simple static single page of plain text to
the most complex web-based internet applications and social network
services.

What we do?
1. Responsive Web Design
Our web developers are experts in building highly interactive and
deeply pleasant full-screen websites that work as flawlessly on
smartphones as they do on desktops or any other device of your
users’ choice.
2. E-Commerce Development
Whether you need an open-source ecommerce platform-based
website, or a next-gen online store designed from the ground up
for your specific business, we have just the expertise you need.
3. Enterprise Web Development
We build large-scale digital solutions for enterprises in the B2B
and B2C segments. The dynamic, interactive, and high-
performance websites we craft create ultimate brand
experiences for enterprises.
4. Support and Maintenance
We offer fast, reliable, and expert management and maintenance
services to support your website. We’ll fix its bugs, enhance its
features, and ensure that it works smoothly.

 ECOMMERCE
The e-commerce industry has a very promising future.
With advancing technology, the methods by which the retail
and e-commerce industry communicates with its customers are
anticipated to experience enormous changes. ATS attempts to
bring consumers and businesses closer through the various IT solutions it
designs for the e-commerce industries.
We are competent in developing advanced IT solutions and web
solutions which will be advantageous in growing your retail
business. We help you in achieving maximum efficiency for your
shopping and e-commerce business. Our software developers are
proficient in designing the best e-commerce IT solutions, keeping in
mind your target audience and the behavior of your customers, which
will eventually boost your e-commerce business.

1.2.3. Services :-

 Computer Software Developers.


 Internet Website Designers.
 Internet Websites for Information.
 Online Shopping Websites.
 Website Hosting.
 Mobile Applications.
 CRM solution providers
 IT Solution Providers.
 Website Management.

 Skillset:-

 Internet Of Things
 Conversational UI

 Web & Mobile Apps
 Energy Efficiency

CHAPTER 2

LITERATURE SURVEY

LITERATURE SURVEY:
2.1 EXISTING SYSTEM AND PROPOSED SYSTEM:
2.1.1 Existing System:
 PoS systems act as gateways and require some sort of
network connection in order to contact external credit card
processors. This is mandatory to validate transactions.
 To reduce cost and simplify administration and
maintenance, PoS devices may be remotely managed over
these internal networks.
 Mobile payment solutions proposed so far can be classified
as fully on-line, semi off-line, weak off-line or fully off-line.
 A previous work, called FORCE, was, similarly to FRoDO,
built using a PUF-based architecture. However, FORCE provided
only a weak prevention strategy based on data obfuscation and
did not address the most relevant attacks aimed at
threatening customer-sensitive data, thus being vulnerable to
many advanced attack techniques.

DISADVANTAGES OF EXISTING SYSTEM:

 Off-line scenarios are harder to protect: customer data is kept within the PoS
for a much longer time and is thus more exposed to attackers.
 Skimmers: in this attack, the customer input device that belongs to the PoS
system is replaced with a fake one in order to capture the customer’s card data.
 The main issue with a fully off-line approach is the difficulty of checking the
trustworthiness of a transaction without a trusted third party. In fact, keeping
track of past transactions with no available connection to external parties or
shared databases can be quite difficult, as it is hard for a vendor to check
whether some digital coins have already been spent. Although many works have
been published, they all focused on transaction anonymity and coin
unforgeability. However, previous solutions lack a thorough security
analysis. While they focus on theoretical attacks, discussion of real-world
attacks such as skimmers, scrapers and data vulnerabilities is missing.

2.1.2 PROPOSED SYSTEM:

 FRoDO is the first solution that requires neither trusted third
parties, nor bank accounts, nor trusted devices to provide resiliency against
frauds based on data breaches in a fully off-line electronic payment system.
Furthermore, because FRoDO customers do not need a bank account, the
solution is also particularly interesting with regard to privacy.
 In fact, the digital coins used in FRoDO are just a digital version of real cash
and, as such, they are not linked to anyone other than the holder of both the
identity and the coin element.
 Unlike other payment solutions based on tamper-proof hardware,
FRoDO assumes that only the chips built upon PUFs can take advantage
of the tamper-evidence feature. As a consequence, our assumptions are
much less restrictive than those of other approaches.
 This paper introduces and discusses FRoDO, a secure off-line micro-
payment approach using multiple physical unclonable functions (PUFs).
 FRoDO features an identity element to authenticate the customer, and a coin
element where coins are not locally stored, but are computed on the fly
when needed.
 The communication protocol used for the payment transaction does not
directly read customer coins. Instead, the vendor only communicates with
the identity element in order to identify the user. This simplification
alleviates the communication burden with the coin element that affected
previous approaches.
 The main benefit is a simpler, faster, and more secure interaction between
the involved actors/entities. Among other properties, this two-step protocol
allows the bank or the coin element issuer to design digital coins to be read
only by a certain identity element, i.e., by a specific user. Furthermore, the
identity element used to improve the security of the users can also be used to
thwart malicious users (a brief sketch of this two-step interaction is given
after this list).
 To the best of our knowledge, this is the first solution that can provide
secure fully off-line payments while being resilient to all currently known
PoS breaches.
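
The two-step interaction described above can be illustrated with a short sketch.
This is an illustrative example only: the class and method names (IdentityElement,
CoinElement, computeCoin and so on) are our own assumptions for exposition, and the
PUF is mocked with a hash; it is not the actual FRoDO implementation.

// Sketch of the two-step FRoDO payment interaction (names are illustrative).
import java.util.UUID;

class Coin {
    final String value;                        // coin value computed on the fly
    Coin(String value) { this.value = value; }
}

class CoinElement {
    // Coins are never stored; they are derived on demand (the PUF is mocked here).
    Coin computeCoin(String seed) {
        String pufResponse = Integer.toHexString((seed + "device-unique").hashCode());
        return new Coin(pufResponse);
    }
}

class IdentityElement {
    private final String customerId = UUID.randomUUID().toString();

    // Step 1: the vendor talks only to the identity element to identify the customer.
    String identifyCustomer(String vendorChallenge) {
        return customerId + ":" + vendorChallenge;   // a signed response in the real protocol
    }

    // Step 2: the identity element mediates access to the coin element.
    Coin requestCoin(CoinElement coinElement, String coinSeed) {
        return coinElement.computeCoin(coinSeed);
    }
}

public class FrodoPaymentSketch {
    public static void main(String[] args) {
        IdentityElement id = new IdentityElement();
        CoinElement coins = new CoinElement();
        String proof = id.identifyCustomer("vendor-nonce-42");   // step 1: identification
        Coin coin = id.requestCoin(coins, "seed-001");           // step 2: coin computed on the fly
        System.out.println("Customer proof: " + proof);
        System.out.println("Coin value: " + coin.value);
    }
}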

ADVANTAGES OF PROPOSED SYSTEM:

 FRoDO has been designed to be a secure and reliable encapsulation scheme
for digital coins.
 FRoDO is also applicable to multiple-bank scenarios. Indeed, just as for credit and
debit cards, where trusted third parties (TTPs for short) such as card issuers
guarantee the validity of the cards, some common standard convention can
be used in FRoDO to enable banks to produce and sell their own coin
element.
 The identity and the coin element can be considered tamper-proof devices
with a secure storage and execution environment for sensitive data.

2.2 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put
forth with a very general plan for the project and some cost estimates. During
system analysis the feasibility study of the proposed system is carried out.
This is to ensure that the proposed system is not a burden to the company. For
feasibility analysis, some understanding of the major requirements of the system
is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will
have on the organization. The amount of funds that the company can pour into the
research and development of the system is limited. The expenditures must be
justified. The developed system is well within the budget, and this was
achieved because most of the technologies used are freely available. Only the
customized products had to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not place a high
demand on the available technical resources, as this would lead to high demands
being placed on the client. The developed system must have modest requirements,
as only minimal or no changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user.
This includes the process of training the user to use the system efficiently. The user
must not feel threatened by the system, but must instead accept it as a necessity. The
level of acceptance by the users solely depends on the methods that are employed
to educate the user about the system and to make them familiar with it. Their level of
confidence must be raised so that they are also able to offer some constructive
criticism, which is welcomed, as they are the final users of the system.

2.3 HARDWARE AND SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENTS:

• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15" VGA Colour
• Mouse : Logitech
• RAM : 512 MB

SOFTWARE REQUIREMENTS:

• Operating system : Windows XP/7
• Coding Language : JAVA/J2EE
• Database : MySQL

2.4 TOOLS AND TECHNOLOGIES USED:


SOFTWARE ENVIRONMENT

Java Technology
Java technology is both a programming language and a platform.

The Java Programming Language


The Java programming language is a high-level language that can be characterized
by all of the following buzzwords:

 Simple

 Architecture neutral
 Object oriented
 Portable
 Distributed
 High performance
 Interpreted
 Multithreaded
 Robust
 Dynamic
 Secure

With most programming languages, you either compile or interpret a


program so that you can run it on your computer. The Java programming
language is unusual in that a program is both compiled and interpreted. With the
compiler, first you translate a program into an intermediate language called Java
byte codes —the platform-independent codes interpreted by the interpreter on the
Java platform. The interpreter parses and runs each Java byte code instruction on
the computer. Compilation happens just once; interpretation occurs each time the
program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it’s a development
tool or a Web browser that can run applets, is an implementation of the Java VM.
Java byte codes help make “write once, run anywhere” possible. You can compile
your program into byte codes on any platform that has a Java compiler. The byte
codes can then be run on any implementation of the Java VM. That means that as
long as a computer has a Java VM, the same program written in the Java
programming language can run on Windows 2000, a Solaris workstation, or on an
iMac.
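
As a concrete illustration of the compile-then-interpret model described above, the
following minimal example can be compiled once into byte codes and then run by any
Java VM (the file and class name HelloWorld are, of course, just an example).

// HelloWorld.java
//   javac HelloWorld.java   -> produces HelloWorld.class (platform-independent byte codes)
//   java HelloWorld         -> the Java VM interprets and runs those byte codes
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the Java platform");
    }
}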

The Java Platform


A platform is the hardware or software environment in which a
program runs. We’ve already mentioned some of the most popular platforms
like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be
described as a combination of the operating system and hardware. The Java
platform differs from most other platforms in that it’s a software-only
platform that runs on top of other hardware-based platforms.

The Java platform has two components:
 The Java Virtual Machine (Java VM)
 The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for the Java
platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components
that provide many useful capabilities, such as graphical user interface
(GUI) widgets. The Java API is grouped into libraries of related classes and
interfaces; these libraries are known as packages. The next section, What
Can Java Technology Do?, highlights what functionality some of the
packages in the Java API provide.
The following figure depicts a program that’s running on the Java
platform. As the figure shows, the Java API and the virtual machine insulate
the program from the hardware.

Native code is code that, after compilation, runs
on a specific hardware platform. As a platform-independent environment,
the Java platform can be a bit slower than native code. However, smart
compilers, well-tuned interpreters, and just-in-time byte code compilers can
bring performance close to that of native code without threatening
portability.

What Can Java Technology Do?

The most common types of programs written in the Java programming
language are applets and applications. If you’ve surfed the Web, you’re
probably already familiar with applets. An applet is a program that adheres
to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute,
entertaining applets for the Web. The general-purpose, high-level Java
programming language is also a powerful software platform. Using the
generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java
platform. A special kind of application known as a server serves and
supports clients on a network. Examples of servers are Web servers, proxy
servers, mail servers, and print servers. Another specialized program is a
servlet. A servlet can almost be thought of as an applet that runs on the
server side. Java Servlets are a popular choice for building interactive web
applications, replacing the use of CGI scripts. Servlets are similar to applets
in that they are runtime extensions of applications. Instead of working in
browsers, though, servlets run within Java Web servers, configuring or
tailoring the server.
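
As a small illustration of the servlet idea described above, the following is a minimal
sketch. It assumes a servlet container (such as Tomcat) supplies the javax.servlet
libraries; the class name and the generated page are purely illustrative.

// A minimal servlet that answers HTTP GET requests with a small HTML page.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h1>Hello from a servlet</h1></body></html>");
    }
}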
How does the API support all these kinds of programs? It does so with
packages of software components that provide a wide range of
functionality. Every full implementation of the Java platform gives you the
following features:
 The essentials: Objects, strings, threads, numbers, input and output,
data structures, system properties, date and time, and so on.
 Applets: The set of conventions used by applets.

 Networking: URLs, TCP (Transmission Control Protocol), UDP
(User Datagram Protocol) sockets, and IP (Internet Protocol)
addresses.
 Internationalization: Help for writing programs that can be localized
for users worldwide. Programs can automatically adapt to specific
locales and be displayed in the appropriate language.
 Security: Both low level and high level, including electronic
signatures, public and private key management, access control, and
certificates.
 Software components: Known as JavaBeans™, these can plug into existing
component architectures.
 Object serialization: Allows lightweight persistence and
communication via Remote Method Invocation (RMI).
 Java Database Connectivity (JDBC™): Provides uniform access to
a wide range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility,
servers, collaboration, telephony, speech, animation, and more. The
following figure depicts what is included in the Java 2 SDK.

How Will Java Technology Change My Life?

We can’t promise you fame, fortune, or even a job if you learn the Java
programming language. Still, it is likely to make your programs better and require
less effort than other languages. We believe that Java technology will help you do
the following:

 Get started quickly: Although the Java programming language is a


powerful object-oriented language, it’s easy to learn, especially for
programmers already familiar with C or C++.
 Write less code: Comparisons of program metrics (class counts,
method counts, and so on) suggest that a program written in the Java
programming language can be four times smaller than the same
program in C++.
 Write better code: The Java programming language encourages good
coding practices, and its garbage collection helps you avoid memory
leaks. Its object orientation, its JavaBeans component architecture,
and its wide-ranging, easily extendible API let you reuse other
people’s tested code and introduce fewer bugs.
 Develop programs more quickly: Your development time may be cut by as
much as half compared with writing the same program in C++. Why?
You write fewer lines of code, and Java is a simpler programming
language than C++.
 Avoid platform dependencies with 100% Pure Java: You can keep
your program portable by avoiding the use of libraries written in other
languages. The 100% Pure Java™ Product Certification Program has a
repository of historical process manuals, white papers, brochures, and
similar materials online.
 Write once, run anywhere: Because 100% Pure Java programs are
compiled into machine-independent byte codes, they run consistently
on any Java platform.
 Distribute software more easily: You can upgrade applets easily
from a central server. Applets take advantage of the feature of
allowing new classes to be loaded “on the fly,” without recompiling
the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming
interface for application developers and database systems providers. Before ODBC
became a de facto standard for Windows programs to interface with database
systems, programmers had to use proprietary languages for each database they
wanted to connect to. Now, ODBC has made the choice of the database system
almost irrelevant from a coding perspective, which is as it should be. Application
developers have much more important things to worry about than the syntax that is
needed to port their program from one database to another when business needs
suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the
particular database that is associated with a data source that an ODBC application
program is written to use. Think of an ODBC data source as a door with a name on
it. Each door will lead you to a particular database. For example, the data source
named Sales Figures might be a SQL Server database, whereas the Accounts
Payable data source could refer to an Access database. The physical database
referred to by a data source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95.
Rather, they are installed when you set up a separate database application, such as
SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in
Control Panel, it uses a file called ODBCINST.DLL. It is also possible to
administer your ODBC data sources through a stand-alone program called
ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program and each
maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the


application can be written to use the same set of function calls to interface with any
data source, regardless of the database vendor. The source code of the application
doesn’t change whether it talks to Oracle or SQL Server. We only mention these
two as an example. There are ODBC drivers available for several dozen popular
database systems. Even Excel spreadsheets and plain text files can be turned into
data sources. The operating system uses the Registry information written by
ODBC Administrator to determine which low-level ODBC drivers are needed to
talk to the data source (such as the interface to Oracle or SQL Server). The loading
of the ODBC drivers is transparent to the ODBC application program. In a

client/server environment, the ODBC API even handles many of the network
issues for the application programmer.
The advantages of this scheme are so numerous that you are probably
thinking there must be some catch. The only disadvantage of ODBC is that it isn’t
as efficient as talking directly to the native database interface. ODBC has had
many detractors make the charge that it is too slow. Microsoft has always claimed
that the critical factor in performance is the quality of the driver software that is
used. In our humble opinion, this is true. The availability of good ODBC drivers
has improved a great deal recently. And anyway, the criticism about performance
is somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives
you the opportunity to write cleaner programs, which means you finish sooner.
Meanwhile, computers get faster every year.

JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a
generic SQL database access mechanism that provides a consistent interface to a
variety of RDBMSs. This consistent interface is achieved through the use of “plug-
in” database connectivity modules, or drivers. If a database vendor wishes to have
JDBC support, he or she must provide the driver for each platform that the
database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on
ODBC. As you discovered earlier in this chapter, ODBC has widespread support
on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring
JDBC drivers to market much faster than developing a completely new
connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public
review that ended June 8, 1996. Because of user input, the final JDBC v1.0
specification was released soon after.
The remainder of this section will cover enough information about JDBC for you
to know what it is about and how to use it effectively. This is by no means a
complete overview of JDBC. That would fill an entire book.
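
As a minimal, self-contained illustration of how JDBC is used, the sketch below opens a
connection and runs a query. The JDBC URL, table name and credentials are placeholders
chosen for this example, not values taken from the project.

// Minimal JDBC usage sketch (requires a JDBC driver for the target database).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/frododb";   // hypothetical database URL
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM users")) {
            while (rs.next()) {                               // iterate over the result set
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}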

JDBC Goals
Few software packages are designed without goals in mind. JDBC is no
exception; its many goals drove the development of the API. These goals, in
conjunction with early reviewer feedback, have finalized the JDBC class library
into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some
insight as to why certain classes and functionalities behave the way they do. The
eight design goals for JDBC are as follows:

1. SQL Level API


The designers felt that their main goal was to define a SQL interface for Java.
Although not the lowest database interface level possible, it is at a low enough
level for higher-level tools and APIs to be created. Conversely, it is at a high
enough level for application programmers to use it confidently. Attaining this
goal allows for future tool vendors to “generate” JDBC code and to hide many
of JDBC’s complexities from the end user.

2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In
an effort to support a wide variety of vendors, JDBC will allow any query

statement to be passed through it to the underlying database driver. This allows
the connectivity module to handle non-standard functionality in a manner that is
suitable for its users.

3. JDBC must be implementable on top of common database interfaces


The JDBC SQL API must “sit” on top of other common SQL level APIs.
This goal allows JDBC to use existing ODBC level drivers by the use of a
software interface. This interface would translate JDBC calls to ODBC and
vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java’s acceptance in the user community thus far, the designers
feel that they should not stray from the current design of the core Java system.

5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for
only one method of completing a task per mechanism. Allowing duplicate
functionality only serves to confuse the users of the API.

6. Use strong, static typing wherever possible


Strong typing allows for more error checking to be done at compile time; also,
fewer errors appear at runtime.

7. Keep the common cases simple


Because, more often than not, the usual SQL calls used by the programmer are
simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries
should be simple to perform with JDBC. However, more complex SQL
statements should also be possible.
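
The "common cases" named in this goal map naturally onto JDBC's PreparedStatement.
The sketch below is illustrative only; the table and column names are assumptions made
for the example.

// A parameterized INSERT: the usual simple case kept simple.
import java.sql.Connection;
import java.sql.PreparedStatement;

public class CommonCaseSketch {
    static void recordDeposit(Connection con, String userId, int amount) throws Exception {
        String sql = "INSERT INTO deposits (user_id, amount) VALUES (?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, userId);   // first placeholder: user identifier
            ps.setInt(2, amount);      // second placeholder: deposit amount
            ps.executeUpdate();        // runs the INSERT
        }
    }
}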

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

Simple            Architecture-neutral
Object-oriented   Portable
Distributed       High-performance
Interpreted       Multithreaded
Robust            Dynamic
Secure

Java is also unusual in that each Java program is both compiled and
interpreted. With a compiler, you translate a Java program into an
intermediate language called Java byte codes, the platform-independent
code that the interpreter then parses and runs on the computer.

Compilation happens just once; interpretation occurs each time the
program is executed. The figure illustrates how this works.

[Figure: a Java program is translated by the compiler into byte codes, which the interpreter then runs.]

You can think of Java byte codes as the machine code instructions
for the Java Virtual Machine (Java VM). Every Java interpreter,
whether it’s a Java development tool or a Web browser that can run
Java applets, is an implementation of the Java VM. The Java VM can
also be implemented in hardware.
Java byte codes help make “write once, run anywhere” possible.
You can compile your Java program into byte codes on any platform
that has a Java compiler. The byte codes can then be run on any
implementation of the Java VM. For example, the same Java program
can run on Windows NT, Solaris, and Macintosh.

Networking

TCP/IP stack

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol)
is a connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system.
It considers each datagram independently of the others. Any association
between datagrams must be supplied by the higher layers. The IP layer
supplies a checksum that includes its own header. The header includes the
source and destination addresses. The IP layer handles routing through an
Internet. It is also responsible for breaking up large datagrams into smaller
ones for transmission and reassembling them at the other end.

UDP

UDP is also connectionless and unreliable. What it adds to IP is a


checksum for the contents of the datagram and port numbers. These are
used to give a client/server model - see later.
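
Since the project uses Java Networking, a minimal sketch of sending a connectionless
UDP datagram from Java is shown below; the destination port (9876) and the payload are
arbitrary values chosen for the example.

// Sending one UDP datagram: no connection is set up, the packet carries address and port.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpSketch {
    public static void main(String[] args) throws Exception {
        byte[] data = "ping".getBytes();
        InetAddress host = InetAddress.getByName("localhost");
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length, host, 9876));
        }
    }
}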

TCP

TCP supplies logic to give a reliable connection-oriented protocol


above IP. It provides a virtual circuit that two processes can use to
communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses
an address scheme for machines so that they can be located. The address is
a 32 bit integer which gives the IP address. This encodes a network ID and
more addressing. The network ID falls into various classes according to the
size of the network address.

Network address

Class A uses 8 bits for the network address with 24 bits left over for
other addressing. Class B uses 16 bit network addressing. Class C uses 24
bit network addressing and class D uses all 32.

Subnet address

Internally, the UNIX network is divided into sub networks. Building 11


is currently on one sub network and uses 10-bit addressing, allowing 1024
different hosts.

Host address

8 bits are finally used for host addresses within our subnet. This places
a limit of 256 machines that can be on the subnet.

Total address

The 32 bit address is usually written as 4 integers separated by dots.

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit


number. To send a message to a server, you send it to the port for that
service of the host that it is running on. This is not location transparency!
Certain of these ports are "well known".

Sockets

A socket is a data structure maintained by the system to handle network
connections. A socket is created using the socket call. It returns an integer
that is like a file descriptor. In fact, under Windows, this handle can be
used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

/* Creates an unbound socket and returns an integer handle (used like a file descriptor). */
int socket(int family, int type, int protocol);

Here "family" will be AF_INET for IP communications, protocol will


be zero, and type will depend on whether TCP or UDP is used. Two
processes wishing to communicate over a network create a socket each.
These are similar to two ends of a pipe - but the actual pipe does not yet
exist.
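
The Java counterpart of the socket() call above is the java.net.Socket class. The sketch
below is a minimal TCP client; the host name, port number and messages are placeholders,
and a matching server would use java.net.ServerSocket.

// A minimal TCP client: connect, send a line, read the reply.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class TcpClientSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 7777)) {        // connect to the server
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("hello");                                    // send a request line
            System.out.println("Server replied: " + in.readLine());  // read one reply line
        }
    }
}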

JFree Chart

JFreeChart is a free 100% Java chart library that makes it easy for
developers to display professional quality charts in their applications.
JFreeChart's extensive feature set includes:

A consistent and well-documented API, supporting a wide range of
chart types;

A flexible design that is easy to extend, and targets both server-side
and client-side applications;

Support for many output types, including Swing components, image
files (including PNG and JPEG), and vector graphics file formats (including
PDF, EPS and SVG);

JFreeChart is "open source" or, more specifically, free software. It is


distributed under the terms of the GNU Lesser General Public Licence
(LGPL), which permits use in proprietary applications.
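
As a small example of the library described above (assuming the JFreeChart 1.0.x class
names), the sketch below builds a pie chart from made-up values and saves it as a PNG
image.

// Create a simple pie chart and write it to a PNG file.
import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class ChartSketch {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Deposits", 60);        // example slice
        dataset.setValue("Withdrawals", 40);     // example slice
        JFreeChart chart = ChartFactory.createPieChart(
                "Transactions", dataset, true, true, false);   // title, data, legend, tooltips, urls
        ChartUtilities.saveChartAsPNG(new File("transactions.png"), chart, 500, 300);
    }
}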

1. Map Visualizations

Charts showing values that relate to geographical areas. Some


examples include: (a) population density in each state of the United States,
(b) income per capita for each country in Europe, (c) life expectancy in each
country of the world. The tasks in this project include:

Sourcing freely redistributable vector outlines for the countries of the


world, states/provinces in particular countries (USA in particular, but also
other areas);

Creating an appropriate dataset interface (plus default
implementation) and a renderer, and integrating these with the existing XYPlot
class in JFreeChart;

Testing, documenting, testing some more, documenting some more.

2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts --- to
display a separate control that shows a small version of ALL the time series
data, with a sliding "view" rectangle that allows you to select the subset of the
time series data to display in the main chart.

3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible


dashboard mechanism that supports a subset of JFreeChart chart types (dials,
pies, thermometers, bars, and lines/time series) that can be delivered easily via
both Java Web Start and an applet.

4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of


the properties that can be set for charts. Extend (or reimplement) this
mechanism to provide greater end-user control over the appearance of the
charts.

J2ME (Java 2 Micro edition):-

Sun Microsystems defines J2ME as "a highly optimized Java run-time


environment targeting a wide range of consumer products, including pagers,
cellular phones, screen-phones, digital set-top boxes and car navigation systems."
Announced in June 1999 at the JavaOne Developer Conference, J2ME brings the
cross-platform functionality of the Java language to smaller devices, allowing
mobile wireless devices to share applications. With J2ME, Sun has adapted the
Java platform for consumer products that incorporate or are based on small
computing devices.

1. General J2ME architecture

J2ME uses configurations and profiles to customize the Java Runtime Environment
(JRE). As a complete JRE, J2ME is comprised of a configuration, which
determines the JVM used, and a profile, which defines the application by adding
domain-specific classes. The configuration defines the basic run-time environment
as a set of core classes and a specific JVM that run on specific types of devices.
We'll discuss configurations in detail later. The profile defines the application;
specifically, it adds domain-specific classes to the J2ME configuration to define
certain uses for devices. We'll cover profiles in depth later as well. The following graphic
depicts the relationship between the different virtual machines, configurations, and
profiles. It also draws a parallel with the J2SE API and its Java virtual machine.
While the J2SE virtual machine is generally referred to as a JVM, the J2ME virtual
machines, KVM and CVM, are subsets of the JVM. Both KVM and CVM can be
thought of as a kind of Java virtual machine -- it's just that they are shrunken
versions of the J2SE JVM and are specific to J2ME.

2. Developing J2ME applications

In this section, we will go over some considerations you need to keep
in mind when developing applications for smaller devices. We'll take a look at the
way the compiler is invoked when using J2SE to compile J2ME applications.
Finally, we'll explore packaging and deployment and the role preverification plays
in this process.

3. Design considerations for small devices

Developing applications for small devices requires you to keep certain strategies in
mind during the design phase. It is best to strategically design an application for a
small device before you begin coding. Correcting the code because you failed to
consider all of the "gotchas" before developing the application can be a painful
process. Here are some design strategies to consider:

* Keep it simple. Remove unnecessary features, possibly making those features a


separate, secondary application.

* Smaller is better. This consideration should be a "no brainer" for all developers.
Smaller applications use less memory on the device and require shorter installation
times. Consider packaging your Java applications as compressed Java Archive (jar)
files.

* Minimize run-time memory use. To minimize the amount of memory used at run
time, use scalar types in place of object types. Also, do not depend on the garbage
collector. You should manage the memory efficiently yourself by setting object
references to null when you are finished with them. Another way to reduce run-
time memory is to use lazy instantiation, only allocating objects on an as-needed
basis. Other ways of reducing overall and peak memory use on small devices are to
release resources quickly, reuse objects, and avoid exceptions.

4. Configurations overview

The configuration defines the basic run-time environment as a set of core classes
and a specific JVM that run on specific types of devices. Currently, two
configurations exist for J2ME, though others may be defined in the future:

* Connected Limited Device Configuration (CLDC) is used specifically with


the KVM for 16-bit or 32-bit devices with limited amounts of memory. This is the
configuration (and the virtual machine) used for developing small J2ME
applications. Its size limitations make CLDC more interesting and challenging
(from a development point of view) than CDC. CLDC is also the configuration that
we will use for developing our drawing tool application. An example of a small
wireless device running small applications is a Palm hand-held computer.

* Connected Device Configuration (CDC) is used with the C virtual machine


(CVM) and is used for 32-bit architectures requiring more than 2 MB of memory.
An example of such a device is a Net TV box.

5. J2ME profiles

What is a J2ME profile?

As we mentioned earlier in this tutorial, a profile defines the type of device
supported. The Mobile Information Device Profile (MIDP), for example, defines
classes for cellular phones. It adds domain-specific classes to the J2ME
configuration to define uses for similar devices. Two profiles have been defined for
J2ME and are built upon CLDC: KJava and MIDP. Both KJava and MIDP are
associated with CLDC and smaller devices. Profiles are built on top of
configurations. Because profiles are specific to the size of the device (amount of
memory) on which an application runs, certain profiles are associated with certain
configurations.

A skeleton profile upon which you can create your own profile, the Foundation
Profile, is available for CDC.

Profile 1: KJava

KJava is Sun's proprietary profile and contains the KJava API. The KJava profile is
built on top of the CLDC configuration. The KJava virtual machine, KVM, accepts
the same byte codes and class file format as the classic J2SE virtual machine.
KJava contains a Sun-specific API that runs on the Palm OS. The KJava API has a
great deal in common with the J2SE Abstract Windowing Toolkit (AWT).
However, because it is not a standard J2ME package, its main package is
com.sun.kjava. We'll learn more about the KJava API later in this tutorial when we
develop some sample applications.

Profile 2: MIDP

MIDP is geared toward mobile devices such as cellular phones and pagers. The
MIDP, like KJava, is built upon CLDC and provides a standard run-time
environment that allows new applications and services to be deployed dynamically
on end-user devices. MIDP is a common, industry-standard profile for mobile
devices that is not dependent on a specific vendor. It is a complete and supported
foundation for mobile application development. MIDP contains the following
packages, the first three of which are core CLDC packages, plus three MIDP-specific
packages (a minimal MIDlet sketch is given after the package list below).

* java.lang

* java.io

* java.util

* javax.microedition.io

* javax.microedition.lcdui

* javax.microedition.midlet

* javax.microedition.rms
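
The minimal MIDlet sketch below illustrates the MIDP life-cycle methods and two of the
packages listed above (javax.microedition.midlet and javax.microedition.lcdui); the class
name and the displayed text are illustrative.

// A MIDlet that shows a simple form when the application manager starts it.
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

public class HelloMidlet extends MIDlet {
    public void startApp() {                            // called when the MIDlet starts
        Form form = new Form("Demo");
        form.append("Hello from MIDP");
        Display.getDisplay(this).setCurrent(form);
    }
    public void pauseApp() { }                          // nothing to release on pause
    public void destroyApp(boolean unconditional) { }   // nothing to clean up on exit
}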

CHAPTER 3

SYSTEM DESIGN

SYSTEM ARCHITECTURE:

DATA FLOW DIAGRAM:

1. The DFD is also called a bubble chart. It is a simple graphical formalism
that can be used to represent a system in terms of the input data to the system,
the various processing carried out on this data, and the output data generated
by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It
is used to model the system components. These components are the system
process, the data used by the process, the external entities that interact with
the system and the information flows in the system.
3. The DFD shows how information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction. A DFD
may be partitioned into levels that represent increasing information flow and
functional detail.

[Data flow diagram showing: User, Vendor, FRoDO, PUF, Coin Element, Identity Element, User Card Reader, Deposit Verify, Withdrawal Verify, Users Key, Transaction, and Binary Code Convertor.]

UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized
general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created by, the Object
Management Group.
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form UML is comprised of two
major components: a meta-model and a notation. In the future, some form of
method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing and documenting the artifacts of a software system, as
well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have
proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software
and the software development process. The UML uses mostly graphical notations
to express the design of software projects.

GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that
they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.

USE CASE DIAGRAM:

 A use case diagram in the Unified Modeling Language (UML) is a type
of behavioral diagram defined by and created from a use-case analysis.
Its purpose is to present a graphical overview of the functionality
provided by a system in terms of actors, their goals (represented as use
cases), and any dependencies between those use cases. The main purpose
of a use case diagram is to show what system functions are performed for
which actor. The roles of the actors in the system can also be depicted.
[Use case diagram with actors User and Vendor and use cases involving FRoDO, PUF, Binary Code Convertor, User Card Reader, Coin Element, Verification, Transaction, and Logout.]

CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that describes the structure of a system
by showing the system's classes, their attributes, operations (or methods), and the
relationships among the classes. It explains which class contains information.

[Class diagram showing the classes User, Vendor, FRoDO and PUF with their attributes (user/vendor details, deposit and withdrawal details, identity element, coin element, binary conversion, deposit verify, withdrawal verify) and operations (Deposit(), Withdrawal(), Deposit details(), Withdrawal details(), Transaction()).]

SEQUENCE DIAGRAM:

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction


diagram that shows how processes operate with one another and in what order. It is

a construct of a Message Sequence Chart. Sequence diagrams are sometimes called
event diagrams, event scenarios, and timing diagrams.

[Sequence diagram with lifelines User, Vendor, FRoDO and PUF, exchanging messages through the Identity Element, Coin Element, User Card Reader, Deposit Verify, Withdrawal Verify and Transaction steps.]

ACTIVITY DIAGRAM:

Activity diagrams are graphical representations of workflows of stepwise activities


and actions with support for choice, iteration and concurrency. In the Unified
Modeling Language, activity diagrams can be used to describe the business and
operational step-by-step workflows of components in a system. An activity
diagram shows the overall flow of control.

[Activity diagram covering the User, Vendor, FRoDO and PUF activities: Identity Element, User Card Reader, Deposit, Coin Element, Deposit Verify(), Withdrawal, Binary Code Conversion, Withdrawal Verify() and Transaction.]

INPUT DESIGN AND OUTPUT DESIGN

INPUT DESIGN

The input design is the link between the information system and the user. It
comprises developing the specifications and procedures for data preparation, i.e.,
the steps necessary to put transaction data into a usable form for processing. This
can be achieved by having the computer read data from a written or printed
document, or by having people key the data directly into the system.
The design of input focuses on controlling the amount of input required,
controlling errors, avoiding delay, avoiding extra steps and keeping the process
simple. The input is designed in such a way that it provides security and ease of
use while retaining privacy. Input design considers the following things:

 What data should be given as input?


 How the data should be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow when errors
occur.

OBJECTIVES

1. Input Design is the process of converting a user-oriented description of the input


into a computer-based system. This design is important to avoid errors in the data
input process and show the correct direction to the management for getting correct
information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large
volumes of data. The goal of designing input is to make data entry easier and
free from errors. The data entry screen is designed in such a way that all data
manipulations can be performed. It also provides record viewing facilities.

3. When the data is entered, it is checked for validity. Data can be entered with
the help of screens. Appropriate messages are provided as and when needed so that the
user is never left in a maze. Thus the objective of input design is to create
an input layout that is easy to follow.

OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents
the information clearly. In any system, the results of processing are communicated to
the users and to other systems through outputs. In output design it is determined
how the information is to be displayed for immediate need and also as hard copy
output. It is the most important and direct source of information for the user. Efficient
and intelligent output design improves the system’s relationship with the user and
helps in decision-making.

1. Designing computer output should proceed in an organized, well thought out
manner; the right output must be developed while ensuring that each output
element is designed so that people will find the system easy to use and
effective. When analysts design computer output, they should identify the
specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by
the system.

The output form of an information system should accomplish one or more of the
following objectives.

 Convey information about past activities, current status or projections of the
future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.

CHAPTER 4

IMPLEMENTATION

MODULES:

 System Construction Module


 Identity Element
 Coin Element
 Attack Mitigation

MODULES DESCRIPTION:

System Construction Module

 In the first module, we develop the System Construction module with the
various entities: Vendor, User, FRoDO, PUF, and Attacker. This process is
built entirely around an off-line transaction process.
 We develop the system with the user entity initially. Options are available
for a new user to register first and then log in for the authentication process.
Then we develop the option of Vendor Registration, such that
a new vendor must register first and then log in to the system for
the authentication process.
 FRoDO is the first solution that requires neither trusted third parties, nor
bank accounts, nor trusted devices to provide resiliency against frauds based
on data breaches in a fully off-line electronic payment system. Furthermore,
because FRoDO customers do not need a bank account, the solution
is also particularly interesting with regard to privacy. In fact, the digital
coins used in FRoDO are just a digital version of real cash and, as such, they
are not linked to anyone other than the holder of both the identity and the
coin element.
 Unlike other payment solutions based on tamper-proof hardware,
FRoDO assumes that only the chips built upon PUFs can take advantage
of the tamper-evidence feature. As a consequence, our assumptions are
much less restrictive than those of other approaches.

Identity Element

 In this module, we develop the Identity Element functionalities.
FRoDO does not require any special hardware component apart from the
identity and the coin element, which can be either plugged into the customer
device or directly embedded into the device.
 Similarly to secure elements, both the identity and the coin element can be
considered tamper-proof devices with a secure storage and execution
environment for sensitive data. Thus, as defined in the ISO 7816-4 standard,
both of them can be accessed via some APIs while maintaining the desired
security and privacy level. Such software components (i.e., APIs) are not
central to the security of our solution and can be easily and constantly
updated. This renders infrastructure maintenance easier (an illustrative API
sketch follows this list).
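
The kind of API access described above could look like the following sketch. The method
names are assumptions made for illustration and are not the actual FRoDO interfaces.

// Illustrative Java view of the APIs an identity element might expose.
public interface IdentityElementApi {
    // Returns a signed proof of the customer's identity for the given vendor challenge.
    byte[] identify(byte[] vendorChallenge);

    // Decrypts a vendor request addressed to this identity element.
    byte[] unwrapRequest(byte[] encryptedRequest);
}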

Coin Element

 In this module, we develop the Coin Element. Within the Coin Element we develop
the Key Generator and the Cryptographic Element. The Key Generator is used to
compute, on the fly, the private key of the coin element. The Cryptographic
Element applies symmetric and asymmetric cryptographic algorithms
to the data received as input and sent as output by the coin element;
 The Coin Selector is responsible for the selection of the right registers, used
together with the output value computed by the coin element PUF, in order to
obtain the final coin value;
 The Coin Registers are used to store both the PUF input and output values required
to reconstruct the original coin values. Coin registers contain coin seed and coin
helper data. Coin seeds are used as input to the PUF, whilst coin helpers are
used to reconstruct stable coin values when the PUF is challenged (a sketch of
this reconstruction follows this list).
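
The reconstruction of a coin from its seed and helper data can be sketched as follows.
This is illustration only: the PUF is mocked with a hash and the helper is combined with
a simple XOR, whereas real PUFs and fuzzy extractors are hardware and cryptographic
constructions.

// Mock reconstruction of a stable coin value from a coin register (seed + helper).
import java.security.MessageDigest;

public class CoinElementSketch {
    // Mock PUF: deterministic here, device-unique and noisy in real hardware.
    static byte[] pufResponse(byte[] seed) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(seed);
    }

    // Combine the PUF response with the stored helper data to obtain the coin value.
    static byte[] reconstructCoin(byte[] coinSeed, byte[] coinHelper) throws Exception {
        byte[] response = pufResponse(coinSeed);
        byte[] coin = new byte[coinHelper.length];
        for (int i = 0; i < coin.length; i++) {
            coin[i] = (byte) (response[i % response.length] ^ coinHelper[i]);
        }
        return coin;
    }
}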

Attack Mitigation

• In this module we develop the Attack Mitigation process. The read-once property of the erasable PUF used in this solution prevents an attacker from computing the same coin twice. Even if a malicious customer creates a fake vendor device and reads all the coins, it will not be able to spend any of them, because it cannot decrypt the requests of other vendors.
• The private keys of both the identity and coin elements are needed to decrypt the vendor's request and can be computed only within the customer device. A fake vendor could then try to forge a new, emulated identity/coin element with its own private/public key pair. However, identity/coin element public keys are valid only if they are signed by the bank. As such, any message received from an unconfirmed identity/coin element is immediately rejected; a sketch of this check is given after this list.
• Each coin is encrypted by either the bank or the coin element issuer, and thus it is not possible for an attacker to forge new coins.
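
As an illustration of the bank-signature check mentioned in the list above, the sketch below uses ECDSA through the pyca/cryptography package. FRoDO's concrete key types, encodings, and message formats are not specified here, so the function names and parameters are assumptions.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def certify_element_key(bank_private_key, element_public_key) -> bytes:
    """Bank side: sign the DER encoding of an identity/coin element public key."""
    encoded = element_public_key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return bank_private_key.sign(encoded, ec.ECDSA(hashes.SHA256()))

def is_certified(bank_public_key, element_public_key, signature: bytes) -> bool:
    """Vendor side: accept an element key only if the bank's signature verifies."""
    encoded = element_public_key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    try:
        bank_public_key.verify(signature, encoded, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# Example usage
bank_priv = ec.generate_private_key(ec.SECP256R1())        # held by the bank
element_priv = ec.generate_private_key(ec.SECP256R1())     # inside the identity/coin element
cert_sig = certify_element_key(bank_priv, element_priv.public_key())
assert is_certified(bank_priv.public_key(), element_priv.public_key(), cert_sig)

With this check in place, a forged element key that is not accompanied by a valid bank signature is rejected before any coin exchange takes place.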
CHAPTER 6
SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test; each test type addresses a specific testing requirement.
TYPES OF TESTS

Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test

Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
System Test

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
6.1 Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.

Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.
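
As an illustration of these objectives, the following unittest sketch exercises the duplicate-entry and login checks against the register_vendor and login_vendor helpers sketched in the Vendor Registration module; the module name vendor_portal and the test data are assumptions.

import unittest

# Hypothetical module holding the helpers sketched earlier.
from vendor_portal import register_vendor, login_vendor

class VendorRegistrationTests(unittest.TestCase):
    def test_duplicate_entries_are_rejected(self):
        self.assertTrue(register_vendor("shop-01", "S3cret!pass"))
        self.assertFalse(register_vendor("shop-01", "another-pass"))

    def test_login_requires_the_correct_password(self):
        register_vendor("shop-02", "S3cret!pass")
        self.assertTrue(login_vendor("shop-02", "S3cret!pass"))
        self.assertFalse(login_vendor("shop-02", "wrong-pass"))

if __name__ == "__main__":
    unittest.main()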
6.2 Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or – one step up – software applications at the company level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
6.3 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
CHAPTER 7

CONCLUSION

In this paper we have introduced FRoDO, which is, to the best of our knowledge, the first data-breach-resilient fully off-line micro-payment approach. The security analysis shows that FRoDO does not impose trustworthiness assumptions on third parties or customer devices. Further, FRoDO is also the first solution in the literature in which no attack on customer device data can be exploited to compromise the system. This has been achieved mainly by leveraging a novel erasable PUF architecture and a novel protocol design. Furthermore, our proposal has been thoroughly discussed and compared against the state of the art. Our analysis shows that FRoDO is the only proposal that enjoys all the properties required of a secure micro-payment solution, while also introducing flexibility in the payment medium (types of digital coins). Finally, some open issues have been identified and are left as future work. In particular, we are investigating the possibility of allowing digital change to be spent over multiple off-line transactions while maintaining the same level of security and usability.
CHAPTER 8

FUTURE ENHANCEMENTS

The project has enormous scope for the future. As user requirements keep changing while the system is in use, the system will need to be upgraded and updated.

Some of the future enhancements that can be made to this system are:

• The system can be upgraded based on upcoming or the latest technologies.
• More security can be added by using one-time passwords (OTP) and CAPTCHA.
• Storage capacity can also be upgraded or extended to store large files such as audio and video.
APPENDIX

BIBLIOGRAPHY
[1] J. Lewandowska. (2013). [Online]. Available: http://www.frost.com/prod/servlet/press-release.pag?docid=274238535

[2] R. L. Rivest, "Payword and micromint: Two simple micropayment schemes," in Proc. Int. Workshop Security Protocols, 1996, pp. 69–87.

[3] S. Martins and Y. Yang, "Introduction to bitcoins: A pseudo-anonymous electronic currency system," in Proc. Conf. Center Adv. Stud. Collaborative Res., 2011, pp. 349–350.

[4] Verizon, "2014 data breach investigations report," Verizon, Tech. Rep., 2014, http://www.verizonenterprise.com/DBIR/2014/

[5] Trend Micro, "Point-of-sale system breaches, threats to the retail and hospitality industries," University of Zurich, Department of Informatics, 2010.

[6] Mandiant, "Beyond the breach," Mandiant, 2014, https://dl.mandiant.com/EE/library/WP_M-Trends2014_140409.pdf

[7] Bomgar, "Secure POS & kiosk support," Bomgar, 2014, http://www.bomgar.com/assets/documents/Bomgar_Remote_Support_for_POS_Systems.pdf

[8] V. Daza, R. Di Pietro, F. Lombardi, and M. Signorini, "FORCE - Fully off-line secure credits for mobile micro payments," in Proc. 11th Int. Conf. Security Cryptography, 2014, pp. 125–136.

[9] W. Chen, G. Hancke, K. Mayes, Y. Lien, and J.-H. Chiu, "Using 3G network components to enable NFC mobile transactions and authentication," in Proc. IEEE Int. Conf. Progress Informat. Comput., Dec. 2010, vol. 1, pp. 441–448.

[10] S. Golovashych, "The technology of identification and authentication of financial transactions. From smart cards to NFC-terminals," in Proc. IEEE Intell. Data Acquisition Adv. Comput. Syst., Sep. 2005, pp. 407–412.

[11] M. I. González Vasco, S. Heidarvand, and J. Villar, "Anonymous subscription schemes: A flexible construction for on-line services access," in Proc. Int. Conf. Security Cryptography, Jul. 2010, pp. 1–12.

[12] K. S. Kadambi, J. Li, and A. H. Karp, "Near-field communication-based secure mobile payment service," in Proc. 11th Int. Conf. Electron. Commerce, 2009, pp. 142–151.

[13] V. C. Sekhar and S. Mrudula, "A complete secure customer centric anonymous payment in a digital ecosystem," in Proc. Int. Conf. Comput., Electron. Elect. Technol., 2012, pp. 1049–1054.

[14] S. Dominikus and M. Aigner, "mCoupons: An application for near field communication (NFC)," in Proc. 21st Int. Conf. Adv. Inf. Netw. Appl. Workshops, 2007, pp. 421–428.

[15] T. Nishide and K. Sakurai, "Security of offline anonymous electronic cash systems against insider attacks by untrusted authorities revisited," in Proc. 3rd Int. Conf. Intell. Netw. Collaborative Syst., 2011, pp. 656–661.

[16] W.-S. Juang, "An efficient and practical fair buyer-anonymity exchange scheme using bilinear pairings," in Proc. 8th Asia Joint Conf. Inf. Security, Jul. 2013, pp. 19–26.

[17] M. A. Salama, N. El-Bendary, and A. E. Hassanien, "Towards secure mobile agent based e-cash system," in Proc. Int. Workshop Security Privacy Preserving e-Soc., 2011, pp. 1–6.

[18] C. Wang, H. Sun, H. Zhang, and Z. Jin, "An improved off-line electronic cash scheme," in Proc. 5th Int. Conf. Comput. Inf. Sci., Jun. 2013, pp. 438–441.

[19] J. Guajardo, S. S. Kumar, G.-J. Schrijen, and P. Tuyls, "FPGA intrinsic PUFs and their use for IP protection," in Proc. 9th Int. Workshop Cryptographic Hardware Embedded Syst., 2007, pp. 63–80.

[20] R. Pappu, B. Recht, J. Taylor, and N. Gershenfeld, "Physical one-way functions," Science, vol. 297, no. 5589, pp. 2026–2030, 2002.

[21] S. Gomzin, Hacking Point of Sale: Payment Application Secrets, Threats, and Solutions, 1st ed. New York, NY, USA: Wiley, 2014.

[22] Trustwave, "2013 global security report," Trustwave, 2013, http://www2.trustwave.com/rs/trustwave/images/2013-Global-Security-Report.pdf

[23] R. Battistoni, A. D. Biagio, R. Di Pietro, M. Formica, and L. V. Mancini, "A live digital forensic system for Windows networks," in Proc. 20th IFIP TC Int. Inf. Security Conf., 2008, vol. 278, pp. 653–667.

[24] G. Hong and J. Bo, "Forensic analysis of skimming devices for credit fraud detection," in Proc. 2nd IEEE Int. Conf. Inf. Financial Eng., Sep. 2010, pp. 542–546.

[25] C. R. Group, "Alina & other POS malware," Cymru, 2013, https://www.team-cymru.com/ReadingRoom/Whitepapers/

[26] W. Whitteker, "Point of sale (POS) systems and security," SANS Inst., Fredericksburg, VA, USA, 2014, http://www.sans.org/reading-room/whitepapers/bestprac/point-sale-pos-systemssecurity-35357

[27] U. Rührmair, F. Sehnke, J. Sölter, G. Dror, S. Devadas, and J. Schmidhuber, "Modeling attacks on physical unclonable functions," in Proc. 17th ACM Conf. Comput. Commun. Security, 2010, pp. 237–249.

[28] U. Rührmair, H. Busch, and S. Katzenbeisser, "Strong PUFs: Models, constructions, and security proofs," in Towards Hardware-Intrinsic Security, ser. Information Security and Cryptography, A.-R. Sadeghi and D. Naccache, Eds. New York, NY, USA: Springer, 2010, pp. 79–96.

[29] P. S. Ravikanth, "Physical one-way functions," Ph.D. dissertation, Massachusetts Inst. Technol., Cambridge, MA, USA, 2001. [Online]. Available: http://cba.mit.edu/docs/theses/01.03.pappuphd.powf.pdf

[30] U. Rührmair, C. Jaeger, and M. Algasinger, "An attack on PUF-based session key exchange and a hardware-based countermeasure: Erasable PUFs," in Proc. 15th Int. Conf. Financial Cryptography Data Security, 2012, vol. 7035, pp. 190–204.

[31] B. Škorić, P. Tuyls, and W. Ophey, "Robust key extraction from physical uncloneable functions," in Proc. Appl. Cryptography Netw. Security, 2005, vol. 3531, pp. 407–422.

[32] Y. Dodis, R. Ostrovsky, L. Reyzin, and A. Smith, "Fuzzy extractors: How to generate strong keys from biometrics and other noisy data," SIAM J. Comput., vol. 38, no. 1, pp. 97–139, Mar. 2008.

[33] R. Maes, P. Tuyls, and I. Verbauwhede, "Low-overhead implementation of a soft decision helper data algorithm for SRAM PUFs," in Proc. 11th Int. Workshop Cryptographic Hardware Embedded Syst., 2009, pp. 332–347.

[34] C. Bösch, J. Guajardo, A.-R. Sadeghi, J. Shokrollahi, and P. Tuyls, "Efficient helper data key extractor on FPGAs," in Proc. 11th Int. Workshop Cryptographic Hardware Embedded Syst., 2008, pp. 181–197.

[35] M.-D. Yu and S. Devadas, "Secure and robust error correction for physical unclonable functions," IEEE Design Test Comput., vol. 27, no. 1, pp. 48–65, Jan. 2010.

[36] M.-D. Yu, D. M'Raihi, R. Sowell, and S. Devadas, "Lightweight and secure PUF key storage using limits of machine learning," in Proc. Int. Workshop Cryptographic Hardware Embedded Syst., 2011, vol. 6917, pp. 358–373.

[37] D. Lim, J. W. Lee, B. Gassend, G. E. Suh, M. van Dijk, and S. Devadas, "Extracting secret keys from integrated circuits," IEEE Trans. Very Large Scale Integr. Syst., vol. 13, no. 10, pp. 1200–1205, Oct. 2005.

[38] J. Padgette, K. Scarfone, and L. Chen, "Guide to Bluetooth security," NIST Special Publication 800-121, Revision 1, Jun. 2012.

[39] G. Van Damme, K. M. Wouters, H. Karahan, and B. Preneel, "Offline NFC payments with electronic vouchers," in Proc. 1st ACM Workshop Netw., Syst., Appl. Mobile Handhelds, 2009, pp. 25–30.

[40] C.-I. Fan, Y.-K. Liang, and C.-N. Wu, "An anonymous fair offline micropayment scheme," in Proc. Int. Conf. Inf. Soc., Jun. 2011, pp. 377–381.

[41] N. Kiran and G. Kumar, "Building robust m-commerce payment system on offline wireless network," in Proc. IEEE 5th Int. Conf. Adv. Netw. Telecommun. Syst., Dec. 2011, pp. 1–3.

[42] C.-L. Chen and J.-J. Liao, "Fair offline digital content transaction system," IET Inf. Security, vol. 6, no. 3, pp. 123–130, Sep. 2012.

[43] C.-I. Fan, V. S.-M. Huang, and Y.-C. Yu, "User efficient recoverable off-line e-cash scheme with fast anonymity revoking," Math. Comput. Model., vol. 58, no. 12, pp. 227–237, 2013.

[44] J. Liu, J. Liu, and X. Qiu, "A proxy blind signature scheme and an off-line electronic cash scheme," Wuhan Univ. J. Natural Sci., vol. 18, no. 2, pp. 117–125, 2013.

[45] N. Kiran and G. Kumar, "Reliable OSPM schema for secure transaction using mobile agent in micropayment system," in Proc. 4th Int. Conf. Comput., Commun. Netw. Technol., Jul. 2013, pp. 1–6.

[46] B. Yahid, M. Nobakht, and A. Shahbahrami, "Providing security for e-wallet using e-cheque," in Proc. 7th Int. Conf. e-Commerce Develop. Countries: Focus e-Security, Apr. 2013, pp. 1–14.

[47] P. Choi and D. K. Kim, "Design of security enhanced TPM chip against invasive physical attacks," in Proc. IEEE Int. Symp. Circuits Syst., 2012, pp. 1787–1790.

[48] J. Krämer, D. Nedospasov, A. Schlösser, and J.-P. Seifert, "Differential photonic emission analysis," in Proc. 4th Int. Conf. Constructive Side-Channel Anal. Secure Design, 2013, pp. 1–16.

[49] E. S. Canlar, M. Conti, B. Crispo, and R. Di Pietro, "Windows Mobile LiveSD forensics," J. Netw. Comput. Appl., vol. 36, no. 2, pp. 677–684, Mar. 2013.

[50] W. P. Griffin, A. Raghunathan, and K. Roy, "CLIP: Circuit level IC protection through direct injection of process variations," IEEE Trans. Very Large Scale Integr. Syst., vol. 20, no. 5, pp. 791–803, May 2012.