
SECURE DATA TRANSFER AND DELETION FROM COUNTING BLOOM FILTER IN CLOUD COMPUTING

ABSTRACT

With the rapid development of cloud storage, an increasing number of data owners prefer to
outsource their data to the cloud server, which can greatly reduce their local storage overhead.
Because different cloud service providers offer storage services of distinct quality, e.g.,
security, reliability, access speed and price, transferring data between clouds has become a
fundamental requirement for data owners who change their cloud service provider. Hence, how to
securely migrate data from one cloud to another and permanently delete the transferred data from
the original cloud becomes a primary concern of data owners. To solve this problem, we construct
a new counting Bloom filter-based scheme in this paper. The proposed scheme can not only achieve
secure data transfer but also realize permanent data deletion. Additionally, the proposed
scheme satisfies public verifiability without requiring any trusted third party. Finally, we also
develop a simulation implementation that demonstrates the practicality and efficiency of our
proposal.

Advantages
 Data confidentiality: The outsourced file may contain private information that should
be kept secret. Hence, to protect data confidentiality, the data owner needs to encrypt the
file with secure algorithms before uploading it to the cloud server.
 Data integrity: Cloud A might migrate only part of the data, or deliver unrelated
data to cloud B. Besides, the data might be polluted during the transfer process. Hence, the
data owner and cloud B should be able to verify the integrity of the transferred data to
guarantee that it is intact.
 Public verifiability: Cloud A may not move the data to cloud B or delete the data
faithfully. So, the verifiability of the transfer and deletion results should be satisfied from
the data owner's point of view.

Disadvantages
o In the existing work, the system does not provide a data integrity proof.
o The existing system performs poorly because it lacks strong encryption techniques.
1. INTRODUCTION
Cloud computing, an emerging and very promising computing paradigm, connects large-scale
distributed storage resources, computing resources and network bandwidth together [1,2]. By
using these resources, it can provide tenants with plenty of high-quality cloud services. Due to
these attractive advantages, such services (especially the cloud storage service) have been widely
adopted [3,4], by which resource-constrained data owners can outsource their data to the cloud
server, which greatly reduces their local storage overhead [5,6]. According to a Cisco
report [7], the number of Internet consumers will reach about 3.6 billion in 2019, and
about 55 percent of them will employ cloud storage services. Because of this promising market
prospect, an increasing number of companies (e.g., Microsoft, Amazon, Alibaba) offer data
owners cloud storage services with different prices, security, access speed, etc. To enjoy a more
suitable cloud storage service, data owners might change their cloud storage service provider.
Hence, they might migrate their outsourced data from one cloud to another, and then delete the
transferred data from the original cloud. According to Cisco [7], cloud traffic is expected to
make up 95% of total traffic by the end of 2021, and almost 14% of total cloud traffic will be
traffic between different cloud data centers. Foreseeably, outsourced data transfer will become
a fundamental requirement from the data owners' point of view.

To realize secure data migration, an outsourced data transfer application, Cloudsfer [8], has been
designed, which utilizes cryptographic algorithms to prevent privacy disclosure during the
transfer phase. But there are still some security problems in cloud data migration and deletion.
Firstly, to save network bandwidth, the cloud server might migrate only part of the data, or even
deliver unrelated data to cheat the data owner [9]. Secondly, because of network instability,
some data blocks may be lost during the transfer process. Meanwhile, an adversary may destroy
the transferred data blocks [10]. Hence, the transferred data may be polluted during the
migration process. Last but not least, the original cloud server might maliciously retain the
transferred data to mine its implicit benefits [11]. Such data retention is unexpected from the
data owners' point of view. In short, the cloud storage service is economically attractive, but
it inevitably suffers from serious security challenges, specifically secure data transfer,
integrity verification and verifiable deletion. These challenges, if not solved suitably, might
prevent the public from accepting and employing the cloud storage service.

Contributions. In this work, we study the problems of secure data transfer and deletion in cloud
storage, and focus on realizing public verifiability. We then propose a counting Bloom filter-
based scheme, which can not only realize provable data transfer between two different clouds but
also achieve publicly verifiable data deletion. If the original cloud server does not migrate or
remove the data honestly, the verifier (the data owner and the target cloud server) can detect
these malicious operations by verifying the returned transfer and deletion evidence. Moreover,
our proposed scheme does not need any trusted third party (TTP), which differs from the
existing solutions. Furthermore, we prove that our new proposal satisfies the desired design
goals through security analysis. Finally, simulation experiments show that our new proposal
is efficient and practical.
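
The primitive underlying the scheme is the counting Bloom filter, which replaces the bit array of a standard Bloom filter with small counters so that elements can be deleted as well as inserted. A minimal sketch (the sizes, hash derivation and class name are illustrative, not the paper's exact construction):

```java
import java.util.Arrays;

// Minimal counting Bloom filter: each cell is a counter rather than a bit,
// so an inserted element can later be removed by decrementing its cells.
public class CountingBloomFilter {
    private final int[] counters;
    private final int numHashes;

    public CountingBloomFilter(int size, int numHashes) {
        this.counters = new int[size];
        this.numHashes = numHashes;
    }

    // Derive the i-th index from two base hashes (double-hashing style).
    private int index(byte[] item, int i) {
        int h1 = Arrays.hashCode(item);
        int h2 = 0x9E3779B9 * h1;  // second hash via multiplicative mixing
        return Math.floorMod(h1 + i * h2, counters.length);
    }

    public void insert(byte[] item) {
        for (int i = 0; i < numHashes; i++) counters[index(item, i)]++;
    }

    public void delete(byte[] item) {
        for (int i = 0; i < numHashes; i++) {
            int idx = index(item, i);
            if (counters[idx] > 0) counters[idx]--;
        }
    }

    // May return false positives, never false negatives.
    public boolean mightContain(byte[] item) {
        for (int i = 0; i < numHashes; i++)
            if (counters[index(item, i)] == 0) return false;
        return true;
    }
}
```

After a data block is transferred and deleted, its counters return to zero, which is the property the scheme leverages to make deletion checkable.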

1.1 SCOPE OF THE PROJECT

 The proposed scheme achieves secure data transfer and also realizes permanent data
deletion.
 The scheme satisfies public verifiability without requiring any trusted third party
(TTP), which differs from the existing solutions.
 A simulation implementation demonstrates the practicality and efficiency of the
proposal.

1.2 EXISTING SYSTEM:

 Xue et al. [19] studied the goal of secure data deletion, and put forward a key-policy
attribute-based encryption scheme, which can achieve fine-grained data access control
and assured deletion. They achieve data deletion by removing an attribute, and use a
Merkle hash tree (MHT) to achieve verifiability, but their scheme requires a trusted authority.
 Du et al. [20] designed the Associated deletion scheme for multi-copy (ADM), which
uses a pre-deleting sequence and an MHT to achieve data integrity verification and
provable deletion. However, their scheme also requires a TTP to manage the data
keys. In 2018, Yang et al. [21] presented a blockchain-based cloud data deletion scheme,
in which the cloud executes the deletion operation and publishes the corresponding deletion
evidence on the blockchain. Then any verifier can check the deletion result by verifying the
deletion proof. Besides, they remove the bottleneck of requiring a TTP. Although these
schemes can all achieve verifiable data deletion, they cannot realize secure data transfer.
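
The MHT-based verification used by the schemes above boils down to recomputing a root hash over the data blocks. A compact sketch (SHA-256; duplicating the last node on odd levels is one common convention, assumed here):

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Illustrative Merkle hash tree root: a verifier who holds the root can
// detect any modification of the underlying data blocks.
public class MerkleRoot {
    private static byte[] sha256(byte[]... parts) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] p : parts) md.update(p);
        return md.digest();
    }

    public static byte[] root(List<byte[]> blocks) throws Exception {
        List<byte[]> level = new ArrayList<>();
        for (byte[] b : blocks) level.add(sha256(b));   // leaf hashes
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                byte[] left = level.get(i);
                // duplicate the last node when the level has odd size
                byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                next.add(sha256(left, right));
            }
            level = next;
        }
        return level.get(0);
    }
}
```

Changing any single block changes the root, which is why an MHT root serves as a succinct commitment for integrity and deletion proofs.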

1.3 PROPOSED SYSTEM:

 In the proposed work, the system studies the problems of secure data transfer and
deletion in cloud storage, and focuses on realizing public verifiability. The system
then proposes a counting Bloom filter-based scheme, which can not only realize provable
data transfer between two different clouds but also achieve publicly verifiable data
deletion. If the original cloud server does not migrate or remove the data honestly, the
verifier (the data owner and the target cloud server) can detect these malicious operations
by verifying the returned transfer and deletion evidence.
 Moreover, the proposed scheme does not need any trusted third party (TTP), which
differs from the existing solutions. Furthermore, we prove that the new proposal
satisfies the desired design goals through security analysis. Finally, the simulation
experiments show that the new proposal is efficient and practical.
2. SYSTEM ANALYSIS
2.1 FEASIBILITY STUDY
PRELIMINARY INVESTIGATION

The development of a project begins with the idea of designing a mail-enabled platform
for a small firm in which sending and receiving messages is easy and convenient; it includes
a search engine, an address book and some entertaining games. Once it is approved by the
organization and our project guide, the first activity, i.e., the preliminary investigation,
begins. The activity has three parts:

 Request Clarification
 Feasibility Study
 Request Approval

REQUEST CLARIFICATION

After the approval of the request by the organization and the project guide, with an
investigation being considered, the project request must be examined to determine precisely
what the system requires.

Here, our project is basically meant for users within the company whose systems
can be interconnected by a Local Area Network (LAN). In today's busy schedule, people need
everything to be provided in a ready-made manner. So, taking into consideration the vast use
of the Internet in day-to-day life, the corresponding development of the portal came into
existence.
FEASIBILITY ANALYSIS

An important outcome of the preliminary investigation is the determination that the system
request is feasible. This is possible only if the project is feasible within the limited
resources and time available. The different feasibilities that have to be analyzed are

 Operational Feasibility
 Economic Feasibility
 Technical Feasibility
Operational Feasibility
Operational feasibility deals with the study of the prospects of the system to be developed.
This system operationally eliminates all the tensions of the admin and helps him in effectively
tracking the project progress. This kind of automation will surely reduce the time and energy
previously consumed by manual work. Based on the study, the system proves to be
operationally feasible.
Economic Feasibility
Economic feasibility, or cost-benefit analysis, is an assessment of the economic justification
for a computer-based project. As the hardware was installed from the beginning and serves many
purposes, the hardware cost of the project is low. Since the system is network-based, any number
of employees connected to the LAN within the organization can use this tool at any time. The
Virtual Private Network is to be developed using the existing resources of the organization. So
the project is economically feasible.

Technical Feasibility
According to Roger S. Pressman, technical feasibility is the assessment of the technical
resources of the organization. The organization needs IBM-compatible machines with a graphical
web browser connected to the Internet and an intranet. The system is developed for a platform-
independent environment. Java Server Pages, JavaScript, HTML, SQL Server and WebLogic
Server are used to develop the system. The technical feasibility study has been carried out. The
system is technically feasible for development and can be developed with the existing facilities.

REQUEST APPROVAL
Not all requested projects are desirable or feasible. Some organizations receive so many
project requests from client users that only a few of them can be pursued. However, projects
that are both feasible and desirable should be put into the schedule. After a project request is
approved, its cost, priority, completion time and personnel requirements are estimated and used
to determine where to add it to the project list. Only after these factors are approved can
development work be launched.
SYSTEM DESIGN AND DEVELOPMENT

INPUT DESIGN

Input design plays a vital role in the life cycle of software development, and it requires very
careful attention from developers. The goal of input design is to feed data to the application as
accurately as possible. So inputs are supposed to be designed effectively so that errors
occurring while feeding them are minimized. According to software engineering concepts, the input
forms or screens are designed to provide validation control over the input limit, range and other
related validations.

This system has input screens in almost all the modules. Error messages are developed to
alert the user whenever he commits a mistake and to guide him in the right way so that
invalid entries are not made. Let us see this in more depth under module design.

Input design is the process of converting user-created input into a computer-based
format. The goal of input design is to make data entry logical and free from errors. Errors
in the input are controlled by the input design. The application has been developed in a
user-friendly manner. The forms have been designed in such a way that during processing the
cursor is placed in the position where data must be entered. The user is also provided with an
option to select an appropriate input from various alternatives related to the field in certain
cases.

Validations are required for each data item entered. Whenever a user enters erroneous data,
an error message is displayed, and the user can move on to the subsequent pages only after
completing all the entries on the current page.

OUTPUT DESIGN
The output from the computer is mainly required to create an efficient method of
communication within the company, primarily between the project leader and his team members,
in other words, the administrator and the clients. The output of the VPN is a system which allows
the project leader to manage his clients in terms of creating new clients and assigning new
projects to them, maintaining a record of project validity and providing folder-level access to
each client on the user side depending on the projects allotted to him. After completion of a
project, a new project may be assigned to the client. User authentication procedures are
maintained at the initial stages itself. A new user may be created by the administrator himself,
or a user can register as a new user himself, but the task of assigning projects and validating a
new user rests with the administrator only.

The application starts running when it is executed for the first time. The server has to be
started, and then Internet Explorer is used as the browser. The project will run on the local
area network, so the server machine will serve as the administrator while the other connected
systems act as the clients. The developed system is highly user-friendly and can be easily
understood by anyone using it, even for the first time.
2.2 FUNCTIONAL REQUIREMENTS

The owner module lets owners upload their files under an access policy. The owner first
gets the public key for a particular file to upload; after getting this public key, the owner
requests the secret key for that file. Using the secret key, the owner uploads the file and can
find all cost and memory details, transfer data from one cloud to another based on price
(storage mode switching), and check all cloud VM details and the price list.

The user module helps the client search for a file using the file ID and file name. If
the file ID or name is incorrect, the user does not get the file. To decrypt the file, the user
must have the secret key, and can view all attackers, view resource utilization profiles (the
total memory used by each data owner), view all VM and price details, and perform resource
migration checkpointing (if usage exceeds the threshold).

2.3 NON-FUNCTIONAL REQUIREMENTS

• Maintainability: Maintainability makes future maintenance easier and helps meet new
requirements. Our project can support expansion.
• Robustness: Robustness is the quality of being able to withstand stress, pressure or
changes in procedure or circumstance. Our project also provides it.
• Reliability: Reliability is the ability of a person or system to perform and maintain its
functions in routine circumstances. Our project also provides it.
• Size: The size of a particular application plays a major role; if the size is small,
efficiency will be high. The size of the database we have developed is 5.05 MB.
• Speed: If the speed is high, then it is good. Since the number of lines in our code is
small, the speed is high.
• Power consumption: In battery-powered systems, power consumption is very
important. In the requirement stage, power can be specified in terms of battery life.
However, the allowable wattage can't be defined by the customer. Since the number of
lines of code is small, the CPU uses less time to execute, and hence power usage will be less.
2.4 SYSTEM REQUIREMENTS

H/W System Configuration:-


➢ Processor - Pentium –IV
➢ RAM - 4 GB (min)
➢ Hard Disk - 20 GB
➢ Key Board - Standard Windows Keyboard
➢ Mouse - Two or Three Button Mouse
➢ Monitor - SVGA

2.5 Software Requirements:


 Operating System - Windows XP
 Coding Language - Java/J2EE (JSP, Servlet)
 Front End - J2EE
 Back End - My SQL
3. SYSTEM DESIGN
3.1 DATA FLOW DIAGRAM (DFD)

1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used
to represent a system in terms of the input data to the system, the various processing carried
out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to
model the system components. These components are the system processes, the data used by
the processes, the external entities that interact with the system, and the information flows in
the system.
3. A DFD shows how information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and the
transformations that are applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction. DFDs may be
partitioned into levels that represent increasing information flow and functional detail.
Level 0
The data owner uploads files to the cloud servers (Cloud Server 1 to Cloud Server 4) and sends
the transaction details; logs for data access, uploading, etc. are sent to the proxy server.
Level 1
The end user requests a file from the receiver cloud servers; the servers check the file name
and secret key, and authorize the file only when the correct file name and secret key are
entered.
Level 2
The data owner sends a data integrity verification request to the cloud servers through the
proxy; the proxy processes the request and response, and the data owner verifies the integrity
of the file.
3.2 UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized general-purpose
modeling language in the field of object-oriented software engineering. The standard is managed,
and was created, by the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form, UML comprises two major components: a meta-model
and a notation. In the future, some form of method or process may also be added to, or
associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing,
constructing and documenting the artifacts of a software system, as well as for business
modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven successful
in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of software
projects.

GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can
develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns
and components.
7. Integrate best practices.
3.2.1 USE CASE DIAGRAM

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented as
use cases), and any dependencies between those use cases. The main purpose of a use case diagram
is to show what system functions are performed for which actor. Roles of the actors in the system
can be depicted.
3.2.2 CLASS DIAGRAM

In software engineering, a class diagram in the Unified Modeling Language (UML)
is a type of static structure diagram that describes the structure of a system by showing the
system's classes, their attributes, operations (or methods), and the relationships among the
classes. It explains which class contains which information.

3.2.3 SEQUENCE DIAGRAM
A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that
shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or
timing diagrams.

Participants: Data Owner, End User, Proxy Server, Cloud Servers.
The data owner registers and receives a registration confirmation, requests login for VM
access, and sends a file to the cloud servers, which return a file-stored confirmation. The
end user registers, is authorized, and requests a file; the proxy server checks the MAC value
to verify the file's integrity and informs the data owner about the safety of the file. The
proxy can also view files, users, end users and cloud details, and handle file deletion
requests; if an attacker modifies a file, the file is blocked, and the attacker's user account
can later be unblocked.
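
The "checks the MAC value" step in the flow above can be sketched with the JDK's HMAC-SHA256. The class and method names are illustrative, and key distribution between owner and proxy is out of scope here:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Illustrative proxy-side integrity check: recompute the HMAC of the
// received file bytes and compare it with the tag stored at upload time.
public class FileMacCheck {
    public static byte[] tag(byte[] key, byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data);
    }

    public static boolean verify(byte[] key, byte[] data, byte[] expected) throws Exception {
        // MessageDigest.isEqual is a constant-time comparison, avoiding
        // timing leaks about how many tag bytes matched
        return MessageDigest.isEqual(tag(key, data), expected);
    }
}
```

If the recomputed tag differs from the stored one, the file was modified (e.g., by an attacker), and the proxy can block it as in the flow above.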

3.4 DATABASE TABLES


In relational database terms, a table is responsible for storing data in the database.
In this project we use MySQL 5.5, and we created a new database. The new database
name is sdt.
A database consists of rows and columns. In this database we used the commands create,
alter, drop, truncate, insert, update and delete. These are also used to perform specific tasks,
functions and queries on the data.

3.4.1 charm_amazons

3.4.2 charm_amazons3

3.4.3 charm_alynosstr

3.4.4 charm_avmcost

3.4.5 charm_cloud

3.5 SYSTEM ARCHITECTURE


CS1 ---- Rack space
CS2 ---- Amazon S3
CS3 ---- Windows Azure
CS4 ---- Aliyun OSS

Price    Memory
2000     18000
6000     40000
8000     1000000
4. MODULES

4.1 Modules
Multi-cloud:
Lots of data centers are distributed around the world, and one region, such as America or Asia,
usually has several data centers belonging to the same or different cloud providers. So
technically all the data centers can be accessed by a user in a certain region, but the user
would experience different performance: the latency of some data centers is very low, while that
of others may be intolerably high. The system chooses clouds for storing data from all the
available clouds that meet the performance requirement, that is, those that can offer acceptable
throughput and latency when they are not in outage. The storage mode transition does not impact
the performance of the service. Since it is not a latency-sensitive process, we can decrease the
priority of transition operations and implement the transition in batches when the proxy has a
low workload.
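
The cloud-selection rule described above — keep only clouds that meet the throughput and latency requirements and are not in outage — can be sketched as a simple filter. The record fields and thresholds are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative cloud selection: keep only data centers that meet the
// performance requirement and are currently available.
public class CloudSelector {
    static class Cloud {
        final String name;
        final double latencyMs;       // measured access latency
        final double throughputMBps;  // measured throughput
        final boolean inOutage;
        Cloud(String name, double latencyMs, double throughputMBps, boolean inOutage) {
            this.name = name;
            this.latencyMs = latencyMs;
            this.throughputMBps = throughputMBps;
            this.inOutage = inOutage;
        }
    }

    public static List<Cloud> select(List<Cloud> all,
                                     double maxLatencyMs, double minThroughputMBps) {
        List<Cloud> ok = new ArrayList<>();
        for (Cloud c : all)
            if (!c.inOutage
                    && c.latencyMs <= maxLatencyMs
                    && c.throughputMBps >= minThroughputMBps)
                ok.add(c);
        return ok;
    }
}
```

In practice the measurements would come from the workload-statistics component described below; here they are supplied directly for clarity.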

Data Owner:
In this section, we elaborate a cost-efficient data hosting model with high availability in a
heterogeneous multi-cloud, named "MULTI CLOUD". The architecture of CHARM is shown in
Figure 3. The whole model is located in the proxy of this system. There are four main components
in MULTI CLOUD: Data Hosting, Storage Mode Switching (SMS), Workload Statistic, and
Predictor. Workload Statistic keeps collecting and processing access logs to guide the placement
of data. It also sends statistical information to Predictor, which guides the actions of SMS.
Data Hosting stores data using replication or erasure coding, according to the size and access
frequency of the data. SMS decides whether the storage mode of certain data should be changed
from replication to erasure coding or vice versa, according to the output of Predictor. The
implementation of changing the storage mode runs in the background, in order not to impact the
online service. Predictor is used to predict the future access frequency of files. The time
interval for prediction is one month, that is, we use the former months to predict the access
frequency of files in the next month. However, we do not put emphasis on the design of the
predictor, because there are already many good prediction algorithms. Moreover, a very simple
predictor, which uses the weighted moving average approach, works well in our data hosting
model. Data Hosting and SMS are two important modules in MULTI CLOUD. Data Hosting decides the
storage mode and the clouds in which the data should be stored. This is a complex integer
programming problem demonstrated in the following subsections. We then illustrate in detail how
SMS works in Section V, that is, when and how many times the transition should be implemented.
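
The simple predictor mentioned above, a weighted moving average over the past months' access counts, can be sketched as follows; the linearly increasing weights are an illustrative choice:

```java
// Illustrative weighted-moving-average predictor: recent months get
// higher weight when forecasting next month's access frequency.
public class AccessPredictor {
    // history[0] is the oldest month; weights 1..n favor recent data.
    public static double predictNext(double[] history) {
        double weightedSum = 0, weightTotal = 0;
        for (int i = 0; i < history.length; i++) {
            double w = i + 1;            // linearly increasing weight
            weightedSum += w * history[i];
            weightTotal += w;
        }
        return weightedSum / weightTotal;
    }
}
```

For example, with monthly access counts 10, 20, 30 and weights 1, 2, 3, the forecast is (10 + 40 + 90) / 6 ≈ 23.3, leaning toward the most recent month.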

Cloud Storage:
Cloud storage services have become increasingly popular. Because of the importance of
privacy, many cloud storage encryption schemes have been proposed to protect data from those
who do not have access. All such schemes assumed that cloud storage providers are safe and
cannot be hacked; however, in practice, some authorities (i.e., coercers) may force cloud
storage providers to reveal user secrets or confidential data on the cloud, thus altogether
circumventing storage encryption schemes. In this paper, we present our design for a new cloud
storage encryption scheme that enables cloud storage providers to create convincing fake user
secrets to protect user privacy. Since coercers cannot tell whether the obtained secrets are
true or not, the cloud storage providers ensure that user privacy is still securely protected.
Most of the proposed schemes assume that cloud storage service providers, or the trusted third
parties handling key management, are trusted and cannot be hacked; however, in practice, some
entities may intercept communications between users and cloud storage providers and then compel
storage providers to release user secrets by using government power or other means. In this
case, encrypted data are assumed to be known, and storage providers are requested to release
user secrets. We aimed to build an encryption scheme that could help cloud storage providers
avoid this predicament. In our approach, we offer cloud storage providers the means to create
fake user secrets. Given such fake user secrets, outside coercers can only obtain forged data
from a user's stored ciphertext. Once coercers think the received secrets are real, they will be
satisfied, and more importantly, cloud storage providers will not have revealed any real
secrets. Therefore, user privacy is still protected. This concept comes from a special kind of
encryption scheme called deniable encryption.

Owner Module:
The owner module lets owners upload their files under an access policy. The owner first gets the
public key for a particular file to upload; after getting this public key, the owner requests
the secret key for that file. Using the secret key, the owner uploads the file and can find all
cost and memory details, view the owner's VM details and make purchases, browse, encrypt and
upload files, check the data integrity proof, transfer data from one cloud to another based on
price (storage mode switching), and check all cloud VM details and the price list.

User Module:
This module helps the client search for a file using the file ID and file name. If the file ID
or name is incorrect, the user does not get the file; otherwise, the server asks for the secret
key and the user gets the encrypted file. To decrypt the file, the user must have the secret
key, and can view all attackers, view resource utilization profiles (the total memory used by
each data owner), view all VM and price details, and perform resource migration checkpointing
(if usage exceeds the threshold).

4.2 SAMPLE CODE

1.CloudServerMain.jsp
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Cloud Main</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link href="css/style.css" rel="stylesheet" type="text/css" />
<link rel="stylesheet" type="text/css" href="css/coin-slider.css" />
<script type="text/javascript" src="js/cufon-yui.js"></script>
<script type="text/javascript" src="js/droid_sans_400-droid_sans_700.font.js"></script>
<script type="text/javascript" src="js/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="js/script.js"></script>
<script type="text/javascript" src="js/coin-slider.min.js"></script>
<style type="text/css">
<!--
.style1 {color: #0000FF}
.style2 {color: #00FF00}
.style3 {font-weight: bold}
.style4 {
color: #FF0000;
font-weight: bold;
}
-->
</style>
</head>
<body>
<div class="main">
<div class="header">
<div class="header_resize">
<div class="logo">
<h1><a href="index.html">Secure Data Transfer<small>and Deletion from Counting Bloom
Filter in Cloud Computing</small></a></h1>
</div>
<div class="clr"></div>
<div class="menu_nav">
<ul>
<li ><a href="index.html"><span>Home Page</span></a></li>
<li><a href="DataOwner.html"><span>Data Owner </span></a></li>
<li><a href="ProxyServer.html"><span>Proxy Server</span></a></li>
<li class="active"><a href="CloudServer.html"><span>CloudServer</span></a></li>
<li><a href="EndUser.html"><span>EndUser</span></a></li>
</ul>
</div>
<div class="clr"></div>
<div class="slider">
<div id="coin-slider"> <a href="#"><img src="images/slide1.jpg" width="960" height="313"
alt="" /> </a> <a href="#"><img src="images/slide2.jpg" width="960" height="313" alt="" /> </a>
<a href="#"><img src="images/slide3.jpg" width="960" height="313" alt="" /> </a> </div>
<div class="clr"></div>
</div>
<div class="clr"></div>
</div>
</div>
<div class="content">
<div class="content_resize">
<div class="mainbar">
<div class="article">
<%
String a = (String) application.getAttribute("cloudName");
String usr2 = "";

// Supported clouds: Rackspace, Amazon S3, Windows Azure, Aliyun OSS
if (!(a.equalsIgnoreCase("Amazon S3") || a.equalsIgnoreCase("Windows Azure")
        || a.equalsIgnoreCase("Aliyun OSS"))) {
    usr2 = a;
    application.setAttribute("ocn", usr2);
%>
<h2 class="style4">Welcome To <%=usr2%> Cloud Server Control Panel</h2>
<span class="style4">
<%
}
if (!(a.equalsIgnoreCase("Rackspace") || a.equalsIgnoreCase("Windows Azure")
        || a.equalsIgnoreCase("Aliyun OSS"))) {
    String b = (String) application.getAttribute("cnames2");
    usr2 = b;
    application.setAttribute("ocn", usr2);
%>
</span>
<h2 class="style4">Welcome To <%=usr2%> Control Panel</h2>
<span class="style4">
<%
}
if (!(a.equalsIgnoreCase("Rackspace") || a.equalsIgnoreCase("Amazon S3")
        || a.equalsIgnoreCase("Aliyun OSS"))) {
    String c = (String) application.getAttribute("cnames3");
    usr2 = c;
    application.setAttribute("ocn", usr2);
%>
</span>
<h2 class="style4">Welcome To <%=usr2%> Control Panel</h2>
<span class="style4">
<%
}
if (!(a.equalsIgnoreCase("Rackspace") || a.equalsIgnoreCase("Amazon S3")
        || a.equalsIgnoreCase("Windows Azure"))) {
    String d = (String) application.getAttribute("cnames4");
    usr2 = d;
    application.setAttribute("ocn", usr2);
%>
</span>
<h2 class="style4">Welcome To <%=usr2%> Control Panel</h2>
<%
}
%>

<p class="infopost"><span class="style3 style2">CloudServer </span>&nbsp;&nbsp;|
<span class="style1">&nbsp;&nbsp;Control Panel </span></p>
<div align="center">
<p>&nbsp;</p>
<p>&nbsp;</p>
<p><img src="images/gal6.jpg" width="374" height="186" />
</p>
</div>
<div class="img"></div>
<div class="clr"></div>
</div>
</div>
<div class="sidebar">
<div class="searchform">
<form id="formsearch" name="formsearch" method="post" action="#">
<span>
<input name="editbox_search" class="editbox_search" id="editbox_search"
maxlength="80" value="Search our site:" type="text" />
</span>
<input name="button_search" src="images/search.gif" class="button_search" type="image"
/>
</form>
</div>
<div class="clr"></div>
<div class="gadget">
<h2 class="star"><span>Cloud Menu</span></h2>
<div class="clr"></div>
<ul class="sb_menu style3">
<li><a href="ViewDataOwners.jsp">View Data Owners</a></li>
<li><a href="ViewUsers.jsp">View Users</a></li>
<li><a href="GetThreshold.jsp">View Threshold Details</a></li>
<li><a href="GetVMR.jsp">VM Resources</a></li>
<li><a href="ViewMigrateDetails.jsp">View Transfer Cloud</a></li>
<li><a href="ViewCloudFiles.jsp">View All Files</a></li>
<li><a href="MemoryUtil.jsp">View Memory Utilization</a></li>
<li><a href="ViewAttackers.jsp">View All Attackers</a></li>
<li><a href="UnblockUser.jsp">UnRevoke Vendor</a></li>
<li><a href="index.html">Log Out</a></li>
</ul>
</div>
</div>
<div class="clr"></div>
</div>
</div>
<div class="fbg"></div>
<div class="footer">
<div class="footer_resize">
<p class="lf"></p>
<p class="rf"></p>
<div style="clear:both;"></div>
</div>
</div>
</div>
<div align="center"></div></body>
</html>

2. DataOwnerMain.jsp

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Data Owner Main</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link href="css/style.css" rel="stylesheet" type="text/css" />
<link rel="stylesheet" type="text/css" href="css/coin-slider.css" />
<script type="text/javascript" src="js/cufon-yui.js"></script>
<script type="text/javascript" src="js/droid_sans_400-droid_sans_700.font.js"></script>
<script type="text/javascript" src="js/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="js/script.js"></script>
<script type="text/javascript" src="js/coin-slider.min.js"></script>
<style type="text/css">
<!--
.style1 {
color: #FF0000;
font-weight: bold;
}
.style2 {color: #0000FF}
.style3 {color: #00FF00}
.style4 {
color: #000000;
font-weight: bold;
}
-->
</style>
</head>
<body>
<div class="main">
<div class="header">
<div class="header_resize">
<div class="logo">
<h1><a href="index.html">Secure Data Transfer <small>and Deletion from Counting Bloom
Filter in Cloud Computing</small></a></h1>
</div>
<div class="clr"></div>
<div class="menu_nav">
<ul>
<li ><a href="index.html"><span>Home Page</span></a></li>
<li class="active"><a href="DataOwner.html"><span>Data Owner </span></a></li>
<li><a href="ProxyServer.html"><span>Proxy Server</span></a></li>
<li><a href="CloudServer.html"><span>CloudServer</span></a></li>
<li><a href="EndUser.html"><span>EndUser</span></a></li>
</ul>
</div>
<div class="clr"></div>
<div class="slider">
<div id="coin-slider"> <a href="#"><img src="images/slide1.jpg" width="960" height="313"
alt="" /> </a> <a href="#"><img src="images/slide2.jpg" width="960" height="313" alt="" /> </a>
<a href="#"><img src="images/slide3.jpg" width="960" height="313" alt="" /> </a> </div>
<div class="clr"></div>
</div>
<div class="clr"></div>
</div>
</div>
<div class="content">
<div class="content_resize">
<div class="mainbar">
<div class="article">
<h2 class="style1">Welcome <%=application.getAttribute("ocl")%> DataOwner ::
<%=application.getAttribute("onname")%></h2>
<p class="infopost"><span class="style3">Owner </span>&nbsp;&nbsp;|
&nbsp;&nbsp;<span class="style2">Control Panel </span></p>
<div class="clr"></div>
<div><p align="center" class="style4"><img src="images/Owner.jpg" width="577"
height="337" /></p>
<p class="spec"><a href="#" class="rm"></a></p>
</div>
<div class="clr"></div>
</div>
</div>
<div class="sidebar">
<div class="searchform"></div>
<div class="clr"></div>
<div class="gadget">
<h2 class="star"><span>Owner Menu</span></h2>
<div class="clr"></div>
<ul class="sb_menu">
<li><strong><a href="GetCloudCost.jsp">Find Cost and Memory</a></strong></li>
<li><strong><a href="PurchaseVm.jsp">Purchase VM</a></strong></li>
<li><strong><a href="Vmdetails.jsp">My VM Details</a></strong></li>
<li><strong><a href="Upload.jsp">Upload</a></strong></li>
<li><strong><a href="Verify.jsp">Data Integrity Proof</a></strong></li>
<li><strong><a href="Migrate.jsp">Transfer Your Data</a></strong></li>
<li><strong><a href="VReq.jsp">View Request</a></strong></li>
<li><strong><a href="ViewOwnerDetails.jsp">View Owner Files</a></strong></li>
<li><strong><a href="index.html">Log Out</a></strong></li>
</ul>
</div>
</div>
<div class="clr"></div>
</div>
</div>
<div class="fbg"></div>
<div class="footer">
<div class="footer_resize">
<p class="lf"></p>
<p class="rf"></p>
<div style="clear:both;"></div>
</div>
</div>
</div>
<div align="center"></div></body>
</html>
5. TESTING

5.1 SYSTEM TESTING

TESTING METHODOLOGIES
The following are the Testing Methodologies:

o Unit Testing.
o Integration Testing.
o User Acceptance Testing.
o Output Testing.
o Validation Testing.

1. Unit Testing
Unit testing focuses verification effort on the smallest unit of software design, the module. Unit testing exercises specific paths in a module's control structure to ensure complete coverage and maximum error detection. This test focuses on each module individually, ensuring that it functions properly as a unit; hence the name unit testing.
During this testing, each module is tested individually and the module interfaces are verified for consistency with the design specification. All important processing paths are tested for the expected results, and all error-handling paths are also tested.
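As an illustration of exercising every control path, here is a minimal plain-Java sketch; the module and its quota rule are hypothetical, not part of the project code:

```java
// Hypothetical unit under test: classifies an upload size against a quota.
class QuotaChecker {
    static String check(long sizeBytes, long quotaBytes) {
        if (sizeBytes < 0) throw new IllegalArgumentException("negative size");
        if (sizeBytes > quotaBytes) return "REJECT";
        return "ACCEPT";
    }
}

class QuotaCheckerTest {
    public static void main(String[] args) {
        // One case per control path, including the error-handling path.
        expect("ACCEPT", QuotaChecker.check(10, 100));
        expect("REJECT", QuotaChecker.check(200, 100));
        boolean threw = false;
        try {
            QuotaChecker.check(-1, 100);
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("error path not exercised");
        System.out.println("all paths in the unit pass");
    }

    static void expect(String want, String got) {
        if (!want.equals(got)) throw new AssertionError(want + " != " + got);
    }
}
```

Each branch of the unit, including the error-handling branch, is driven by its own test input, which is the path-coverage idea described above.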

2. Integration Testing
Integration testing addresses the issues associated with the dual problems of verification and program construction. After the software has been integrated, a set of high-order tests is conducted. The main objective of this testing process is to take unit-tested modules and build a program structure that matches the design.

The following are the types of Integration Testing:


1. Top-Down Integration
This method is an incremental approach to the construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main program module. The modules subordinate to the main program module are incorporated into the structure in either a depth-first or breadth-first manner.
In this method, the software is tested from the main module, and individual stubs are replaced as the test proceeds downwards.
2. Bottom-Up Integration
This method begins construction and testing with the modules at the lowest level in the program structure. Since the modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated. The bottom-up integration strategy may be implemented with the following steps:

 The low-level modules are combined into clusters that perform a specific software sub-function.
 A driver (i.e., the control program for testing) is written to coordinate test case input and output.
 The cluster is tested.
 Drivers are removed and clusters are combined, moving upward in the program structure.

The bottom-up approach tests each module individually; each module is then integrated with a main module and tested for functionality.
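A minimal sketch of the driver idea (the module names and the sub-function are hypothetical): two low-level modules are combined into a cluster, and a throwaway driver feeds test inputs and checks the combined output before the cluster is wired to higher levels.

```java
// Two hypothetical low-level modules combined into one cluster.
class ChecksumModule {
    static int checksum(byte[] data) {
        int sum = 0;
        for (byte b : data) sum = (sum + (b & 0xFF)) % 65521; // simple running sum
        return sum;
    }
}

class EncodeModule {
    static String encode(int checksum) {
        return Integer.toHexString(checksum);
    }
}

// The driver: a throwaway control program that coordinates test case
// input and output for the cluster before it is integrated upward.
class ClusterDriver {
    public static void main(String[] args) {
        byte[][] inputs = { {1, 2, 3}, {} };
        String[] expected = { "6", "0" };
        for (int i = 0; i < inputs.length; i++) {
            String got = EncodeModule.encode(ChecksumModule.checksum(inputs[i]));
            if (!got.equals(expected[i]))
                throw new AssertionError("case " + i + ": got " + got);
        }
        System.out.println("cluster passes its driver tests");
    }
}
```

Once the cluster passes, the driver is discarded and the cluster is combined with the next level up, as the steps above describe.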

3. User Acceptance Testing


User acceptance of a system is the key factor for the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with the prospective system users during development and making changes wherever required. The system provides a friendly user interface that can easily be understood even by a person who is new to it.

4. Output Testing
After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output in the specified format. The outputs generated or displayed by the system are tested by asking the users about the format they require. The output format is therefore considered in two ways: one on screen and the other in printed format.

5. Validation Checking
Validation checks are performed on the following fields.

Text Field:
The text field can contain only a number of characters less than or equal to its size. The text fields are alphanumeric in some tables and alphabetic in others. An incorrect entry always flashes an error message.

Numeric Field:
The numeric field can contain only the digits 0 to 9. An entry of any other character flashes an error message. The individual modules are checked for accuracy against what they have to perform. Each module is subjected to a test run along with sample data. The individually tested modules are then integrated into a single system. Testing involves executing the program with real data; the existence of any program defect is inferred from the output. The testing should be planned so that all the requirements are individually tested.

A successful test is one that brings out the defects for inappropriate data and produces output revealing the errors in the system.
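These two field checks can be sketched as simple predicates; the size limits and the alphanumeric/alphabetic switch are illustrative, not taken from the project code:

```java
// Sketch of the two field checks described above; limits are illustrative.
class FieldValidator {
    // Text field: at most maxLen characters, alphabetic only or
    // alphanumeric depending on the table the field belongs to.
    static boolean validText(String s, int maxLen, boolean alphanumeric) {
        if (s == null || s.isEmpty() || s.length() > maxLen) return false;
        for (char c : s.toCharArray()) {
            boolean ok = Character.isLetter(c) || (alphanumeric && Character.isDigit(c));
            if (!ok) return false;
        }
        return true;
    }

    // Numeric field: only the digits 0 to 9 are accepted.
    static boolean validNumeric(String s) {
        if (s == null || s.isEmpty()) return false;
        for (char c : s.toCharArray())
            if (c < '0' || c > '9') return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(validNumeric("12345"));         // digits only: valid
        System.out.println(validText("Owner1", 8, true));  // alphanumeric: valid
        System.out.println(validText("Owner1", 8, false)); // alphabetic only: invalid
    }
}
```

An entry failing either predicate would trigger the error message described above.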
Preparation of Test Data

The above testing is done by taking various kinds of test data. Preparation of test data plays a vital role in system testing. After preparing the test data, the system under study is tested using it. While testing the system with test data, errors are again uncovered and corrected by using the above testing steps, and the corrections are noted for future use.
Using Live Test Data:

Live test data are those that are actually extracted from organization files. After a system is
partially constructed, programmers or analysts often ask users to key in a set of data from their
normal activities. Then, the systems person uses this data as a way to partially test the system. In
other instances, programmers or analysts extract a set of live data from the files and have them
entered themselves.
It is difficult to obtain live data in sufficient amounts to conduct extensive testing. And although realistic data show how the system will perform for the typical processing requirement (assuming the live data entered are in fact typical), such data generally will not test all combinations or formats that can enter the system. This bias toward typical values does not provide a true system test and in fact ignores the cases most likely to cause system failure.

Using Artificial Test Data:


Artificial test data are created solely for test purposes, since they can be generated to test all combinations of formats and values. In other words, the artificial data, which can quickly be prepared by a data-generating utility program in the information systems department, make possible the testing of all logic and control paths through the program.

The most effective test programs use artificial test data generated by persons other than
those who wrote the programs. Often, an independent team of testers formulates a testing plan,
using the systems specifications.
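A data-generating utility of this kind can be sketched as follows; the field format and limits are illustrative, not taken from the project:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data-generating utility: for one numeric field it emits valid,
// boundary, and invalid values so every validation path gets exercised.
class TestDataGenerator {
    static List<String> numericFieldCases(int maxLen) {
        List<String> cases = new ArrayList<>();
        cases.add("0");                     // lower boundary
        cases.add("9".repeat(maxLen));      // upper boundary: maximum length
        cases.add("9".repeat(maxLen + 1));  // invalid: one character too long
        cases.add("12a4");                  // invalid: non-digit character
        cases.add("");                      // invalid: empty entry
        return cases;
    }

    public static void main(String[] args) {
        for (String c : numericFieldCases(4))
            System.out.println("case: [" + c + "]");
    }
}
```

Because the generator enumerates boundary and invalid values systematically, it covers exactly the atypical cases that live data tends to miss.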

The package “Secure Data Transfer and Deletion from Counting Bloom Filter in Cloud Computing” has satisfied all the requirements specified in the software requirement specification and was accepted.

5.2 USER TRAINING

Whenever a new system is developed, user training is required to educate them about the
working of the system so that it can be put to efficient use by those for whom the system has been
primarily designed. For this purpose the normal working of the project was demonstrated to the
prospective users. Its working is easily understandable and since the expected users are people who
have good knowledge of computers, the use of this system is very easy.
5.3 MAINTENANCE

This covers a wide range of activities, including correcting code and design errors. To reduce the need for maintenance in the long run, we have more accurately defined the user's requirements during the process of system development. Depending on the requirements, this system has been developed to satisfy the needs to the largest possible extent. With developments in technology, it may be possible to add many more features based on future requirements. The coding and design are simple and easy to understand, which will make maintenance easier.

TESTING STRATEGY :

A strategy for system testing integrates system test cases and design techniques into a well-planned series of steps that results in the successful construction of software. The testing strategy must incorporate test planning, test case design, test execution, and the resultant data collection and evaluation. A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against user requirements.

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding. Testing represents an interesting anomaly for the software. Thus, a series of tests is performed on the proposed system before it is ready for user acceptance testing.

SYSTEM TESTING:
Software, once validated, must be combined with other system elements (e.g., hardware, people, databases). System testing verifies that all elements are proper and that overall system function and performance are achieved. It also tests to find discrepancies between the system and its original objective, current specifications, and system documentation.

UNIT TESTING:

In unit testing, different modules are tested against the specifications produced during the design of the modules. Unit testing is essential for verification of the code produced during the coding phase; hence the goal is to test the internal logic of the modules. Using the detailed design description as a guide, important control paths are tested to uncover errors within the boundary of the modules. This testing is carried out during the programming stage itself. In this type of testing step, each module was found to be working satisfactorily with regard to the expected output from the module.

In due course, the latest technology advancements will be taken into consideration. As part of the technical build-up, many components of the networking system will be generic in nature so that future projects can either use or interact with them. The future holds a lot to offer to the development and refinement of this project.
6. OUTPUT SCREENSHOTS
HOME PAGE

DATA OWNER LOGIN

PROXY SERVER
CLOUD SERVER LOGIN

END USER LOGIN


OWNER FILE DETAILS

7. CONCLUSION AND FUTURE IMPLEMENTATION


CONCLUSIONS
In cloud storage, the data owner cannot be sure that the cloud server will execute the data transfer and deletion operations honestly. To solve this problem, we propose a CBF-based secure data transfer scheme that can also realize verifiable data deletion. In our scheme, cloud B can check the integrity of the transferred data, which guarantees that the data is entirely migrated. Moreover, cloud A adopts a CBF to generate a deletion evidence after deletion, which the data owner uses to verify the deletion result. Hence, cloud A cannot behave maliciously and cheat the data owner successfully. Finally, the security analysis and simulation results validate the security and practicability of our proposal, respectively.
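The deletion-evidence idea rests on the counting Bloom filter's support for deletion. A minimal sketch of the counter mechanics (the sizes and hashing here are illustrative, not the paper's exact construction):

```java
// Minimal counting Bloom filter sketch: inserting an element increments k
// counters and deleting decrements them, so deletion leaves a checkable state.
class CountingBloomFilter {
    private final int[] counters;
    private final int k; // number of hash positions per element

    CountingBloomFilter(int size, int k) {
        this.counters = new int[size];
        this.k = k;
    }

    // Derive the i-th counter position via simple double hashing.
    private int index(String element, int i) {
        int h1 = element.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9e3779b9;
        return Math.floorMod(h1 + i * h2, counters.length);
    }

    void insert(String element) {
        for (int i = 0; i < k; i++) counters[index(element, i)]++;
    }

    void delete(String element) {
        for (int i = 0; i < k; i++) counters[index(element, i)]--;
    }

    // All k counters non-zero: possibly present; any zero counter: absent.
    boolean mightContain(String element) {
        for (int i = 0; i < k; i++)
            if (counters[index(element, i)] == 0) return false;
        return true;
    }

    public static void main(String[] args) {
        CountingBloomFilter cbf = new CountingBloomFilter(1024, 3);
        cbf.insert("block-1");
        System.out.println(cbf.mightContain("block-1")); // true
        cbf.delete("block-1");
        System.out.println(cbf.mightContain("block-1")); // false: counters back to zero
    }
}
```

In the scheme, the filter state after deletion is what serves as the verifiable evidence; this sketch only shows why deletion is checkable at all, unlike in a plain Bloom filter.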

FUTURE WORK
Like all existing solutions, our scheme considers data transfer between two different cloud servers. However, with the development of cloud storage, the data owner might want to simultaneously migrate the outsourced data from one cloud to two or more target clouds, and these multiple target clouds might collude to cheat the data owner maliciously. Hence, provable data migration among three or more clouds requires further exploration.
REFERENCES

[1] C. Yang and J. Ye, “Secure and efficient fine-grained data access control scheme in cloud
computing”, Journal of High Speed Networks, Vol.21, No.4, pp.259–271, 2015.
[2] X. Chen, J. Li, J. Ma, et al., “New algorithms for secure outsourcing of modular
exponentiations”, IEEE Transactions on Parallel and Distributed Systems, Vol.25, No.9, pp.2386–
2396, 2014.
[3] P. Li, J. Li, Z. Huang, et al., “Privacy-preserving outsourced classification in cloud computing”,
Cluster Computing, Vol.21, No.1, pp.277–286, 2018.
[4] B. Varghese and R. Buyya, “Next generation cloud computing: New trends and research
directions”, Future Generation Computer Systems, Vol.79, pp.849–861, 2018.
[5] W. Shen, J. Qin, J. Yu, et al., “Enabling identity-based integrity auditing and data sharing with
sensitive information hiding for secure cloud storage”, IEEE Transactions on Information
Forensics and Security, Vol.14, No.2, pp.331–346, 2019.
[6] R. Kaur, I. Chana and J. Bhattacharya, “Data deduplication techniques for efficient cloud
storage management: A systematic review”, The Journal of Supercomputing, Vol.74, No.5,
pp.2035–2085, 2018.
[7] Cisco, “Cisco global cloud index: Forecast and methodology, 2014–2019”, available at:
https://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/
white-paper-c11-738085.pdf, 2019-5-5.
[8] Cloudsfer, “Migrate & backup your files from any cloud to any cloud”, available at:
https://www.cloudsfer.com/, 2019-5-5.
[9] Y. Liu, S. Xiao, H. Wang, et al., “New provable data transfer from provable data possession
and deletion for secure cloud storage”, International Journal of Distributed Sensor Networks,
Vol.15, No.4, pp.1–12, 2019.
[10] Y. Wang, X. Tao, J. Ni, et al., “Data integrity checking with reliable data transfer for secure
cloud storage”, International Journal of Web and Grid Services, Vol.14, No.1, pp.106–121, 2018.
[11] Y. Luo, M. Xu, S. Fu, et al., “Enabling assured deletion in the cloud storage by overwriting”,
Proc. of the 4th ACM International Workshop on Security in Cloud Computing, Xi’an, China,
pp.17–23, 2016.
[12] C. Yang and X. Tao, “New publicly verifiable cloud data deletion scheme with efficient
tracking”, Proc. of the 2nd International Conference on Security with Intelligent Computing and
Big-data Services, Guilin, China, pp.359–372, 2018.
[13] Y. Tang, P.P Lee, J.C. Lui, et al., “Secure overlay cloud storage with access control and
assured deletion”, IEEE Transactions on Dependable and Secure Computing, Vol.9, No.6, pp.903–
916, 2012.
[14] Y. Tang, P.P.C. Lee, J.C.S. Lui, et al., “FADE: Secure overlay cloud storage with file assured
deletion”, Proc. of the 6th International Conference on Security and Privacy in Communication
Systems, Springer, pp.380–397, 2010.
[15] Z. Mo, Y. Qiao and S. Chen, “Two-party fine-grained assured deletion of outsourced data in
cloud systems”, Proc. of the 34th International Conference on Distributed Computing Systems,
Madrid, Spain, pp.308–317, 2014.
[16] M. Paul and A. Saxena, “Proof of erasability for ensuring comprehensive data deletion in
cloud computing”, Proc. of the International Conference on Network Security and Applications,
Chennai, India, pp.340–348, 2010.
[17] A. Rahumed, H.C.H. Chen, Y. Tang, et al., “A secure cloud backup system with assured
deletion and version control”, Proc. of the 40th International Conference on Parallel Processing
Workshops, Taipei City, Taiwan, pp.160–167, 2011.
[18] B. Hall and M. Govindarasu, “An assured deletion technique for cloud-based IoT”, Proc. of
the 27th International Conference on Computer Communication and Networks, Hangzhou, China,
pp.1–8, 2018.
[19] L. Xue, Y. Yu, Y. Li, et al., “Efficient attribute based encryption with attribute revocation for
assured data deletion”, Information Sciences, Vol.479, pp.640–650, 2019.
[20] L. Du, Z. Zhang, S. Tan, et al., “An Associated Deletion Scheme for Multi-copy in Cloud
Storage”, Proc. of the 18th International Conference on Algorithms and Architectures for Parallel
Processing, Guangzhou, China, pp.511–526, 2018.
[21] C. Yang, X. Chen and Y. Xiang, “Blockchain-based publicly verifiable data deletion scheme
for cloud storage”, Journal of Network and Computer Applications, Vol.103, pp.185–193, 2018.
[22] Y. Yu, J. Ni, W. Wu, et al., “Provable data possession supporting secure data transfer for
cloud storage”, Proc. of the 10th International Conference on Broadband and Wireless
Computing, Communication and Applications (BWCCA 2015), Krakow, Poland, pp.38–42, 2015.
[23] J. Ni, X. Lin, K. Zhang, et al., “Secure outsourced data transfer with integrity verification in
cloud storage”, Proc. of 2016 IEEE/CIC International Conference on Communications in China,
Chengdu, China, pp.1–6, 2016.
[24] L. Xue, J. Ni, Y. Li, et al., “Provable data transfer from provable data possession and deletion
in cloud storage”, Computer Standards & Interfaces, Vol.54, pp.46–54, 2017.
[25] Y. Liu, X. Wang, Y. Cao, et al., “Improved provable data transfer from provable data
possession and deletion in cloud storage”, Proc. of the Conference on Intelligent Networking and
Collaborative Systems, Bratislava, Slovakia, pp.445–452, 2018.
