A SEARCHABLE AND VERIFIABLE DATA PROTECTION SCHEME FOR SCHOLARLY BIG DATA

1. INTRODUCTION
The evolution of civilization depends on improvements in science and technology. Everything that scientists study affects people in some way, in every area of their lives. As time goes on and new areas of study open up, there is a growing body of work representing the fruits of many disciplines. The growth of researchers' collaboration networks, the variety and size of their readership, and the number of fields in which they publish, among other factors, all contribute to the expansion of scholarly data.

Scientific improvements in the decades and centuries to come are closely related to this growth. Because of the tremendous growth in the study of scientific theories [3], [4], academic big data has emerged. Information about researchers, such as names and email addresses, as well as their publications, experimental datasets, and findings, are all examples of "big data" in the academic world [4], [5]. Other types of data that may fall into this category include information about the authors' social relationships, copyright, and right of authorship. This information is both intricate and crucial.

If a coauthor is able to alter a scholarly work without authorisation, the author's credibility may suffer. Researchers' results may also be tampered with if malicious users upload data to the system under the identities of legitimate users. In addition, users' private information or the authors' most recent ideas could be exposed through their search terms on academic websites. A reader's health condition may be revealed, for instance, if he or she searches for research on a particular illness. As a result, academic big data are a boon to the research community as a whole, but also a vital lifeline to the people who rely on them. Consequently, it is essential to create novel methods for their study, creation, and preservation [6], [7], [8], [9]. In recent years, cloud computing has become increasingly popular as a distributed computing solution [10], [11], [12], [13], [14].

The power of cloud computing technology and the volume of data that can be stored in the cloud have attracted a great deal of attention, and they enable a wide range of applications. Numerous studies have provided the theoretical and practical groundwork necessary to put cloud computing into practise.

Inevitably, the expansion and refinement of cloud computing infrastructure opens the door to the archiving of the massive datasets used in academic research. Research involving large amounts of data can benefit greatly from the infrastructure and technical backing that cloud computing can provide. Data stored in the cloud, for instance, can be organised in ways that make it simple to perform operations like searching, updating, and erasing. Furthermore, the cloud's computing power relieves users of heavy local processing. Advocating for cloud computing, however, requires effective methods of protecting user data and privacy. Numerous researchers have put forth highly effective schemes for protecting cloud-based data, and each of these methods takes a unique approach to securing data in the cloud. Safe data storage, quick search capabilities, and independent verification of data integrity are all services that can be accessed via the cloud for academic big data [33]. Therefore, a cloud-based system for storing scholarly data that is both safe and efficient is needed.

The motivation for this paper is the lack of a dedicated scheme for protecting academic big data, despite the clear need for such protection. To maximise the benefits of scientific research, it is important to design an appropriate protection scheme that takes into account the unique characteristics of scholarly big data. To begin, academic big data requires a system model tailored to its specific needs. Second, a suitable data structure is needed; combining the features of academic big data with a practical storage structure makes these datasets much more accessible. Last but not least, it is crucial that authentic data readers can perform encrypted searches and integrity checks on the returned results.

What we've brought to the table is a model of a user-differentiated, third-party-aided system. This paper presents a system model with the goal of accommodating a variety of user identities and technological implementation processes in order to fulfil a variety of application requirements related to scholarly big data. We classify our users as authors, editors, or readers, depending on their role in the creative process. Authors can upload papers, methods, experimental data, and protocols, as well as their bios. Editors can upload journals, editorial staff details, and calls for papers, special issues, and biographies. The system should allow valid readers to search scholarly data in the cloud using encrypted keywords and to verify its integrity. The system validates readers and assists them in conducting searches and verifying the data with the aid of an independent third party. The information is stored in a cube-based data structure.

We create a cube-based structure to house academic big data, making it easier to store and search. As an added bonus, this architecture can organise data into blocks and verify them independently. To secure sensitive information while making it accessible to researchers, a new searchable and verifiable data protection scheme is proposed. The scheme is novel in its implementation of data integrity verification to ensure the safety of search results.

That is to say, the scheme ensures the security of the search while not compromising the data being looked up. The scheme can also be expanded upon easily, and authors and editors can safeguard their own interests through the uploaded data. An innovative aspect of this paper is the verification procedure: because of it, the integrity of the data and the security of the search terms are guaranteed.
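To make the searchable side of this idea concrete, the following is only a loose illustration (not the paper's actual construction) of an inverted index that maps keyword tokens to the identifiers of matching data blocks; in practice the tokens would be keyed hashes or ciphertexts of the keywords rather than plaintext, and the class and method names (KeywordIndex, add, search) are assumptions made for the sketch.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Loose illustration of a keyword-to-block index supporting encrypted search.
// Keyword tokens stand in for keyed hashes or ciphertexts, not plaintext words.
public class KeywordIndex {

    private final Map<String, List<String>> index = new HashMap<>();

    // Record that a block (identified by blockId) matches a keyword token.
    public void add(String keywordToken, String blockId) {
        index.computeIfAbsent(keywordToken, k -> new ArrayList<>()).add(blockId);
    }

    // Return the identifiers of blocks matching a token (empty list if none).
    public List<String> search(String keywordToken) {
        return index.getOrDefault(keywordToken, new ArrayList<>());
    }
}

A reader would submit the token for a keyword, receive the matching block identifiers, fetch those blocks from the cloud, and then run the integrity check on the returned results as described above.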

2. LITERATURE SURVEY

2.1 Fair energy scheduling for vehicle-to-grid networks using adaptive dynamic programming

AUTHORS: S. Xie, W. Zhong, K. Xie, R. Yu, and Y. Zhang

ABSTRACT: Vehicle-to-grid technology is receiving massive funding from governments and companies around the world because of its potential to significantly ameliorate global energy and environmental crises. More and more people are switching to electric vehicles (EVs) every year because of the environmental benefits they provide. It is expected that the smart grid's reliability will be significantly impacted by the massive charge load brought on by high EV penetration. As a result, this paper proposes a fair energy scheduling system for EV charging and discharging. The scheduler uses vehicle-to-grid technology to manage EVs' electricity loads while maintaining distribution network equity. To ensure that electric vehicles with the greatest contributions receive sufficient charge energy, we propose a contribution-based fairness system to determine the value of each vehicle's contribution. When EVs discharge during peak load times, they can earn higher contribution values, whereas charging during that time incurs a significant drop in contribution value. We model the equitable allocation of energy resources as a Markov decision process with an infinite horizon. Using data from online network training, adaptive dynamic programming is used to optimise long-term fairness. As shown by the simulation results, the scheme can reduce and smooth out the distribution network's peak load. Equalising the full-charge time for all electric vehicles in a fair manner is another benefit of contribution-based fairness.

2.2 AlgorithmSeer: A System for Extracting and Searching for Algorithms in Scholarly Big Data

AUTHORS: S. Tuarob, S. Bhatia, P. Mitra, and C. L. Giles

ABSTRACT: Algorithms are frequently published in academic articles, especially in computational fields. New search and mining services would be made possible by extracting algorithms from this ever-growing collection of scholarly digital documents. CiteSeerX has recently looked into AlgorithmSeer, with the goal of creating a sizable algorithm database; it currently covers over 200,000 algorithms drawn from academic documents. In this paper, we propose a new approach that AlgorithmSeer can use to find algorithms in a wide variety of academic papers. A hybrid machine learning approach is suggested for identifying representations of algorithms, followed by strategies for extracting them. At last, a demo version of AlgorithmSeer is shown, one that is based on the Solr/Lucene open-source indexing and search system.

2.3 Compressive sensing of piezoelectric sensor response signal for phased array structural health monitoring

AUTHORS: Y. Sun and F. Gu

ABSTRACT: Sparse signal representation, observation matrix design, and signal reconstruction are the three phases that make up compressive sensing. After compression and sampling, part of the original signal's component information may be lost with the observation matrices currently in use. We therefore propose an adaptive observation approach for sparse sampling of ultrasonic wave signals. The observation vector is adapted so that the compressed and sampled sparse signal retains the essential component information, and the signal is then reconstructed using a highly accurate reconstruction algorithm. Last but not least, the method was validated experimentally on an aluminium plate. The proposed method, with its adaptive observation matrix, is able to reconstruct the signal more precisely and thoroughly while simultaneously decreasing the reconstruction error.

2.4 Direction density-based secure routing protocol for healthcare data in incompletely predictable networks

AUTHORS: J. Shen, C. Wang, C.-F. Lai, A. Wang, and H.-C. Chao

ABSTRACT: Information pertaining to healthcare is becoming increasingly vital to individuals' daily lives. Medical records can help the young keep tabs on their health and the elderly avoid sudden illnesses if used correctly and securely. Having a reliable sensor network in a hospital setting is therefore crucial. With patient flow in mind, we introduce a new network model to characterise such settings, in which there is a pattern to the patients' movements, or they are only active within a narrow window of time. In this paper, we propose a new routing protocol for achieving secure data delivery in such networks. Both the direction of motion and the impact of node group motion are taken into account. The new protocol creatively accounts for each node's moving direction, capitalising on the connections between the individuals as they travel. Furthermore, healthcare data communication is protected due to authenticated message transmission. The simulation results demonstrate that our protocol efficiently delivers packets with minimal overhead and round-trip delay.

2.5 Social big data based content dissemination in internet of vehicles

AUTHORS: Z. Zhou, C. Gao, C. Xu, Y. Zhang, S. Mumtaz, and J. Rodriguez

ABSTRACT: An emerging sector is the Internet of Vehicles (IoV), which, like the Internet of Things, operates with minimal or no human intervention. In this paper, we investigate the use of social and physical layer integration to achieve fast content dissemination in D2D-V2V-based IoV networks. The physical layer incorporates a Wiener process model for vehicle headway distance and a Kolmogorov equation estimate for D2D-V2V link connection probabilities. In the social layer, we use Bayesian nonparametric learning on real-world social big data culled from Sina Weibo, the most popular Chinese microblogging service, and Youku, the most popular Chinese video-sharing website, to model content selection similarities. To address the joint problem under varying QoS constraints, an iterative matching algorithm based on price increases is proposed. Last but not least, numerical results show that the proposed algorithm is superior to others in terms of gains in matching satisfaction.

3. SYSTEM ANALYSIS

3.1 EXISTING SYSTEM:

Orencik et al. introduce a searchable encryption scheme that protects user privacy. The scheme can also effectively score and rank results while concealing search patterns. Focusing on the issue of range queries, another work shows that, by employing symmetric key encryption, a highly effective range query service can be offered.

Another method allows keyword search without the need for a secure channel; the scheme has been proven to preserve data integrity and accessibility.

A scheme with multiple keyword search is presented by Ma et al. [37]. They demonstrate that it is both safe and has a manageable overhead in terms of communication.

To combat what are known as inside keyword guessing attacks, a further scheme has been proposed. Its authors demonstrate that, in their scheme, the server is unable to encrypt a keyword or conduct a successful inside keyword guessing attack.

Disadvantages

For updates, the system in the existing work leaks a lot of information and is not parallelizable.

In order to store academic big data in a way that allows for easy encrypted searching, a new data structure is needed, which the existing system does not provide.

3.2 PROPOSED SYSTEM:

In this paper, we present a user-differentiated system model that is supported by a credible third party. This system model is designed to accommodate a variety of user identities and technological implementation processes in order to fulfil a variety of application requirements related to scholarly big data. We classify our users as authors, editors, or readers, depending on their role in the creative process. Authors can upload papers, methods, experimental data, and protocols, as well as their bios. Editors can upload journals, editorial staff details, and calls for papers, special issues, and biographies. The system should allow valid readers to search scholarly data in the cloud using encrypted keywords and to verify its integrity. The system validates readers and assists them in conducting searches and verifying the data with the aid of an independent third party.

The data is stored in a cube-based structure designed to house academic big data, making it easier to store and search. Not only can this structure check the validity of user data independently, but it can also effectively categorise and store data blocks based on their keywords.

It is proposed to implement a data protection scheme that can be both searched and independently verified. The scheme is novel in its implementation of data integrity verification to ensure the safety of search results.

The scheme implements a secure search that does not compromise the integrity of the data being searched, and it can also be expanded upon easily. Creators and editors safeguard their own interests through the uploaded data. An innovative aspect of this paper is the verification procedure: because of it, the integrity of the data and the security of the search terms are guaranteed.
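As a rough illustration of the block-level integrity checking outlined above, the sketch below splits a file into fixed-size blocks and attaches a MAC to each one so that a verifier such as the TTP can recheck any individual block. It is only a minimal sketch, assuming HMAC-SHA256 as the MAC algorithm and a shared verification key; the class and method names (BlockIntegrity, macBlocks, verifyBlock) and the block size are illustrative, not the scheme's actual parameters.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch: split file content into blocks and attach an HMAC to each
// block, so that a party holding the key can later verify any single block.
public class BlockIntegrity {

    private static final String ALGO = "HmacSHA256"; // assumed MAC algorithm
    private static final int BLOCK_SIZE = 4 * 1024;  // illustrative block size

    // Compute one MAC per block of the file content.
    public static List<byte[]> macBlocks(byte[] file, byte[] key) throws Exception {
        Mac mac = Mac.getInstance(ALGO);
        mac.init(new SecretKeySpec(key, ALGO));
        List<byte[]> macs = new ArrayList<>();
        for (int off = 0; off < file.length; off += BLOCK_SIZE) {
            int end = Math.min(off + BLOCK_SIZE, file.length);
            macs.add(mac.doFinal(Arrays.copyOfRange(file, off, end)));
        }
        return macs;
    }

    // Recompute the MAC of one block and compare it with the stored value.
    public static boolean verifyBlock(byte[] block, byte[] storedMac, byte[] key) throws Exception {
        Mac mac = Mac.getInstance(ALGO);
        mac.init(new SecretKeySpec(key, ALGO));
        return Arrays.equals(mac.doFinal(block), storedMac);
    }
}

This mirrors the Block-1/MAC-1, Block-2/MAC-2 pairing that appears in the class diagram later in this report: each returned block can be checked on its own, so a corrupted block is detected without re-downloading the whole file.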

Advantages

Because a dedicated data structure was designed for scholarly data, the system is more efficient and the results of scientific research are better utilised.

The system model is more secure and better suited to the academic big data application environment.

3.3 SYSTEM REQUIREMENTS:


The application's functional requirements are described in detail here. There are individual modules that make up the SRS, and each module has its own set of requirements. The following pieces of equipment and computer programmes will be needed to complete the project.

HARDWARE REQUIREMENTS:

3.4 SYSTEM STUDY

FEASIBILITY STUDY

During this stage, the project's viability is evaluated, and a business proposal outlining the project's broad strokes and some preliminary cost estimates is presented. The proposed system's viability is to be investigated during the system analysis phase. This is done so the system won't end up costing the business more money than it's worth. Having a firm grasp on the system's primary needs is crucial for conducting a realistic feasibility study.
There are three main factors to consider during the feasibility study:

• ECONOMICAL FEASIBILITY

• TECHNICAL FEASIBILITY

• SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

The purpose of this analysis is to determine how much money the system will cost the company. Company resources for developing the system's infrastructure are limited, so budgetary outlays have to make sense. The developed system is therefore both effective and cost-effective: the majority of the technologies used are freely available open source, and only the customised components had to be purchased.

TECHNICAL FEASIBILITY

The goal of this analysis is to ensure the system can be implemented successfully by verifying its technical feasibility. Any new system must not place an excessive load on the available technical resources; otherwise it will lead to high demands on the infrastructure and, in turn, on the client. The developed system should therefore have modest requirements, so that implementing it requires few or no changes.

SOCIAL FEASIBILITY

The goal of this analysis is to determine how well the system is received by its target audience. User training is a part of this process. The user should feel safe using the system and should not view it as a threat. There is a direct correlation between the efforts put forth to familiarise and inform users and the degree to which the system is adopted by the user population at large. End users need to feel at ease with the system before they can offer the constructive criticism that is always appreciated.

[System architecture diagram: interactions among the Author, Reader, Domain Manager, TTP, and Cloud Server — key generation, finding attackers, domain management (add domains, view all domains, view verified blocks), reader operations (register and login, search file, request file, view file response), and TTP operations (login, view metadata, verify all blocks, send block verification).]
4.2 FLOW CHART DIAGRAM

The steps of a process can be visually represented in a flowchart. It's a versatile,


general-purpose instrument that can be reworked to describe any number of
processes, from those involved in making a product to those involved in providing
a service or executing a project.

4.3 DATA FLOW DIAGRAM:

A data flow diagram (DFD) is also known as a bubble chart. It is a straightforward graphical formalism for depicting the data that goes into a system, the operations performed on that data, and the information that comes out of it.

The DFD is also a crucial modelling tool. It is used to model the system's components: the processes of the system, the data used by those processes, the external entities that interact with the system, and the information flows within the system.

The DFD shows the information's journey and the transformations it undergoes along the way. It is a visual representation of the transformations applied to data as it moves from input to output.

A DFD, also known as a bubble chart, can represent a system at any level of abstraction, and it can be partitioned into levels that correspond to progressively more detailed information flows and functional details.

[Data flow diagram: users register and log in; the Cloud Server (CS) views readers, authors, all author files, file requests, and attackers, forwards requests to the broker, and returns responses; the Reader searches for files, requests files, and views file responses; the CS also views blocked users, domains, time delay results, and throughput results; the TTP logs in, views metadata, verifies all blocks, and sends block verification; the Domain Manager logs in, adds domains, views all domains, and views verified blocks.]
4.4 UML DIAGRAMS

UML stands for Unified Modeling Language. Commonly used in object-oriented software development, UML is a standardised general-purpose modelling language managed and developed by the Object Management Group (OMG). The goal is for UML to become a standard language for modelling object-oriented software. Currently, UML consists of a notation and a meta-model; future iterations may include, or at least be accompanied by, a focus on methods and processes. UML is a standardised language for specifying, visualising, constructing, and documenting the artefacts of a software system, as well as for business modelling.
The Unified Modeling Language (UML) is a compilation of engineering best practises that have been shown to work effectively when modelling large and complex systems.
The UML plays a crucial role in the object-oriented software life cycle. With the UML, software project designs are typically expressed through diagrams.
GOALS:
When creating the UML, the following were the main priorities:
1. Provide users with a ready-to-use, expressive visual modelling language so they can develop and exchange meaningful models.
2. Provide mechanisms for specialisation and extension of the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modelling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practises.

USE CASE DIAGRAM:

A use case diagram in the Unified Modeling Language (UML) is a specific kind of behavioural diagram defined by and created from a use-case analysis. Its goal is to provide a visual representation of a system's capabilities in terms of actors and their use cases. An important goal of any use case diagram is to clearly depict which actors are served by which parts of the system; the system's actors and their respective roles can thus be visualised.

[Use case diagram: the Author uploads files, verifies blocks, updates blocks, deletes files, and views files and verification; the Domain Manager adds domains, views all domains, and views verified blocks; the Cloud Server views readers, authors, all author files, file requests, attackers, blocked users, domains, time delay results, and throughput results; the Reader searches for files, requests files, and views file responses; the TTP views metadata, verifies all blocks, and sends block verification.]
CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling Language (UML) is a static structure diagram that depicts the system's classes, their members and methods, and the relationships between the classes. It clarifies which class holds which data.

[Class diagram: the Author and Reader classes expose methods such as register and login, search file, request file, and view file response; the CS (Cloud Server) class exposes view readers, view authors, view all author files, view file requests, view attackers, view blocked users, view domains, view time delay results, and view throughput results; the Domain Manager class exposes login, add domains, view all domains, and view verified blocks; the TTP class exposes login, view metadata, verify all blocks, and send block verification. Each class carries members such as file name, file verification permission, and blocks 1–4 with their corresponding MACs.]
SEQUENCE DIAGRAM:

UML sequence diagrams are a type of interaction diagram that outlines the sequential execution of various processes and their dependencies. A sequence diagram is a kind of message sequence chart (MSC). Sequence diagrams are also known by other names, such as event diagrams or event scenarios.

[Sequence diagram: the Author registers and logs in, uploads files, verifies blocks, updates blocks, deletes files, and views files and verification; the Reader registers and logs in, searches for files, requests files, and views file responses; the Cloud Server views readers, authors, all author files, file requests, attackers, blocked users, domains, time delay results, and throughput results; the Domain Manager logs in, adds domains, views all domains, and views verified blocks; the TTP logs in, views metadata, verifies all blocks, and sends block verification.]
5. SYSTEM DESIGN
The system design process involves handing off a user-focused document to computer
programmers and database administrators. The design is the method that is used to create a new
system. There are a few different stages to this. It explains everything you need to know and
walks you through each step of the process so you can put the system the feasibility report
suggested into action. There are two main phases of design: the logical and the physical. The
logical design phase involves reviewing the existing physical system, developing input/output
specifications, an implementation plan, and a walkthrough of the design.

Through an examination of the various system functions, the necessary database tables are fashioned, along with the field formats that will be used. Each database table should have clearly defined fields that explain their function; redundant fields should be avoided, since they take up valuable disc space. Afterwards, the design of screens should be intuitive, and clear and concise menu options are essential.

SOFTWARE DESIGN

The following guidelines were used in developing the software:

1. Modularity and partitioning: software is built with modularity and partitioning in mind, so each system should be made up of a hierarchical set of modules that serve to divide up different tasks.

2. Coupling: modules should rely on one another sparingly.

3. Cohesion: each module or code unit should perform a single, cohesive task.

4. Shared use: a single module that can be called by different programmes to perform a common task is more efficient than multiple modules that each do the same thing.

MODULE DESIGN:

The project's primary components are


• Author

Here, the author encrypts their files before storing them on the Cloud server for safety reasons. The author can take the following actions to manipulate the encrypted data file: file upload, block verification, block update, file deletion, file viewing, and block verification viewing.

• Cloud Server

The function of the Cloud server is to facilitate data storage. To protect their data, data producers encrypt their files before uploading them to the server. Data consumers access the data by requesting the encrypted files, and the corresponding key, from the server. The Cloud server performs the following actions: view readers, view authors, view all author files, view file requests, view attackers, view blocked users, view domains, view time delay results, and view throughput results.

• Reader

Unless the reader knows the secret key, they will not be able to access the files in this module. Readers can perform actions such as registering and logging in, searching for files, requesting files, and viewing file responses.

• TTP

The TTP carries out the following tasks in this section: view metadata, verify all blocks, and send block verification.

• Domain Manager

The Domain Manager executes the following tasks in this section: add domains, view all domains, and view verified blocks.

INPUT/OUTPUT DESIGN

All sensitive data is encrypted by the author before being uploaded to the Cloud server. The author can perform the following operations on the encrypted data: upload files, verify blocks, update blocks, delete files, view files, and view block verifications.

Cloud Server

The Cloud server's purpose is to provide a convenient place to keep information safe and accessible. Information providers safeguard their files with encryption before submitting them. To obtain encrypted files, users simply request them from the server. The server handles views of readers, authors, all author files, file requests, blocked users, domains, time delay results, and throughput results, and it locks out users identified as attackers.

Reader

No one can access the module's protected information without the secret key. A
user can do things like sign up and log in, search for files, request files, and view
responses to requests.

• TTP

The TTP is responsible for the following activities: view metadata, verify all blocks, and send block verification.

Domain Manager

In this subsection the Domain Manager's duties are covered: domains can be added, all domains viewed, and verified blocks viewed.

6. SOFTWARE DEVELOPMENT TOOLS

HTML

HTML is a markup language used to create web pages; its tags indicate where elements begin, how they are aligned, and much more. In our project, HTML tags are used to build the user-interface pages.

Table:

Web page authors love tables because they allow for dependable, browser-independent layout of
a page's constituent parts. Tables are commonly used by web page authors for organising content
on their pages.

TR:

A table row (<TR>) consists of <TH> and/or <TD> elements. The <TR> tag supports a number of attributes, including the following:

If you use the ALIGN option, you can control how the text in a table row is aligned horizontally.

You can change the row's background colour by using the BGCOLOR parameter.

A row's external border colour can be modified with the BORDERCOLOR option.

With VALIGN, you can align the contents of this row vertically.

TH:

Table header cells can be made with <TH>.

To align the cell's contents horizontally, use the ALIGN attribute; valid values are LEFT, CENTER, and RIGHT.

For each table cell, you can specify a background image using the BACKGROUND attribute.

A table cell's background colour can be set with the BGCOLOR parameter.

When you use the VALIGN attribute, you can adjust the data's vertical positioning; valid values are TOP, MIDDLE, BOTTOM, and BASELINE.

The cell's width, indicated by the parameter WIDTH, can be customised. You can specify a fixed width in pixels or as a percentage of the screen width.

TD:

Table data cells are created with <TD>; the data is placed into the table cells.

Table cells can be aligned horizontally using the ALIGN attribute, with LEFT, CENTER, or RIGHT settings.

For each table cell, a background image can be set with the BACKGROUND attribute.

The cell's background colour can be changed with the BGCOLOR setting.

The cell's width, or "WIDTH," is one of its defining characteristics.

Frames:

The <FRAMESET> tag is used to configure frames, which display separately scrollable sub-sections of content in the browser window. As you work with <FRAMESET>, keep in mind the following two considerations:

• A document's <FRAMESET> element functions similarly to the <BODY> element, but it replaces the latter.

• Frame sizes are given either in actual pixels or as percentages.

Individual frames are made with <FRAME> elements.

Using the COLS attribute of the <FRAMESET> element, we can divide the browser window vertically into two frames.

Syntax for vertical splitting:

To split the window into two 50% columns, type: <FRAMESET COLS="50%, 50%">

</FRAMESET>

In a similar vein, if we switch out COLS for ROWS, we get horizontal division

The grammar for horizontal splitting is as follows:

<FRAMESET ROWS="50%, 50%">

</FRAMESET>

Form:

FORM is used to create a form in HTML, a container for other HTML elements like text fields
and buttons.

Attribute:

The ACTION attribute specifies the URL that will be used to process the information submitted in the form.

If you're using this form in code, you'll want to give it a descriptive name, so that you can easily
find it later.

To send information to the designated action URL, you must specify a METHOD, which can be GET or POST. All name/value pair information submitted via a form is typically sent with the GET method, because it is the default, and is appended to the URL. With the POST method, the form's data is encoded in the same way as with GET, but it is sent in the body of the request instead.

Controls in HTML:
To place a button on a form, use the <INPUT TYPE="BUTTON"> tag.

ATTRIBUTES:

The element's name is set with the NAME attribute; it takes an alphanumeric value.

The size is determined by the SIZE setting.

• VALUE: defines the element's caption.

Password input type:

Makes a typed password input box available.

ATTRIBUTES:

• NAME: assigns a name to the element using alphabetic and numeric characters.

• VALUE: defines the element's default content.

<INPUT TYPE="RADIO">

Places a radio button on the form.

ATTRIBUTES:

• NAME: sets the element's name; it takes an alphanumeric value.

• VALUE: determines the element's default content.

Submit button: <INPUT TYPE="SUBMIT"> produces a button that submits the form and sends its contents back to the server.

ATTRIBUTES:

• NAME: gives the element a name; it takes an alphanumeric value.

• VALUE: changes the button's label from the default "Submit Query" to something else; it takes an alphanumeric value.

For text-based input, use the <INPUT TYPE="TEXT"> tag.

Opens a new editable text box for the user

ATTRIBUTES:

• NAME: labels the element; it takes an alphanumeric value.

• VALUE: stores the text that initially appears in the field; it takes an alphanumeric value.

Java script Methods:

writeln: The document.writeln() function allows you to insert text into the currently loaded web page.

onClick:

Happens when a button is pressed.

onLoad:

What happens when a page is loaded.

onMouseDown:

Happens whenever a mouse button is clicked.

onMouseMove:

Takes place every time the mouse is moved.

OnUnload:

Occurs when a page is unloaded.

MySQL:

MySQL is a crucial part of the LAMP open-source enterprise stack. LAMP is a Linux-based web development platform that consists of the Apache web server, MySQL as the relational database management system, and the PHP scripting language.

Advantages of MySQL:

WordPress, Drupal, Joomla, Facebook, and Twitter all utilise MySQL as their database
management system due to the technology's stellar reputation for data security.

With MySQL's unique storage-engine design, system administrators may optimise the database
server's settings for peak performance.

Round-the-clock availability: MySQL's high-availability options, such as replication and cluster architectures, ensure that the database is always accessible.

The Open Source Advantage: MySQL's 24/7 assistance and business indemnity put to rest all the
concerns that may emerge when using an open-source solution. MySQL's reliable software and
dependable processing security together make it a great choice for high-volume transactions. It
improves the end-user experience without adding unnecessary complexity to maintenance,
troubleshooting, or updates.

JDBC Drivers:

The JDBC API mostly specifies interfaces, rather than concrete classes, through which database operations such as connecting to and disconnecting from databases, running SQL queries, and obtaining the results are performed. We code against these interfaces; the concrete implementations are provided by classes supplied either by the resource manager vendor or by a third party. These implementations are collectively called a "JDBC driver." The job of a JDBC driver is to translate ordinary JDBC calls into the API calls required by an external resource manager. The diagram below shows how a Java database client connects to a resource manager through a JDBC driver.

Types of JDBC drivers:

It is possible to generally categorise JDBC drivers into four kinds, depending on the underlying
implementation technique.

TYPE1:

Type1 JDBC drivers layer the JDBC API on top of a lower-level API such as ODBC. Due to their reliance on native system libraries, such drivers are often not portable. They translate JDBC calls into corresponding ODBC calls, which the ODBC native library then uses to communicate with the external data source. One example is the JDBC-ODBC bridge driver included in the J2SE software package.

TYPE2:

Drivers of the Type2 kind are often a mix of Java and the local language. In order to connect to
the data source, Type2 drivers use the vendor-specific native APIs. They use the vendor's native
library to convert the JDBC calls into the vendor's own format.

The reliance on native code makes these drivers less portable than type1 drivers.

TYPE3:

When contacting the external data sources, Type3 drivers go via a middleware server. The
requests sent do not rely on any particular database. On the other hand, the middleware server
utilises the vendor-specific native methods to connect to the database. Here, Java is used only for
the driver's development.

TYPE4:

The JDBC interfaces are implemented by Type4 drivers, which are written entirely in
Java and are responsible for translating client requests for access to the database into language-
and vendor-specific API calls. They provide the protocol for transferring data and connecting
networks that the desired resource management requires. The majority of popular database server
providers support type4 drivers.

Driver manager and Driver:

The java.sql package defines the java.sql.Driver interface, which every JDBC driver is required to implement, and the java.sql.DriverManager class. Database clients use the DriverManager to obtain connections and to configure log streams. When a JDBC client asks the DriverManager for a connection to a particular resource manager, the DriverManager selects a registered driver class, supplied by that resource manager's vendor, to create the connection.

JAVA.SQL.DRIVERMANAGER:

The DriverManager class's main responsibility is to maintain the registry of JDBC drivers. It also includes methods for:

• establishing connections to data sources,

• managing JDBC logging, and

• setting the login timeout.

Managing drivers:

When making a connection request, JDBC clients are required to provide a JDBC URL. If the request URL is found to match a driver in the list of registered drivers, the driver manager will pass the connection request through to that driver. The standard format for a JDBC URL is as follows:

jdbc:<subprotocol>:<subname>

Managing connections:

To manage database connections, the DriverManager class is used:

public static Connection getConnection(String url, Properties info) throws SQLException

The given JDBC URL and connection properties (typically username and password) are used to establish a connection to the database. If a problem occurs when trying to access the database, this method throws an SQLException.

Connections:

Connecting to a database and keeping that connection open is described by the


java.sql.Connection interface. This interface is implemented by the company that provides the
JDBC driver. A database client that claims to be "vendor-agnostic" will only ever interact with
the interface, never the implementation class. The following operations are defined as methods in
this interface:

• Creating the various kinds of statements that JDBC clients use to send SQL commands to the underlying database.

• Getting and setting auto-commit mode.

• Retrieving database metadata.

• Committing or rolling back transactions.

Creating statements:

Statements may be constructed using the methods defined in the java.sql.Connection interface. Statements are used to issue queries against the database:

public Statement createStatement() throws SQLException

This method creates new java.sql.Statement objects, which are used to send SQL statements without parameters to the backend. If a problem occurs when trying to access the database, this method throws an SQLException.

An overloaded form, createStatement(int resultSetType, int resultSetConcurrency), can also be used and likewise throws SQLException.

JDBC resultset:

Executing SQL SELECT commands on a database using JDBC results in a two-dimensional


array of data called a JDBC resultset. The java.sql.ResultSet interface is used to represent JDBC
result sets. The implementation class of this interface is provided by the JDBC vendor supplier.

Scrolling resultset:

The cursor can be moved through the result set using the following methods, each of which throws SQLException:

public boolean next() throws SQLException

public boolean previous() throws SQLException

public boolean first() throws SQLException

public boolean last() throws SQLException

Statement:

The java.sql.Statement interface is commonly used for sending SQL statements without IN or OUT parameters; its implementation class is supplied by the JDBC driver vendor. This interface defines the methods common to the various JDBC statement types. The java.sql.Statement interface offers a set of methods that may be roughly classified as follows:

• Executing SQL statements

• Querying the database and retrieving result sets

• Executing SQL statements in batches

• Miscellaneous methods

The java.sql.Statement interface specifies the methods through which a variety of SQL queries may be carried out, for example:

public ResultSet executeQuery(String sql) throws SQLException

The diagram below illustrates the interrelationship between the Connection, Statement, and ResultSet classes.
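To make the flow concrete, here is a minimal sketch of how these pieces fit together in plain JDBC: a connection is obtained from the DriverManager, a Statement issues a query, and the ResultSet is iterated. The JDBC URL, database name, table, and credentials are placeholders for illustration only, not the project's actual configuration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; a real deployment supplies its own.
        String url = "jdbc:mysql://localhost:3306/scholardb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT file_name FROM files")) {
            while (rs.next()) {                       // advance the cursor row by row
                System.out.println(rs.getString(1));  // read the first column of the row
            }
        }
    }
}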

JAVA SERVER PAGES (JSP):

Introduction:

Using Java Server Pages (JSP), you may combine static HTML with dynamically generated content. The static HTML is written in the usual way, using standard Web page creation tools. The code for the dynamic sections is then enclosed in special tags that begin with <% and conclude with %>.

The need of JSP:

Servlets have their uses, and JSP does not render them unnecessary. However, with servlets:

• The HTML is difficult to develop and maintain.

• Typical HTML editing software cannot be used.

• Non-Java developers cannot work on the HTML.

Benefits of JSP:

The advantages of JSP over plain servlets are as follows:

• HTML is simpler to create and update. There are no stray instances of Java syntax,
backslashes, or double quotes here.

Common Web design applications may be used:

For the majority of our JSP pages, we use Macromedia Dreamweaver. It is possible to utilise
HTML tools that are not JSP-aware, since the JSP tags are disregarded by these programmes.

Your development team may be split up into:

Developers proficient in Java can work on the dynamic code, while Web designers can work on the presentation layer. Such a division of labour is crucial for large-scale projects. You may impose a looser or tighter partition between the static HTML and the dynamic content depending on the complexity of your project.

Creating template text:

Template text, which is static, makes up a significant portion of our JSP file. In practically
every way, this HTML is identical to standard HTML; it uses the same syntax, and it is
"passed through" to the client via the servlet that was developed to process the page. The
HTML is not only easily recognisable, but it can be generated with your existing Web
development software.

In two cases, slight adjustments have to be made to the otherwise universal "template text is passed through" principle. First, to have <% or %> appear literally in the output, you must escape it in the template (for example as <\%). Second, use <%-- JSP Comment --%> to have a comment appear only in the JSP page and not in the final document.

HTML comments:

Ordinary HTML comments of the form <!-- comment --> are generally sent without modification to the client and appear in the generated page.

Types of JSP scripting elements:

You may include Java code in the servlet created from a JSP page by using the scripting elements provided by JSP. There are three distinct types:

1. Expressions of the form <%= Java Expression %>, which are evaluated and inserted into the servlet's output.

2. Scriptlets of the form <% Java code %>, which are placed inside the servlet's _jspService method (called by service).

3. Declarations of the form <%! Field/Method Declaration %>, which are placed in the main body of the servlet class, outside of any existing methods.

Using JSP Expressions:

A JSP expression is used to insert values directly into the output. It has the following form:

<%= Java Expression %>

After being evaluated, the expression is converted to a string and embedded in the document. Since this evaluation is done at request time, it has access to all of the request details. In the following example, the date and time of the page request are shown:

Current time: <%= new java.util.Date() %>

Predefined variables:

Several built-in variables ("implicit objects") are available for use in expressions. They are local variables of _jspService whose names are specified by the system. The two most crucial are request, the HttpServletRequest, and response, the HttpServletResponse.

session is the HttpSession associated with the request.

out is the Writer used to send output to the client.

application is the ServletContext; the web application's servlets and JSP pages may all use this shared data structure to store common information.

For example:

Your hostname: <%= request.getRemoteHost() %>

Comparing servlets to JSP pages:

When an HTML page's structure is known in advance but its values at various locations need to be calculated on the fly, JSP shines. If the page's structure changes often, JSP is less helpful and servlets may be the better choice; servlets are the best option when the output consists mostly of dynamically generated content. In certain cases a single technology, whether servlets or JSP, is insufficient, and a hybrid approach is required.

Writing scriptlets:

Scriptlets are used when you need to do more than insert the result of a simple expression. JSP scriptlets let you insert custom code into the servlet's _jspService method. A scriptlet has the following form:

<% Java Code %>

Expressions and scriptlets share the same library of predefined variables (request, response,
session, out, etc.).

Use the out variable to deliver the result page's output, like in the following example.

<%
String queryData = request.getQueryString();
out.println("Attached GET data: " + queryData);
%>

Scriptlet Examples:

Consider a JSP page that uses the bgColor request parameter to set the page's background colour; this is slightly too complicated to express safely in a single JSP expression. Simply writing <BODY BGCOLOR="<%= request.getParameter("bgColor") %>"> would break a fundamental principle of reading form data, because it does not check for a missing or malformed parameter.

Using declarations:

By using a JSP declaration, you may define methods and fields that will be added to the main body of the servlet class. A declaration has the following form:

<%! Field or Method Definition %>

Declarations do not generate any output themselves, so they are normally used in combination with JSP expressions or scriptlets. Definitions of fields (instance variables), methods, inner classes, and static initializer blocks are all valid candidates for inclusion in JSP declarations; most commonly, a declaration defines a field or a method.

It is not a good idea to use JSP declarations to override the standard servlet life-cycle methods; the JSP-to-servlet translation process already makes use of these methods. Calls to service are automatically dispatched to _jspService, so declarations should not attempt to redefine service, doGet, or doPost. However, we may use jspInit and jspDestroy for initialization and cleanup, since the standard init and destroy methods in JSP-derived servlets are guaranteed to call them.

Jakarta Tomcat:

Tomcat is a Servlet/JSP container compliant with the Servlet 2.4 specification. It is a great platform for building and deploying web applications and services because of the additional capabilities it offers.
Terminology:

Context – a Context is a web application.

$CATALINA_HOME – the environment variable that points to the directory where Tomcat was installed.

Files and directories:

The /bin directory stores startup and shutdown scripts. The *.sh files (for Unix systems) are functional equivalents of the *.bat files (for Windows systems); the supplementary files make up for the shortcomings of the Win32 command line.

Directory /conf stores DTDs and configuration files. The server.xml file is crucial to this package. It's the
container's primary configuration file.

Logs are stored in the /logs directory by default.

Web applications live in the /webapps directory.

Installation:

Tomcat can run in any environment that includes Java Development Kit (JDK) 1.2 or later; all installations require the JDK.

Deployment locations for the default web application:

HTML and JSP Files

• Main location: $CATALINA_HOME/webapps/ROOT

• Corresponding URLs:


http://host/SomeFile.html
http://host/SomeFile.jsp

• More specific location (arbitrary subdirectory): $CATALINA_HOME/webapps/ROOT/SomeDirectory

• Corresponding URLs:

http://host/SomeDirectory/SomeFile.html
http://host/SomeDirectory/SomeFile.jsp

Individual Servlet and Utility Class Files

• Main location (classes without packages): $CATALINA_HOME/webapps/ROOT/WEB-INF/classes

• Corresponding URL (servlets): http://host/servlet/ServletName

• More specific location (classes in packages): $CATALINA_HOME/webapps/ROOT/WEB-INF/classes/packageName

• Corresponding URL (servlets in packages): http://host/servlet/packageName.ServletName

Servlet and Utility Class Files Bundled in JAR Files

• Location: $CATALINA_HOME/webapps/ROOT/WEB-INF/lib

• Corresponding URLs (servlets):

http://host/servlet/ServletName

http://host/servlet/packageName.ServletName

XAMPP:

XAMPP stands for Cross-Platform (X), Apache (A), MariaDB (M), PHP (P), and Perl (P).

1. It is basically a local web server package: if we want to build a functioning website, XAMPP is important, and it operates across platforms.

2. It includes the Apache, MySQL, and FileZilla servers, which can be started and stopped as needed; this facilitates local development, and features such as cookies work while browsing the pages.

3. It also offers a WordPress module, which allows us to create a website using a pre-existing theme rather than having to code everything in PHP, HTML, CSS, etc.

4. How to use it: if we are using MySQL, we first open the database management system's administrative interface (phpMyAdmin); to work on PHP-based web pages, we simply place and edit the PHP pages.

7. SOFTWARE TESTING

What do you mean by software testing?

The process of testing is running a programme or system under carefully


regulated settings so that its behaviour can be observed and analysed. Both typical and extreme circumstances should be replicated. During testing, testers deliberately try to make things go wrong, to determine whether things happen when they shouldn't or fail to happen when they should. The emphasis is on "detection."

7.1 Unit Testing:

The term "unit testing" refers to a stage in the software development lifecycle
pieces of an application are examined separately from one another. Though it is
common to use automation tools for unit testing, manual testing is still an option.
Extreme Programming (XP) includes this technique of testing as part of its
methodical, detailed, and iterative approach to creating software products via a
cycle of prototyping, testing, and modification.

The viewpoint of the programmer is used while creating unit tests. They guarantee
that certain conditions are satisfied when a class's method is called. Each test
verifies that the specified output is obtained from the specified input.
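As a hedged illustration of this style of test, the sketch below exercises a single hypothetical validation method with JUnit 4; the class under test (LoginValidator) and its rule (non-empty username, password of at least six characters) are assumptions invented for the example, not part of the project's actual code.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Hypothetical validator used only to illustrate the shape of a unit test.
class LoginValidator {
    static boolean isValid(String user, String password) {
        return user != null && !user.isEmpty()
                && password != null && password.length() >= 6;
    }
}

public class LoginValidatorTest {

    @Test
    public void acceptsWellFormedCredentials() {
        // The specified input yields the specified output.
        assertTrue(LoginValidator.isValid("author1", "secret123"));
    }

    @Test
    public void rejectsShortPassword() {
        assertFalse(LoginValidator.isValid("author1", "123"));
    }
}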

7.2 Integration Testing:

During the software development process known as integration testing (sometimes called integration and testing, or I&T), separate parts of the programme are brought together and tested in a variety of different configurations. Units, in this context, refer to the
smallest observable subset of an application. Prior to causing issues in production,
integration testing might reveal issues with the interfaces between software
components. Extreme Programming (XP) is a practical approach to software
development that emphasises testing and iteration at every stage of the creation
process, including the integration of new features.

7.3 Test cases

Test case 1:

Test case for Login form:

Test case 2:

Test case for User Registration form:

Test case 3:

Test case for Change Password:

If a user attempts to log in with their outdated password instead of the current one, an error message is shown instructing them to use the new password.

Test case 4:

Test case for Forget Password:

A user's login name, postal code, and mobile phone number are required when
they can't remember their password. When they are compared to the user's
existing credentials, the original password is returned if a match is found.

8. SCREENSHOTS

9. CONCLUSION

Here, we develop a system model that categorises users in accordance with their respective roles and the specific needs of academic big data. Moreover, a novel cube-based data structure is designed. We provide a unique data protection approach for academic big data, based on the proposed system model and data structure, which allows data to be both searchable and verifiable. Analyses of security and performance demonstrate the effectiveness of our scheme for academic big data. In the near future, we want to enhance our existing scheme with the creation of a safe data exchange system for academic big data.

10. REFERENCES
[1] S. Xie, W. Zhong, K. Xie, R. Yu, and Y. Zhang, “Fair energy scheduling for
vehicle-to-grid networks using adaptive dynamic programming,” IEEE
Transactions on Neural Networks & Learning Systems, vol. 27, no. 8, pp. 1697–
1707, 2016.

[2] Y. Zhang, R. Yu, S. Xie, and W. Yao, “Home m2m networks: Architectures,
standards, and qos improvement,” Communications Magazine IEEE, vol. 49, no. 4,
pp. 44–52, 2011.

[3] S. Tuarob, S. Bhatia, P. Mitra, and C. L. Giles, “Algorithmseer: A system for


extracting and searching for algorithms in scholarly big data,” IEEE Transactions
on Big Data, vol. 2, no. 1, pp. 3–17, 2016.

[4] Y. Sun and F. Gu, “Compressive sensing of piezoelectric sensor response


signal for phased array structural health monitoring,” International Journal of
Sensor Networks, vol. 23, no. 4, pp. 258–264, 2017

[5] J. Shen, C. Wang, C.-F. Lai, A. Wang, and H.-C. Chao, “Direction density-
based secure routing protocol for healthcare data in incompletely predictable
networks,” IEEE Access, vol. 4, pp. 9163– 9173, 2016.

[6] F. Xia, W. Wang, T. M. Bekele, and H. Liu, “Big scholarly data: A survey,”
IEEE Transactions on Big Data, vol. 3, no. 1, pp. 18–35, 2017.

[7] Z. Zhou, C. Gao, C. Xu, Y. Zhang, S. Mumtaz, and J. Rodriguez, “Social big
data based content dissemination in internet of vehicles,” IEEE Transactions on
Industrial Informatics, vol. 14, no. 2, pp. 768–777, 2018.

[8] W.-L. Chin, W. Li, and H.-H. Chen, “Energy big data security threats in iot-
based smart grid communications,” IEEE Communications Magazine, vol. 55, no.
10, pp. 70–75, 2017.

[9] Y. Xiang, W. Zhou, and M. Guo, "Flexible deterministic packet marking: An IP traceback system to find the real source of attacks," IEEE Transactions on Parallel and Distributed Systems, vol. 20, no. 4, pp. 567–580, 2009.

[10] X. Chen, J. Li, J. Ma, Q. Tang, and W. Lou, "New algorithms for secure outsourcing of modular exponentiations," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 9, pp. 2386–2396, 2014.
