

Communication
Construction and Application of a Knowledge Graph
Xuejie Hao 1, Zheng Ji 2, Xiuhong Li 1,*, Lizeyan Yin 3, Lu Liu 1, Meiying Sun 1, Qiang Liu 1
and Rongjin Yang 4

1 College of Global Change and Earth System Science, Beijing Normal University, Beijing 100875, China;
[email protected] (X.H.); [email protected] (L.L.); [email protected] (M.S.);
[email protected] (Q.L.)
2 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China;
[email protected]
3 Institute of Computing, Modeling and Their Applications, Clermont-Auvergne University,
63000 Clermont-Ferrand, France; [email protected]
4 Chinese Research Academy of Environmental Sciences, No. 8, Da Yang Fang, An Wai, Chao Yang District,
Beijing 100012, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-136-2116-6693

Abstract: With the development and improvement of modern surveying and remote-sensing technology,
data in the fields of surveying and remote sensing have grown rapidly. Because surveying and
remote-sensing data are large in scale, heterogeneous, diverse, and loosely organized, effectively
obtaining information and knowledge from the data can be difficult. Therefore, this paper proposes
a method of using ontology for heterogeneous data integration. Considering that large surveying and
remote-sensing data are heterogeneous, decentralized, and dynamically updated, this paper constructs
a knowledge graph for surveying and remote-sensing applications. First, data are extracted. Second,
using the ontology editing tool Protégé, the knowledge graph mode level is constructed. Then, the
data are stored in a relational database, and the D2RQ tool maps the data from the mode level's
ontology to the data layer. Next, the D2RQ tool is used to open a SPARQL Protocol and RDF Query
Language (SPARQL) endpoint service that provides functions such as query and reasoning over the
knowledge graph. The graph database is then used to display the knowledge graph. Finally, the
knowledge graph is used to describe the correlation between the fields of surveying and remote sensing.

Keywords: knowledge graph; surveying; remote sensing; knowledge visualization

Citation: Hao, X.; Ji, Z.; Li, X.; Yin, L.; Liu, L.; Sun, M.; Liu, Q.; Yang, R. Construction and
Application of a Knowledge Graph. Remote Sens. 2021, 13, 2511. https://doi.org/10.3390/rs13132511

Academic Editor: Frédérique Seyler

Received: 5 June 2021; Accepted: 24 June 2021; Published: 26 June 2021

1. Introduction
In 2012, Google officially proposed the concept of the knowledge graph, which aims
to assist intelligent search engines [1]. After the knowledge graph was formally proposed,
it was quickly popularized in academia and industry and was widely used in intelligent
search, personalized recommendation, intelligence analysis, anti-fraud and other fields.
Essentially, a knowledge graph is a semantic network and knowledge base with a directed
graph structure that describes entities (concepts) and their relationships in the physical
world in symbolic form. The knowledge graph is represented in the form of triples (Entity1-
Relation-Entity2), where the nodes of the graph represent entities or concepts, and the
edges represent the relationships between entities or concepts [2].
Knowledge graphs are a new method of knowledge representation. In essence, the
semantic web is an early form of the knowledge graph, which is an abstract concept
that describes entities and relationships between entities in the objective world and is
also a networked knowledge base composed of entities, properties, and relationships. A
knowledge graph is a collection of concepts, entities, and their relationships in the abstract
physical world [3]. The knowledge graph has changed the traditional method of information
retrieval. On the one hand, knowledge graphs describe the semantic and attribute
relationship between concepts to reason about concepts through fuzzy string matching.
Conversely, knowledge graphs display the structured knowledge of classification and
arrangement to users through the grid graphic information display interface. Concurrently,
knowledge graphs solve the problem of manual filtering of useless information, which has
practical significance for an intelligent society [4].
Knowledge graphs can be divided into general knowledge graphs and domain knowl-
edge graphs. Knowledge graphs used in surveying and remote sensing are domain knowl-
edge graphs. To date, few studies have investigated knowledge graphs for surveying and
remote sensing. Wang and others proposed a framework for remote-sensing interpretation
of knowledge graphs [5]. Xie and others designed a framework for the construction of a
large knowledge graph in the field of remote-sensing satellites [6]. Geoscience knowledge
graphs are also used and have been studied in detail. Xu and others proposed the concept,
framework, theory, and characteristics of geoscience graphs based on geoscience graphs
and geoscience information graphs. Jiang proposed the process of constructing knowl-
edge graphs and explored the key technology of geographic knowledge graphs [7]. Lu
and others systematically reviewed the research progress on topics related to geographic
knowledge graphs and analysed the key issues of current geoscience knowledge graph con-
struction [8]. However, these studies all discuss the construction of knowledge graphs in
theory and do not provide any real examples of constructing knowledge graphs. Based on
previous research experience, this paper describes an example of constructing knowledge
graphs for surveying and remote sensing.
The construction of this knowledge graph can provide services for users of surveying
and remote sensing, surveying and remote-sensing experts, and developers of surveying
and remote-sensing software. Users can visualize knowledge through the knowledge
graph and discover the relationship between knowledge more easily. Searches conducted
through the knowledge graph improve the user’s search efficiency. Remote-sensing experts
gain insights and discover new rules in the field of surveying and remote sensing through
the inference function of the knowledge graph. Software developers can integrate the
knowledge graph into the remote-sensing product e-commerce platform, which can not
only improve search efficiency, but also accurately recommend products for users. The
professional field of surveying and remote sensing is a ladder of social progress. As society
becomes increasingly intelligent, it is important to study the technology needed for intelligent
surveying and remote sensing.

2. The Theoretical Basis for the Construction of Knowledge Graphs


The system framework of a knowledge graph in the fields of surveying and remote
sensing primarily refers to its construction mode structure, which describes the process of
constructing a knowledge graph in the fields of surveying and remote sensing (Figure 1).
The process of constructing a knowledge graph in the field of surveying and remote sensing can be
divided into two parts: mode level construction and data layer construction.
The mode level is built on the data layer, and the factual expression in the data layer is
standardized through the ontology library. The ontology is the conceptual template of the
structured knowledge base. The knowledge base constructed through the ontology library
has the advantages of a strong hierarchical structure and low redundancy. The data layer
is composed of a series of knowledge entities or concepts. Knowledge is stored in units of
facts, and the data layer expresses knowledge in the form of triples (Entity 1-Relation-Entity
2) or (Entity-Attribute-Attribute Value). The logical structure of the knowledge graph in
the field of surveying and remote sensing is shown in Figure 2.
Figure 1. The framework of the knowledge graph in the fields of surveying and remote sensing.

Figure 2. The logical structure of the knowledge graph in the fields of surveying and remote sensing.
2.1. Design of Disciplinary Knowledge Graph Mode in the Field of Surveying and Remote Sensing
The mode level construction process begins from the application domain of the knowledge
graph, determines the knowledge scope of the ontology, and then constructs the ontology.
The construction of the ontology is the key link in the construction of the pattern layer.
The ontology is an explicit description of the shared conceptual mode; it is used to add
semantics to semantic web pages and to describe the relationships between concepts [9].
The ontology is the pattern layer, conceptual mode, and logical basis of the knowledge
graph. The mode level of the knowledge graph in the field of surveying and remote sensing
uses the ontology to realize the storage and management of knowledge. When constructing
the ontology, we mainly build a collection of concepts from the subject words of
remote-sensing teaching materials. The relationships are mainly hierarchical relationships
between upper and lower positions. Entity filling is mainly obtained from structured data
sources, and the ontology database is filled in a bottom-up manner.
A domain ontology is a specialized ontology that describes the concepts in a specific
domain (such as remote sensing, meteorology, or environment) and the relationships between
those concepts. The domain ontology of surveying and remote sensing, as a unique ontology,
can clearly describe the relationships between and among the concepts belonging to this
domain. The basic principles of ontology construction are clarity, objectivity, consistency,
minimum coding deviation, and minimum ontology constraints.
Ontology construction uses a seven-step method. The steps of this method are as follows:
(1) determine the professional field and category of the ontology; (2) examine the possibility
of reusing existing ontologies; (3) list the important terms in the ontology; (4) define the
hierarchical relationships between classes; (5) define the properties of the classes; (6) define
the constraints on the properties; and (7) create instances [10].

2.2. The Data Layer of the Knowledge Graph in the Field of Surveying and Remote Sensing
2.2.1. Data Layer Relational Structure Design
In this article, the data extraction method is used to construct the data layer. When
extracting related entities, the relationship between the entities in the field of surveying
and remote sensing is first defined. The relationship between entities is mainly the upper–
lower relationship and the non-upper–lower relationship. Research on non-subordinate
relationships in the knowledge graphs focuses on two aspects: entity attributes and entity
relationships. The non-subordinate relationship of entity attributes is mainly used for
triples: entity-attribute-attribute value, where the attribute depends on the corresponding
entity. Each attribute will have its corresponding attribute value. In the definition of entity
relationships, there is always a direct or indirect relationship between entities. Through the
relationship analysis between entities, the general relationship between the entities used is
defined. The relationship between entities mainly includes the relationship of belonging,
containing, etc. These relationships are common relationships among entities in the field of
surveying and remote sensing.
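As a minimal illustration of the two triple forms described above, the sketch below (plain Python,
with invented example values from the surveying domain) shows how a relationship fact and an
attribute fact can be held side by side; the field names are only illustrative and are not part of
the paper's data model.

from collections import namedtuple

# (Entity 1, Relation, Entity 2) for relationships between entities
RelationTriple = namedtuple("RelationTriple", ["head", "relation", "tail"])
# (Entity, Attribute, Attribute value) for non-subordinate attribute facts
AttributeTriple = namedtuple("AttributeTriple", ["entity", "attribute", "value"])

facts = [
    RelationTriple("collinear equation", "belong_to", "surveying"),
    AttributeTriple("collinear equation", "genre", "surveying"),
]

for fact in facts:
    print(fact)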

2.2.2. Data Layer Construction


The data layer construction process is based on unstructured data and uses manual,
automatic, or semiautomatic techniques to extract knowledge from the data and store
it in the database. Data acquisition is a key step in the construction of the data layer.
The core of knowledge acquisition is how to automatically obtain knowledge elements of
structured information, such as entities, relationships, and properties, from unstructured
and semi-structured data sources. Typically, automatic or semi-automatic machine learning
technology is used to extract entities, relationships, attributes, and other information about
the knowledge graph from open multisource data [11]. Knowledge acquisition includes
the extraction of entities, relationships, properties, etc. Entity acquisition is the process of
automatically identifying named entities (knowledge points, type names, etc.) from text
data sets. Relation extraction is the process of discovering semantic relationships between
entities from data sources using methods such as machine learning. Attribute extraction
is the process of extracting attribute information about entities from data sources. The
difference between attribute extraction and relation extraction is that attribute extraction must
identify not only the attribute name of an entity but also its attribute value. Therefore,
most studies are based on rules for extraction [12].
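As a hedged illustration of the rule-based extraction mentioned above, the following sketch pulls
(entity, attribute, value) triples out of simple definition-style sentences with a regular
expression. The pattern and the sample sentence are invented for illustration and are not the
extraction rules actually used in this paper.

import re

# A toy rule: "<entity> has a <attribute> of <value>."
RULE = re.compile(r"(?P<entity>[\w\s-]+?) has an? (?P<attribute>[\w\s-]+?) of (?P<value>[\w\s.-]+)\.")

def extract_attributes(sentence):
    """Return (entity, attribute, value) triples matched by the rule."""
    return [(m.group("entity").strip(), m.group("attribute").strip(), m.group("value").strip())
            for m in RULE.finditer(sentence)]

print(extract_attributes("The Landsat-8 satellite has a revisit period of 16 days."))
# [('The Landsat-8 satellite', 'revisit period', '16 days')]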

3. Construction of Knowledge Graphs


Construction of knowledge graphs is the core content of this article. It includes
five parts: data acquisition and storage, ontology construction and storage, ontology
and database mapping, query and reasoning of knowledge graphs, and visualizing the
knowledge graph on Neo4j.

3.1. Data Acquisition and Storage


3.1.1. Data Acquisition
The data used in this article are from surveying and remote sensing. The data source
is a textbook related to the field of surveying [13] and remote sensing [14]. DeepDive
(http://deepdive.stanford.edu/, accessed on 6 May 2021) was used to extract semi-structured
table data from unstructured text data [15]. DeepDive can extract structured data from
unstructured data and perform a series of data processing steps to build a knowledge base
and extract relationships. It is very good at handling data sources in different formats.
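The data-processing step described below (step (2) ① of the extraction process) keeps only a
document id and the document content and stores them as a CSV file before loading them into
Postgres. A minimal preprocessing sketch of that step might look as follows; the folder name, the
output file name and the way the textbook text is split into documents are assumptions, not the
exact scripts used by the authors.

import csv
from pathlib import Path

def build_articles_csv(text_dir, out_csv="articles.csv"):
    """Write one row per text file: (document id, document content)."""
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for i, path in enumerate(sorted(Path(text_dir).glob("*.txt"))):
            content = path.read_text(encoding="utf-8").replace("\n", " ").strip()
            writer.writerow([f"doc{i}", content])  # matches articles(id text, content text)

build_articles_csv("textbook_pages")  # hypothetical folder of cleaned textbook text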
DeepDive has good database support; it supports PubMed and other database data sources, as
well as data sources that have already been processed with natural language tools. DeepDive has
established a framework to standardize the construction process of the knowledge base system and
allows users to design their own extractors and markers according to the knowledge base that needs
to be constructed. The relationship extraction process of DeepDive is shown in Figure 3. The
DeepDive-based domain text knowledge extraction method includes the following steps [16]:
(1) Data processing. First, the original corpus is loaded and the natural language processing (NLP)
tags are added. A set of candidate relationships and the sparse feature representation of each
candidate relationship are extracted.
(2) Distant supervision with data and rules. Various strategies are then used to supervise the
data set so that machine learning can be used to learn the weights of the model.
(3) Learning and inference: model specification. The advanced configuration of the model is then
specified.
(4) Error analysis and debugging. Finally, we show how to use DeepDive's tags and its error
analysis and debugging tools.

Figure 3. The relationship extraction process of DeepDive [15].

The extraction process of domain knowledge based on DeepDive is as follows:
(1) Experiment preparation (preparation before knowledge extraction). First, DeepDive stores the
involved input, intermediate and output data in a relational database. DeepDive supports many
databases, such as Postgres, Greenplum and MySQL. The database used in this experiment is Postgres.
(2) Data processing. This part is divided into four steps: ① Loading the original input data.
First, we convert the target data into an electronic text format. Only the two fields of document
id and document content are reserved. We store the cleaned data in a comma-separated values (CSV)
format file and then import the original text data into the associated database. We set the data
format of the document storage in the app.ddlog file; there are two text fields in the articles
table, articles (id text, content text). Then, we put the compressed file of the original data into the
specified input folder and run the “DeepDive compile” command. Then, we execute
the “DeepDive do articles” command, which imports the original data into the articles
table of the associated database. At this time, we can query the imported raw data
by executing the query command. ② Adding NLP markups. We use the CoreNLP
natural language processing system to add annotations to the original data. The steps
of NLP are: first, the input original article is divided into sentences. The sentence is
divided into words, and the part-of-speech tags, standard forms, dependency analysis
and entity recognition tags of the words in the sentence are obtained. After NLP, some
commonly used entities (person names, place names, etc.) can be marked. It is also
necessary to ensure further entity identification of the data processed by NLP. The
input is the data in the sentences table, and the output is the marked data. Finally,
we import the final marked data into the sentences_new table in the database. ③
Extracting candidate relation mentions. DeepDive provides corresponding input
and output interfaces, allowing users to design their own entity or relationship extractors.
Generally speaking, an SQL (Structured Query Language) statement is used as the
input interface to extract data from the database; the output is the corresponding
table in the database. We perform entity extraction on the data after entity recognition
and then establish the corresponding database table structure. ④ Extracting features
for each candidate. First, we extract the feature description and store the feature
in the func_feature table. The purpose is to use certain attributes or characteristics
to represent each candidate pair. There is a library DDlib that can automatically
generate features in DeepDive, which defines features that are not dependent on the
domain. There are also many dictionaries in the DDlib library. These dictionaries
contain words related to the correct classification of descriptions and relationships
and are usually combined with domains and specific applications. We declare the
extract_func_features function in app.ddlog. The input of this function includes the
information of the two entities in the entity_mention table and the NLP results in the
sentence where the two entities are located. The output is the two entities and their
characteristics.
(3) Distant supervision with data and rules. We will use remote supervision to provide
noisy label sets for candidate relationships to train machine learning models. Gener-
ally speaking, we divide the description method into two basic categories: mapping
from secondary data for distant supervision and using heuristic rules for distant
supervision [17]. However, we will use a simple majority voting method to solve
the problem of multiple labels in each example. This method can be implemented
in ddlog. In this method, first, we sum the labels (all −1, 0, or 1). Then, we simply
threshold and add these labels to the decision variable table has_spouse. In addition,
we also need to make sure that all the spouse candidates who are not marked with
rules are not included in this table. Once again, we execute all the above.
(4) Learning and inference: model specification. We need to specify the actual model on which
DeepDive will perform learning and inference. DeepDive will learn the parameters of
the model (the weights of features and the potential connections between variables).
Then we perform statistical inference on the learned model to determine the
probability that each variable of interest is true. ① Specifying prediction variables. In
our experiment, we have a variable to predict for each spouse candidate mention.
In other words, we want DeepDive to predict the value of a Boolean variable for each
mentioned spouse candidate to indicate whether the value is correct. DeepDive can not
only predict the value of these variables but also predict the marginal probability,
that is, DeepDive's confidence in each prediction. ② Specifying features. We need to
define the following: each has_spouse variable will be connected to the elements of
the corresponding spouse_candidate row; we want DeepDive to learn the weights of
these elements from the data we remotely supervise; and those weights should be the
same for the same feature across all instances. ③ Specifying
connections between variables. We can use learned weights or given weights to
specify the dependencies between predictors. In the experiment, we specify two such
rules, which have fixed (given) weights. First, we define the symmetric connection,
that is, if the model considers that a person mention p1 and another person mention
p2 express a spouse relationship in the sentence, then it should also consider the opposite.
Second, the model should not be strongly biased towards everyone mentioning a sign of
marriage; therefore, we use negative weights for this rule. ④ Finally, we want to perform
learning and inference using the specified model. This will build a model based on the data
in the database, learn the weights, infer the expected or marginal probabilities of the
variables in the model and then load the results back into the database. In this way, we can
see the probability of the has_spouse variable inferred by DeepDive.
(5) Error analysis and debugging. To accurately analyze the experimental results, we
first declare a score or a user-defined query sentence and define the part of the
labeled data used for training. DeepDive uses this score to estimate the accuracy of the
experiment. We declare these definitions in deepdive.conf and define
deepdive.calibration.holdout_fraction as 0.25. The test set is 75% of the labeled data, and
the test set is used to verify the correctness of the experimental results. Approximately
1400 labeled data and approximately 1000 data were used for testing. The graph on the left
in Figure 4 is the correct rate graph. Under ideal conditions, the red curve should be
close to the blue calibration line. However, this is not the case. It may be caused by
the sparseness and noise of the training data and the test data. The middle graph in
Figure 4 is the predicted number graph of the test set. The forecasted quantity map
usually presents a "U" shape. The graph on the right in Figure 4 is the predicted
probability quantity graph of the entire data set. Among them, prediction data
falling in the 0.5–0.6 probability interval indicate that there are still some hidden
types of instances, and the features of DeepDive are insufficient for these instances.
The predicted data whose probability does not fall at (0, 0.1) or (0.9, 1.0) are
the data to be extracted. An important indicator to improve the quality of the system
is to re-estimate the above data and move it into the probability interval (0, 0.1) or (0.9, 1.0).

Figure 4. The test set correct rate calibration plot (Left), test set prediction plot (Middle),
prediction plot for all data sets (Right).
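To make the majority-voting strategy of step (3) concrete, the sketch below shows the idea in plain
Python, outside of DeepDive's DDlog implementation: the noisy labels attached to one candidate
(each −1, 0 or 1) are summed and thresholded into a single decision. It is only an illustration of
the voting rule described above, not the authors' code.

def majority_vote(labels):
    """Collapse noisy labels (-1, 0, 1) for one candidate into a single label."""
    total = sum(labels)
    if total > 0:
        return 1      # treated as a positive example
    if total < 0:
        return -1     # treated as a negative example
    return 0          # left unlabeled

print(majority_vote([1, 1, -1, 0]))   # 1
print(majority_vote([-1, 0, 0]))      # -1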

3.1.2. Data Storage

The data of the knowledge graph are usually expressed in the form of triples, representing the
relationships between entities or between entities and attribute values. In this article, the
relational database MySQL and the graph database Neo4j are used. The purpose of using MySQL is to
realize the mapping between data and ontology and then to use existing tools to realize data query
and reasoning. The purpose of using Neo4j is to update and search data. Neo4j can directly display
query results in the form of graphs to realize the function of knowledge visualization. Neo4j has
local storage and data processing functions that are different from general databases, which can
ensure high readability and integrity of data.
The extracted data are processed and stored in three tables, called "knowledge",
"knowledge_to_genre", and "genre", which are imported into a MySQL database. The E-R (Entity
Relationship) diagram is shown below (Figure 5).

Figure 5. E-R diagram of data stored in the database.

In the entity table called "knowledge", each knowledge point is numbered, the attribute
"knowledge_id" is created, and this attribute is used as the primary key. In the entity table
called "genre", each category is numbered, the attribute "genre_id" is created, and this attribute
is used as the primary key. In the relation table called "knowledge_to_genre", the properties
"genre_id" and "knowledge_id" are set as foreign keys referencing the entity tables "genre" and
"knowledge" to create the relationship between genre and knowledge. For example, the knowledge
point "collinear equation" belongs to the field of "surveying"; "genre" is a class. In the "genre"
class, there are two entities, "surveying" and "remote sensing"; "genre" means the genre to which a
knowledge node belongs. A knowledge node belongs to either the "surveying" genre or the
"remote-sensing" genre.
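A minimal sketch of the three tables in the E-R diagram, written as SQL DDL issued from Python, is
given below. It assumes a local MySQL instance and the mysql-connector-python package; the column
types and the genre_name column are assumptions, since the paper only names the key columns and the
attributes used in Codes 1 and 2.

import mysql.connector  # pip install mysql-connector-python

DDL = [
    """CREATE TABLE IF NOT EXISTS genre (
           genre_id INT PRIMARY KEY,
           genre_name VARCHAR(64))""",
    """CREATE TABLE IF NOT EXISTS knowledge (
           knowledge_id INT PRIMARY KEY,
           knowledge_name VARCHAR(255),
           knowledge_genre VARCHAR(64))""",
    """CREATE TABLE IF NOT EXISTS knowledge_to_genre (
           knowledge_id INT,
           genre_id INT,
           knowledge_to_genre VARCHAR(64),
           FOREIGN KEY (knowledge_id) REFERENCES knowledge(knowledge_id),
           FOREIGN KEY (genre_id) REFERENCES genre(genre_id))""",
]

conn = mysql.connector.connect(host="localhost", user="root", password="***", database="kg")
cur = conn.cursor()
for stmt in DDL:
    cur.execute(stmt)
conn.commit()
conn.close()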
3.2. Ontology Construction and Storage
3.2.1. Ontology Construction

There are two ways to construct ontologies: top-down and bottom-up. The ontology construction of an
open-domain knowledge graph typically uses a bottom-up method to automatically extract concepts,
types of concepts, and relationships between concepts from the knowledge graph; the open world is
too complex to be considered in a top-down manner, and as the world changes, the corresponding
concepts keep growing. Most domain knowledge graph ontology construction uses a top-down approach.
On the one hand, the concept and scope of a domain knowledge graph are fixed or controllable
compared with an open-domain knowledge graph; on the other hand, the domain knowledge graph must
yield high-accuracy results. Currently, domain knowledge graphs are widely used in voice
assistants [18]. These domain knowledge graphs can meet most user needs while ensuring accuracy.
This article uses a top-down approach to construct the ontology, and the creation tool is Protégé
(https://protege.stanford.edu, accessed on 6 May 2021), an ontology editing tool [19]. The specific
creation process is as follows. First, we create the ontology classes: the two classes knowledge
and genre. Note that all classes are subclasses of "Thing", and all classes must be mutually
exclusive; an instance can only belong to one of the two classes. Second, the relationships between
the classes (i.e., the object properties) are created. This article created the object property
"belong_to" to indicate that a certain knowledge point is in a certain field (type): we classify
the knowledge extracted from the "Surveying" textbook into the "Surveying" class and define the
relationship between this knowledge and the class "surveying" as "belong_to"; similarly, we
classify the knowledge extracted from the "Remote Sensing" textbook into the "Remote Sensing" class
and define the relationship between this knowledge and the class "Remote Sensing" as "belong_to".
The attribute "domain" of "belong_to" is defined as the class "knowledge", and its attribute
"range" is the class "genre". "Domain" indicates which class the attribute belongs to, and "range"
represents the value range of the attribute. The inverse of this attribute is also defined, so the
ontology describes reasoning rules for knowledge reasoning. The class, object properties, and data
properties are shown below (Figure 6). For example, in resource description framework (RDF) data,
the knowledge point "electromagnetic wave" belongs to the class "remote sensing"; when querying,
the knowledge point "electromagnetic wave" can also be found in the class "remote sensing".
Finally, the class properties (data properties) are created and are similar to the object
properties. Concurrently, Protégé also has a visual display function to show the structure of the
ontology. The ontology structure is shown below (Figure 7).

Figure 6. Construction of ontology class (left), object properties (middle), and data properties (right).

Figure 7. Visual display of ontology structure.
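The same mode-level ontology can also be written programmatically. The sketch below, using the
Python rdflib package rather than Protégé, declares the two classes and the belong_to object
property with its domain and range, and serializes the result as RDF/XML, the OWL syntax that
Protégé reads; the namespace URI and the file name are assumptions.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/srs-ontology#")  # assumed namespace
g = Graph()
g.bind("srs", EX)

# Two mutually exclusive classes: knowledge and genre
g.add((EX.knowledge, RDF.type, OWL.Class))
g.add((EX.genre, RDF.type, OWL.Class))
g.add((EX.knowledge, OWL.disjointWith, EX.genre))

# Object property belong_to with domain "knowledge" and range "genre"
g.add((EX.belong_to, RDF.type, OWL.ObjectProperty))
g.add((EX.belong_to, RDFS.domain, EX.knowledge))
g.add((EX.belong_to, RDFS.range, EX.genre))

# Example individuals, as in the text: "electromagnetic wave" belongs to "remote sensing"
g.add((EX.remote_sensing, RDF.type, EX.genre))
g.add((EX.electromagnetic_wave, RDF.type, EX.knowledge))
g.add((EX.electromagnetic_wave, EX.belong_to, EX.remote_sensing))

g.serialize("srs_ontology.owl", format="xml")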
3.2.2. Ontology Storage

The ontology storage mode in this article is the Relational Database Management System (RDBMS).
The principle of RDBMS storage is to map the ontology to one or more tables and then divide it into
several modes according to the mapping mode, such as horizontal storage, decomposition storage,
vertical storage and hybrid storage. As the storage mechanism and data management capabilities of
relational databases are relatively mature, relational database storage is widely used among
storage methods.
This article uses RDF and the web ontology language (OWL) to describe ontologies [20]. RDF is used
to represent any resource information and describes resources through the mode of
attribute-attribute value [21]. OWL is used to express the relationships between classes, the
constraints of set cardinality, the equality relationship, the attribute type, and the
characteristics of the attribute [22]. Compared with other ontology description languages, OWL has
better description ability and more description vocabulary. The description vocabulary expands the
reasoning and query capabilities of the ontology.
The ontology is constructed and stored using the ontology editing tool Protégé. It can not only
construct and operate the ontology but also visually display the generated ontology, including the
display of the hierarchical relationships between ontology concepts, as well as the visual display
of ontology conceptual entities, entity relationships, and entity attributes. When the ontology is
created using the ontology description language OWL and Protégé, semiautomatic construction can be
achieved. Protégé can also reason about the ontology based on the hierarchical relationship of the
ontology, with the help of Jena's query mode, and can also realize the editing operation of the
ontology in the RDF and OWL languages [23].
3.3. Ontology and Database Mapping

Two standards to convert the structured data of a relational database into RDF-format data have
been developed by the RDB2RDF working group of W3C. The process is applied by the D2RQ tool [24].
The first standard is direct mapping, which is defined by the rule that tables in the database are
classes in the associated ontology. For example, there are 3 tables for the data stored in MySQL.
After mapping, the ontology has 2 classes instead of the 3 classes previously defined. The columns
of the tables are attributes, and the rows are instances. The content in each cell of a table is
text. If a certain column in the cell is a foreign key, then it will not be able to map the data in
the database to the defined ontology. In this case, the shortcomings of direct mapping become
apparent. In response to this defect, the RDB2RDF group proposed R2RML, which allows users to
flexibly edit and set mapping rules. This mapping also provides the ability to view existing
relational data in the RDF data mode, which is represented by mapping the structure selected by the
customer and the target vocabulary. An R2RML mapping is an RDF graph and is recorded in Turtle
syntax. R2RML supports different types of mapping implementations. The processor can provide
virtual SPARQL Protocol and RDF Query Language (SPARQL) endpoints, generate RDF dumps, or provide
linked data interfaces on the mapped relational data [25]. The specific mapping diagram is shown
below (Figure 8).

Figure 8. Schematic diagram of ontology to database mapping.
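As a hedged illustration of what an R2RML mapping for the knowledge table could look like, the
fragment below embeds a single TriplesMap in Turtle and parses it with rdflib to confirm that it is
well formed; the URI template, the target class and the mapped column are assumptions and not the
mapping file actually used with D2RQ in this experiment.

from rdflib import Graph

R2RML_EXAMPLE = """
@prefix rr:  <http://www.w3.org/ns/r2rml#> .
@prefix srs: <http://example.org/srs-ontology#> .

<#KnowledgeMap>
    rr:logicalTable [ rr:tableName "knowledge" ] ;
    rr:subjectMap [
        rr:template "http://example.org/knowledge/{knowledge_id}" ;
        rr:class srs:knowledge
    ] ;
    rr:predicateObjectMap [
        rr:predicate srs:knowledge_name ;
        rr:objectMap [ rr:column "knowledge_name" ]
    ] .
"""

g = Graph().parse(data=R2RML_EXAMPLE, format="turtle")
print(len(g), "mapping triples parsed")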

3.4. Query and Reasoning

This article describes how to use D2RQ to set up a SPARQL service endpoint and run query operations
in the browser. A SPARQL endpoint provides a service that is compliant with the SPARQL protocol.
Through the default or a user-defined mapping file, RDF data can be queried under the rules of RDF.
In other words, to complete the final query, D2RQ converts the SPARQL query into an SQL statement
based on the mapping file and then returns the result to the user [26].
The following steps are executed: (1) start the D2R server; (2) enter the browser to start the
SPARQL endpoint; and (3) enter the SPARQL statement to execute the query. Concurrently, query
operations can also be performed by writing Python scripts. The third-party library SPARQLWrapper
for Python can easily interact with endpoints.
Using D2RQ to open the endpoint service has two disadvantages: it does not support publishing RDF
data directly to the network through the endpoint, and it does not support inference. To solve
these problems, certain components of Apache Jena, namely Fuseki, Jena and the tuple database
(TDB), are investigated experimentally. Fuseki is a SPARQL server provided by Apache Jena, which
primarily runs as a web application or as an embedded server. Jena provides resource description
framework schema (RDFS) support. In the case of a single machine, the TDB storage layer technology
can provide high RDF storage performance [27]. Next, we must define reasoning rules and conduct
knowledge reasoning.
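A small query script of the kind mentioned above might look like the following. It assumes the D2R
server is running locally on its default port and that knowledge resources are typed with the class
from the mode-level ontology, so the endpoint URL and the vocabulary URIs are assumptions rather
than the authors' exact configuration.

from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed local D2R server endpoint; adjust host and port to your deployment.
sparql = SPARQLWrapper("http://localhost:2020/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?knowledge ?label
    WHERE {
        ?knowledge a <http://example.org/srs-ontology#knowledge> .
        OPTIONAL { ?knowledge rdfs:label ?label }
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["knowledge"]["value"], row.get("label", {}).get("value", ""))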

3.5. Visualizing the Knowledge Graph on Neo4j


Based on the literature [2], the knowledge graph has basic query and reasoning func
tions; however, the visualization function of the knowledge graph has not been reflected
3.5. Visualizing the Knowledge Graph on Neo4j
Based
Remote Sens. 2021, 13, 2511on
the literature [2], the knowledge graph has basic query and reasoning11 of 19
functions; however, the visualization function of the knowledge graph has not been
reflected. Therefore, the graph database Neo4j is used to visualize the knowledge graph
[30]. Neo4j provides good the
3.5. Visualizing visualizations
Knowledge Graphof knowledge graphs. The specific
on Neo4j
implementation processBased and visualization
on the literatureare
[2],shown below.graph
the knowledge The process
has basic of generating
query the func-
and reasoning
tions; however,
knowledge graph is as follows. the visualization function of the knowledge graph has not been reflected.
Therefore, the graph database Neo4j is used to visualize the knowledge graph [28]. Neo4j
First, table data are converted into the format of CSV. These files are then moved to the import directory of Neo4j because Neo4j defaults to opening files in the import directory. After starting the database, the Cypher language [29] is used to import data, and the construction mode of the node is created: (k:knowledge {name: “absolute orientation”}). This statement creates a node with a knowledge label; the node has a name attribute with the value “absolute orientation” and the variable name k. The Cypher sentence for importing structured entity data into the database is Code 1. The result of Code 1 is shown in Figure 9.

Code 1 Entity Data Imported Based on Cypher Language
LOAD CSV WITH HEADERS FROM “file:///knowledge.csv” AS line MERGE
(k:knowledge{id:line.knowledge_id, name:line.knowledge_name, genre:line.knowledge_genre})

Figure 9. Part of the visualization shows the result of importing entities into the database.

The name of the imported file is “knowledge.csv”, and the label is “knowledge”. Attributes and attribute values correspond to row and column data in the table. The imports of nodes and relationships are similar. The Cypher sentence for importing structured relational data into the database is Code 2. The result of Code 2 is shown in Figure 10.

Code 2 Relational Data Imported Based on Cypher Language
LOAD CSV WITH HEADERS FROM “file:///knowledge_to_genre.csv” AS line MATCH
(from:knowledge{id:line.knowledge_id}),(to:genre{id:line.genre_id})
MERGE (from)-[r:belong_to{Relation:line.knowledge_to_genre}]->(to)
Figure 10. Part of the visualization shows the result of importing relationships into the database.
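For completeness, a minimal sketch of how Codes 1 and 2 could be executed from a script with the official Neo4j Python driver is shown below; the connection URI, the credentials and the assumed CSV column names (knowledge_id, knowledge_name, knowledge_genre, genre_id) are illustrative, and the genre nodes are assumed to have been imported in the same way beforehand.

from neo4j import GraphDatabase

# Cypher statements corresponding to Codes 1 and 2 above.
CODE_1 = (
    'LOAD CSV WITH HEADERS FROM "file:///knowledge.csv" AS line '
    "MERGE (k:knowledge {id: line.knowledge_id, name: line.knowledge_name, "
    "genre: line.knowledge_genre})"
)
CODE_2 = (
    'LOAD CSV WITH HEADERS FROM "file:///knowledge_to_genre.csv" AS line '
    "MATCH (from:knowledge {id: line.knowledge_id}), (to:genre {id: line.genre_id}) "
    "MERGE (from)-[r:belong_to {Relation: line.knowledge_to_genre}]->(to)"
)

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    session.run(CODE_1)  # import the knowledge entity nodes
    session.run(CODE_2)  # import the belong_to relationships
driver.close()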

The name of the imported file is “knowledge_to_genre”. The partial result of the imported visualization is shown below (Figure 11). In this picture, the two red nodes, “Surveying” and “Remote Sensing”, represent the class of knowledge, and the blue node represents knowledge.

Figure 11. Part of the visualization of the knowledge graph in the field of surveying and remote sensing.
The database contains 1024 nodes, which belong to the two classes of nodes “knowledge” and “genre”. The nodes in the knowledge graph are the primary components of the knowledge graph, primarily the knowledge of surveying and remote sensing. The library also contains 1295 relationships, which represent the relationships between nodes in the class “knowledge” and nodes in the class “genre”. For example, “belong_to” is the belonging relationship between knowledge and type. The library also contains attribute information, such as the “genre” and “name” of the “knowledge” node. Certain basic information for creating the database is shown below (Figure 12).

Figure 12. Part of the information of the knowledge graph knowledge base in the field of surveying and remote sensing.

4. Application Analysis of the Knowledge Graph
Application analysis of a knowledge graph is an important means of verifying the value of the knowledge graph. This article verifies the application value of the knowledge graph in two application scenarios on the smart campus platform: “domain relevance analysis” and “knowledge reasoning in the field of surveying and remote sensing”.

4.1. Domain Relevance Analysis
In many fields, there are various degrees of correlation. For example, in education, there is a common phenomenon in which pieces of knowledge are related between different disciplines of the same major. Zhou and others proposed a method for constructing a scientific knowledge graph based on the degree of interdisciplinary association, which aims to help students quickly understand the relationships between disciplines [30]. Relevance analysis based on the knowledge graph can assist students in choosing courses and improve teaching quality.
In recent years, with the development of science and technology, the “smart campus platform” has risen rapidly. Therefore, in the field of surveying and remote sensing, the knowledge graph can be leveraged in the context of the smart campus platform. The same knowledge is shown to exist in multiple domains, thereby highlighting the association between fields (Figure 13). The association between two domains is related to the amount of knowledge they have in common: the more common knowledge there is, the higher the association between the fields. In this research, if one piece of knowledge belongs to both “surveying” and “remote sensing”, then it is common knowledge of the two fields, and the more common knowledge two fields share, the higher their degree of association.
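As a sketch of how this association degree could be computed from the Neo4j knowledge base built in Section 3.5 (the node labels, the belong_to relationship and the name properties mirror that section but are assumptions here, as are the connection details):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def knowledge_of(field):
    # Collect the names of all knowledge nodes attached to the given genre/field.
    query = (
        "MATCH (k:knowledge)-[:belong_to]->(g:genre {name: $field}) "
        "RETURN k.name AS name"
    )
    with driver.session() as session:
        return {record["name"] for record in session.run(query, field=field)}

surveying = knowledge_of("Surveying")
remote_sensing = knowledge_of("Remote Sensing")
shared = surveying & remote_sensing
association = len(shared) / len(surveying | remote_sensing)  # e.g., 273 / 1022 ≈ 26.7%
print(f"common knowledge: {len(shared)}, association degree: {association:.1%}")
driver.close()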
The domain is the class in ontology. In this research, the domains of surveying and remote sensing respectively correspond to the classes of “Surveying” and “Remote Sensing” in the knowledge graph. Among all the knowledge, some knowledge belongs to the class of “Surveying”, some belongs to the class of “Remote Sensing”, and some belongs to both classes. The knowledge belonging to both is the common knowledge of the two fields, and it determines the degree of association between the two fields (Figure 13).
The picture (Figure 13) contains the nodes of “Surveying” and “Remote Sensing”, as well as all the knowledge belonging to the fields of remote sensing and surveying, together with the “belong_to” relationships between the class nodes and the knowledge. The knowledge graph contains 1022 knowledge points, 273 of which are shared by the two disciplines, so the repetition rate is 26.7%. This degree of relevance can be used as a reference when selecting courses. If you were a student in a field unrelated to surveying and remote sensing and sought a general, comprehensive understanding of the field, then, to save time and cost, you could choose just the two subjects of photogrammetry and remote sensing principles and applications. Accordingly, the graph can also help students majoring in this field quickly see the degree to which the knowledge content of different subjects overlaps, quickly pick up a subject close to one they have already studied, and improve their learning efficiency.

Figure 13. Display of relevance between courses.

4.2. Knowledge Reasoning in the Field of Surveying and Remote Sensing
Knowledge reasoning is primarily based on known knowledge to infer new knowl-
edge or distinguish incorrect knowledge. The reasoning function of knowledge is a promi-
nent feature of the knowledge graph. Compared to traditional knowledge reasoning,
knowledge reasoning based on knowledge graphs is more flexible, and its methods are
more diverse. There are many methods; however, the method used in this article is primar-
ily based on the reasoning of description logic and thus primarily introduces knowledge
reasoning based on this method.
The description logic system is primarily divided into four parts: (1) three basic
elements (concepts, relationships and entities); (2) the axiom set, to which the concept
belongs; (3) the assertion set of the entity; and (4) the reasoning mechanism. Reasoning
tools based on supporting OWL DL (description logic) language include FaCT++, Racer,
and Pellet [31]. In addition to existing tools, knowledge can also be reasoned over by writing rules.
In this knowledge graph, the reasoning of knowledge is described by writing rules
based on the reasoning method of description logic. This article primarily involves two
types of reasoning: entity and relational. Entity reasoning infers unknown entities based on
existing entities and relationships (e.g., known entity-relation-unknown entity). Relational
reasoning infers the relationship between one or more entities by editing rules under the
condition of existing entities (e.g., known entity-unknown relationship-known entity).
Generally, missing knowledge is inherent to knowledge graphs, that is, the incompleteness of entities or relationships, so knowledge graph completion can be applied to the knowledge graph in the field of surveying and remote sensing. Knowledge graph completion is an important way to acquire
knowledge. The goal of knowledge graph completion is to find these missing items of
knowledge and add them to the knowledge graph so that the knowledge graph tends to
be complete [32]. A knowledge graph is named G, and its basic components include the entity set E = {e1, e2, . . . , ei} (i is the number of entities), the relation set R = {r1, r2, . . . , rj} (j is the number of relations) and the corresponding triple set T = {(em, rk, en)} (em, en ∈ E, rk ∈ R). Since the numbers of entities E and relationships R in the knowledge graph are limited, there may be some entities and relationships that are not in G. According to the content we want to complete, we can divide knowledge completion into three subtasks: (1) given a partial triple T1 = (?, rk, en), predict the head entity; (2) given a partial triple T2 = (em, rk, ?), predict the tail entity; (3) given a partial triple T3 = (em, ?, en), predict the relationship between the entities. According to whether the entities and relationships
belong to the original knowledge graph, we can divide the knowledge graph completion
into static knowledge graph completion and dynamic knowledge graph completion. The
entities and relationships in the completion of the static knowledge graph are all in the
original knowledge graph. The entities and relationships in the completion of the dynamic
knowledge graph are not in the original knowledge graph. Through the completion of the
knowledge graph, the collection of entities and relationships of the original knowledge
graph can be expanded.
Knowledge graph completion is the most widely used field of knowledge reasoning.
The original intention of a large number of knowledge graph reasoning algorithms is to
be applied to knowledge graph completion, such as the Markov logic network (MLN),
translating relation embeddings (TransR), capsule network-based embedding (CapsE), and
relational graph neural network with hierarchical attention (RGHAT). All the methods
mentioned above can determine whether there is a certain relationship between any entities
by reasoning in the vector space, and then realize the completion of the knowledge graph.
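To make the idea of vector-space completion concrete, a highly simplified, translation-style scoring sketch is shown below; it is not one of the cited algorithms (MLN, TransR, CapsE, RGHAT), and the embeddings are random stand-ins for vectors that a real model would learn from the triple set T.

import numpy as np

rng = np.random.default_rng(0)
entities = ["Aberration", "Remote Sensing", "Geography"]
relations = ["belong_to"]
dim = 16
e_emb = {e: rng.normal(size=dim) for e in entities}
r_emb = {r: rng.normal(size=dim) for r in relations}

def score(head, relation, tail):
    # Lower distance means the triple (head, relation, tail) is judged more plausible.
    return float(np.linalg.norm(e_emb[head] + r_emb[relation] - e_emb[tail]))

# Tail prediction for the partial triple T2 = (Aberration, belong_to, ?):
ranked = sorted(entities, key=lambda tail: score("Aberration", "belong_to", tail))
print(ranked)  # candidate tail entities, best first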
Integrating the knowledge graph in the field of surveying and remote sensing into the
smart campus platform can help students discover new entities or relationships between
entities. For example, relational reasoning is based on the Jena tool, as shown in Code 3 (written in the Jena rule syntax with SPARQL-style prefixes). This code defines a rule named “rule”, which means
that if there is an entity that belongs to “Remote Sensing”, then this entity belongs to
“Geography”, which is (Remote Sensing-belong_to-Geography). For example: knowing
(Aberration-belong_to-Remote Sensing), according to this code you can obtain the result:
(Aberration-belong_to-Geography). The visual display of the whole reasoning process is
shown in Figure 14.

Code 3 Inference Rules Based on the Jena Rule Syntax
@prefix: <http://www.kbdemo.com#>.
@prefix owl: <http://www.w3.org/2002/07/owl#>.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
[rule:(?k:belong_to ?g)(?g:hasname ?n)(?n:genre_name 'Remote Sensing')->(?k rdf:type:Geography)]
[ruleInverse:(?k:belong_to ?g)->(?g:hasKnowledge ?k)]
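Since Jena applies these rules natively in Java, a rough Python equivalent of the first rule's effect is sketched below with rdflib; the vocabulary follows Code 3 (with the hasname/genre_name chain simplified to a single hasname literal), and the sample triples are illustrative only.

from rdflib import Graph, Literal, Namespace, RDF

KB = Namespace("http://www.kbdemo.com#")
g = Graph()
# Illustrative assertions: Aberration belongs to a genre whose name is 'Remote Sensing'.
g.add((KB.Aberration, KB.belong_to, KB.RemoteSensingGenre))
g.add((KB.RemoteSensingGenre, KB.hasname, Literal("Remote Sensing")))

# Rule: if an entity belongs to a genre named 'Remote Sensing', type it as :Geography.
g.update("""
    PREFIX : <http://www.kbdemo.com#>
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    INSERT { ?k rdf:type :Geography }
    WHERE  { ?k :belong_to ?g . ?g :hasname 'Remote Sensing' }
""")

print((KB.Aberration, RDF.type, KB.Geography) in g)  # True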

Figure 14. Visual display of the inference process.

5. Summary and Prospect
In the field of surveying and remote sensing, rapid acquisition, efficient processing and effective application of remote-sensing data are the core tasks. However, in the face of the massive amount of data accumulated, as well as the heterogeneous, decentralized and dynamically updated characteristics of these data, it is difficult to realize the semantic integration, interoperability and sharing-platform construction of massive data application services. Since the knowledge graph is a kind of semantic network, hierarchical interconnection and semantic processing capabilities can be realized in the form of a graphical structure, and the system and relevance of knowledge can be displayed more intuitively. The knowledge graph provides a better organization and management method for isolated information and knowledge. It describes real-world entities, concepts, and the relationships between entities and concepts in a structured and semantic form and organizes information into a form that is easier for people to understand. The purpose is to express the objective world as a well-structured knowledge expression. It is hailed as a booster for the next generation of artificial intelligence. The contributions of this article are as follows:
(1) To quickly obtain effective data from massive amounts of heterogeneous, decentralized, and dynamically updated data, this article proposes a method for constructing a subject knowledge graph in the field of surveying and remote sensing.
(2) To verify the application value of the knowledge graph in the field of surveying and remote sensing, this article verifies its functions through the query, visualization and reasoning of the knowledge graph.
The challenges and countermeasures encountered in the research are as follows:
(1) The connection between the mode layer and the data layer. The knowledge graph in the field of surveying and remote sensing is mainly divided into two parts: mode level construction and a data layer. The mode level is the foundation of the data layer.
Through the factual expression of the ontology library standard data layer, ontology is
the conceptual template of a structured knowledge base. The data layer is composed
of a series of knowledge entities or concepts. Knowledge is stored in units of facts.
The data layer expresses knowledge in the form of triples (entity 1-relation-entity 2)
or (entity-attribute-attribute value). Realizing the association between ontology and
data at two levels is a major challenge in constructing a knowledge graph. This article
uses the D2RQ tool to realize mapping from ontology to the database. The D2RQ tool
converts the structured data of the relational database into data in RDF format. This
mapping also provides the ability to view the existing relational data in the RDF data model, expressed in a structure and target vocabulary chosen by the mapping author. The R2RML mapping itself is an RDF graph and is recorded in Turtle syntax. R2RML supports different types of mapping implementations: a processor can provide virtual SPARQL endpoints over the mapped relational data, generate RDF dumps, or provide a Linked Data interface.
(2) Application and practice of the domain knowledge graph. This article gives an exam-
ple of the application of integrating the knowledge graph in the field of surveying
and remote sensing into the smart campus platform. The knowledge visualization
application of the domain knowledge graph on the smart campus platform can assist
teachers and students in selecting courses. The domain knowledge graph is applied
to knowledge reasoning on the smart campus platform, which can help teachers and
students discover and reason about new knowledge, as well as new relationships
between knowledge.
Knowledge in the field of surveying and remote sensing is diverse and complex, and
knowledge graphs can be studied in more detail in the following areas. Data in these
fields of study are typically images, and the recognition and acquisition of knowledge in
image data are understudied; in particular, the knowledge graph of image data lacks a
time dimension. Thus, future research should investigate how to add the time dimension
to the knowledge graph and how to extend the time dimension to the application of the
knowledge graph. If the above problems can be broken through, then the knowledge graph
in the field of surveying and remote sensing will have greater potential application value:
(1) The discovery of new rules of surveying and remote sensing: The continuous increase
in surveying and remote-sensing data and the continuous improvement of digital
management and utilization technology have provided great convenience for scientific
researchers to carry out research work. The surveying and remote-sensing knowl-
edge graph provides support for the insight and discovery of regular knowledge of
surveying and remote-sensing resources by associating a large amount of surveying
and remote sensing knowledge into a network structure. Researchers can discover
various knowledge and rules hidden behind the development process through the
analysis of surveying and remote-sensing data to provide relevant scientific research
personnel and scientific research policy makers with scientific research directions and
a policy-making basis.
(2) Application of machine learning methods in the analysis of knowledge graphs in
the field of surveying and remote sensing: From the development process of the
combination of machine learning and knowledge graphs (the knowledge graph as a
complex network; traditional machine learning methods to conduct graph mining
and analysis on the knowledge graph; further application of deep learning methods
and graph neural network methods in knowledge graphs), the value and function
of the knowledge graphs have been further embodied. The surveying and remote-
sensing domain knowledge graph is a special domain knowledge graph, and the
analysis method in the general knowledge graph is used to mine and analyse the
graph, but it cannot make full use of the structure and characteristics of the surveying
and remote-sensing domain knowledge graph. For the specific structural features
and entity attributes in the knowledge graph of surveying and remote sensing, it
is necessary to design specific machine learning methods or deep neural network structures. At the same time, for different application scenarios, different objective
functions are usually designed to learn the parameters of the algorithm. Therefore,
the study of machine learning and deep learning mining methods for the knowledge
graph in the field of surveying and remote sensing is helpful to the further analysis
and application of the knowledge graph in the field of surveying and remote sensing.
(3) Construction of the service platform of the knowledge graph in the field of surveying
and remote sensing: In the construction of the knowledge graph in the field of
surveying and remote sensing, the work efficiency is affected due to the problem of
scattered tools. The next research plan is to build a knowledge graph service platform
in the field of surveying and remote sensing to realize the integration of tools and
services. At the data source level, it integrates all kinds of open and available data in
the field of surveying and remote sensing, as well as data unique to each demand side.
Through the provided functions of surveying and remote sensing data acquisition,
data storage, ontology construction, graph construction and update, a knowledge
graph of the field of surveying and remote sensing that can be updated in time can
be constructed. In terms of services, through the provision of knowledge service
algorithms and models such as knowledge queries, knowledge visualization, and
knowledge reasoning, a service platform provides targeted knowledge services for
different roles.

Author Contributions: Conceptualization, X.H. and Z.J.; methodology, X.H.; validation, X.L., Q.L.
and R.Y.; formal analysis, Z.J.; investigation, L.L.; resources, X.H.; data curation, X.L.; writing—
original draft preparation, X.H.; writing—review and editing, L.L., L.Y.; visualization, M.S.; supervi-
sion, X.L.; project administration, Z.J.; funding acquisition, X.L. All authors have read and agreed to
the published version of the manuscript.
Funding: This research was supported by the National Key Research and Development Project
of China (No. 2016YFC0502106), National Science and Technology Major Project of China (No.
2018ZX07111002) and the National Natural Science Foundation of China (No. 41476161).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The storage URL of the structured raw data to construct the knowledge
graph is: https://github.com/hao1661282457/Knowledge-graphs.git (accessed on 25 June 2021).
Acknowledgments: We thank AJE (https://www.aje.cn/ (accessed on 25 June 2021)) for editing the
English text of this manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Li, W.; Xiao, Y.W.; Wang, W. People Entity Recognition Based on Chinese Knowledge Graph. Comput. Eng. 2017, 43, 225–231,240.
[CrossRef]
2. Liu, Q.; Li, Y.; Duan, H.; Liu, Y.; Qin, Z. Knowledge Graph Construction Techniques. J. Comput. Res. Dev. 2016, 53, 582–600.
[CrossRef]
3. Cao, Q.; Zhao, Y. The Technical Realization Process and Related Applications of Knowledge Graph. Inf. Stud. Theory Appl. 2015,
12, 127–132. [CrossRef]
4. Xu, Z.; Sheng, Y.; He, L.; Wang, Y. Review on Knowledge Graph Techniques. J. Univ. Electron. Sci. Technol. China 2016, 45, 589–606.
[CrossRef]
5. Wang, Z.; Xiong, C.; Zhang, L.; Xia, G. Accurate Annotation of Remote Sensing Images via Active Spectral Clustering with Little
Expert Knowledge. Remote Sens. 2015, 7, 15014–15045. [CrossRef]
6. Xie, R.; Luo, Z.; Wang, Y.; Chen, W. Key Techniques for Establishing Domain Specific Large Scale Knowledge Graph of Remote
Sensing Satellite. Radio Eng. 2017, 47, 1–6. [CrossRef]
7. Jiang, B.; Wan, G.; Xu, J. Geographic Knowledge Graph Building Extracted from Multi-sourced Heterogeneous Data. Acta Geod. et
Cartogr. Sin. 2018, 47, 1051–1061. [CrossRef]
8. Lu, F.; Yu, L.; Qiu, P. On Geographic Knowledge graph. J. Geo Inf. Sci. 2017, 19, 723–734. [CrossRef]

9. Zhu, L.J. Research on Information Resource Management Model Based on Domain Knowledge in World Wide Web Environment.
Ph.D. Thesis, China Agricultural University, Beijing, China, 1 June 2004.
10. Wang, L.; Wang, J.; Xu, N.; Deng, Y. Knowledge Graph-based Metro Engineering Accidents Knowledge Modeling and Analysis. J.
Civil. Eng. Manag. 2019, 36, 109–114,122. [CrossRef]
11. Wei, T.; Wang, J. Construction of Knowledge Graph based on Non-classification Relation Extraction Technology. Ind. Technol.
Innov. 2020, 37, 27–32. [CrossRef]
12. He, L. Research on Key Techniques of Entity Attribute Extraction for Unstructured Text. Master’s Thesis, Harbin University of
Science and Technology, Harbin, China, 1 June 2020.
13. Sun, J.B. Principles and Applications of Remote Sensing, 3rd ed.; Wuhan University Press: Wuhan, China, 2009; pp. 23–126.
14. Kong, X.Y.; Guo, J.; Liu, Z. Founding of Geodesy, 4th ed.; Wuhan University Press: Wuhan, China, 2005; pp. 56–89.
15. Vyas, A.; Kadakia, U.; Jat, P. Extraction of Professional Details from Web-URLs using DeepDive. Procedia Comput. Sci. 2018, 132,
1602–1610. [CrossRef]
16. Ma, H.B. Research on Construction and Application of Knowledge Graph of Enterprise Related Information for Risk Control.
Master’s Thesis, Beijing University of Technology, Beijing, China, 30 May 2019; pp. 25–27.
17. Abad, A.; Moschitti, A. Distant supervision for relation extraction using tree kernels. Appl. Clay Sci. 2015, 115, 108–114. [CrossRef]
18. Mallory, E.; Zhang, C.; Christopher, R.; Altman, R. Large-scale extraction of gene interactions from full-text literature using
DeepDive. Bioinformatics 2016, 1, 106–113. [CrossRef]
19. John, H.; Gennari, J.; Musen, M.; Fergerson, R. The Evolution of Protégé: An Environment for Knowledge-Based Systems
Development. Int. J. Hum. Comput. Stud. 2003, 58, 89–123. [CrossRef]
20. Zhang, R. Research and Analysis Based on Semantics of Rice Domain Knowledge Expression. Master’s Thesis, Hunan Agricultural
University, Hunan, China, 8 May 2007.
21. Arenas, M.; Ugarte, M. Designing a Query Language for RDF. ACM Trans. Database Syst. 2017, 42, 21.1–21.46. [CrossRef]
22. Gan, J.; Xia, Y.; Xu, T.; Zhang, X. Extension of Web Ontology Language OWL in Knowledge Representation. J. Yunnan Norm. Univ.
2005, 25, 9–14. [CrossRef]
23. Duan, X.; Wang, L.; Wang, S. A Preliminary Study on the Application of Knowledge Graphs in Professional Fields. Electron. World
2020, 4. [CrossRef]
24. Liu, J. Research on the Construction and Application of Knowledge Graph in Tourism Domain. Master’s Thesis, Zhejiang
University, Hangzhou, China, 1 June 2019.
25. Ye, S. Research on the Construction and Query Method of Knowledge Graph in Coalmine Based on Neo4j. Master’s Thesis, China
University of Mining and Technology, Xuzhou, China, 30 May 2019.
26. Zhou, W. The Construction and Application of Knowledge Graph Incorporating Causal Events. Master’s Thesis, East China
Normal University, Shanghai, China, 23 May 2019.
27. Yang, X.; Yang, M.; Yang, D.; Huang, Y. Research on Knowledge Fusion Triplets Storage Structure Based on Jena System. Value
Eng. 2018, 8, 134–137. [CrossRef]
28. Zhao, K.; Wang, H.; Shi, N.; Sa, Z.; Xu, X. Study and Implementation on Knowledge Graph of Guizhi Decoction Associated
Formulas Based on Neo4j. World Chin. Med. 2019, 14, 2636–2646. [CrossRef]
29. Zhang, Z. Research on the Parsing of Graph Database Query Language Cypher. Master’s Thesis, Huazhong University of Science
and Technology, Wuhan, China, 8 May 2018.
30. Zhou, Z.; Xue, D.; Xin, X.; Yang, L. A Construction Method of Scientific Knowledge Graph Based on the Degree of Inter-
disciplinary Association. In Proceedings of the 2011 Fall Academic Conference of Chinese Physical Society, Hangzhou, China, 15
September 2011.
31. Chen, B.; Li, G.; Zhang, J.; Li, J. Framework design of SWRL-based Reasoning Mechanism. Comput. Eng. Des. 2010, 31, 847–849,853.
[CrossRef]
32. Wang, W.G. Knowledge Graph Reasoning: Modern Methods and Applications. Big Data Res. 2021, 1, 1–24. Available online:
https://kns.cnki.net/kcms/detail/10.1321.G2.20210331.1811.0O4.html (accessed on 9 May 2021).
