Semantic Web & Social Networks Material
[R15A0535]
LECTURE NOTES
(2019-20)
DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
TEXT BOOKS:
1. Thinking on the Web: Berners-Lee, Gödel and Turing, H. Peter Alesso and Craig F. Smith, Wiley Interscience, 2008.
2. Social Networks and the Semantic Web, Peter Mika, Springer, 2007.
REFERENCE BOOKS:
1. Semantic Web Technologies, Trends and Research in Ontology Based Systems,
J.Davies, R.Studer, P.Warren, John Wiley & Sons.
2. Semantic Web and Semantic Web Services, Liyang Yu, Chapman and Hall/CRC
Publishers (Taylor & Francis Group).
3. Information Sharing on the Semantic Web, Heiner Stuckenschmidt and Frank van
Harmelen, Springer Publications.
4. Programming the Semantic Web, T.Segaran, C.Evans, J.Taylor, O’Reilly, SPD.
Course Objectives & Outcomes
Course Objectives
• To learn Web Intelligence
• To learn Knowledge Representation for the Semantic Web
• To learn Ontology Engineering
• To learn Semantic Web Applications, Services and Technology
• To learn Social Network Analysis and the Semantic Web
• To understand the role of ontology and inference engines in semantic web
• To explain the analysis of the social Web and the design of a new class of
applications that combine human intelligence with machine processing.
• To describe how the Semantic Web provides the key in aggregating information
across heterogeneous sources.
• To understand the benefits of the Semantic Web by incorporating user-generated
metadata and other clues left behind by users.
Course Outcomes
• Ability to understand knowledge representation for the Semantic Web
• Ability to create ontologies
• Ability to build blogs and social networks
• Understand the basics of the Semantic Web and Social Networks.
• Understand electronic sources for network analysis and different ontology
languages.
• Model and aggregate social network data.
• Develop social-semantic applications.
• Evaluate Web-based social networks and ontologies.
Lecture Notes
UNIT - I
Thinking and Intelligent Web Applications, The Information Age, The World Wide Web, Limitations
of Today's Web, The Next Generation Web, Machine Intelligence, Artificial Intelligence, Ontology,
Inference Engines, Software Agents, Berners-Lee's WWW, Semantic Road Map, Logic on the Semantic
Web.
THINKING AND INTELLIGENT WEB APPLICATIONS
The meaning of the term "thinking" must be defined in the context of intelligent applications
on the World Wide Web, as it is frequently loosely defined and ambiguously applied.
In general, thinking can be a complex process that uses concepts, their interrelationships, and
inference or deduction, to produce new knowledge. However, thinking is often used to describe
such disparate acts as memory recall, arithmetic calculations, creating stories, decision making,
puzzle solving, and so on.
The term "intelligence" can be applied to nonhuman entities, as we do in the field of Artificial
Intelligence (AI), but frequently we mean something somewhat different than in the case of
human intelligence. For example, a person who performs difficult arithmetic calculations quickly
and accurately would be considered intelligent, whereas a computer that could perform the same
calculations faster and with greater accuracy would not be considered intelligent.
Human thinking involves complicated interactions within the biological components of the brain,
and the process of learning is also an important element of human intelligence.
Software applications can perform tasks that are sufficiently complex and human-like that the term
"intelligent" may be appropriate. Artificial Intelligence (AI) is the science of
machines simulating intelligent behavior. The concept of an intelligent application on the World
Wide Web takes advantage of AI technologies to enhance applications and make
them behave in more intelligent ways.
Here, the question arises of what constitutes Web intelligence, or intelligent software applications on the World
Wide Web. The World Wide Web can be described as an interconnected network of networks.
The present day Web consists not only of the interconnected networks, servers, and clients, but
also the multimedia hypertext representation of vast quantities of information distributed over an
immense global collection of electronic devices with software services being provided over the
Web.
The current Web consists of static data representations that are designed for direct human access
and use.
THE INFORMATION AGE:
We are accustomed to living in a world that is rapidly changing. This is true in all aspects of our
society and culture, but is especially true in the field of information technology. It is common to
observe such rapid change and comment simply that "things change."
Over the past decades, human beings have experienced two global revolutionary changes: the
Agricultural Revolution and the Industrial Revolution. Each revolutionary change not only
enhanced access to human resources but also freed individuals to achieve higher level
cultural and social goals.
In addition, over the past half century, the technological inventions of the Information Age may in
fact be of such scope as to represent a third revolutionary change i.e., the Information
Revolution.
The question of whether the rapidly changing world of the Information Age should be considered a
global revolutionary change on the scale of the earlier revolutions can be addressed by comparing it
with the changes associated with the Agricultural and Industrial Revolutions.
Before the Agricultural Revolution, human beings moved to warmer regions in the winter season and
back to colder regions in the summer. Human beings were able to migrate to all locations
on the earth because of the flexibility of the human species and its capability to create adaptable
human cultures.
These adaptable human cultures survived and thrived in every environmental niche on the planet
by fishing, herding, and foraging.
Human beings began to stay permanently in a single location as soon as they discovered the
possibility of cultivating crops. The major implication of a nonmigratory lifestyle is that a
small portion of land could be exploited intensively for long periods of time. Another implication
is that agricultural communities concentrated their activities into one or two cycle periods associated
with growing and harvesting the crops. This new lifestyle allowed individuals to save their
resources and spend them on other activities. In addition, it created a great focus on the primary
necessities of planting, nurturing, and harvesting the crops. The individual became very conscious
of time. Apart from this, they became reliant on the following:
1. Special skills and knowledge associated with agricultural production.
2. Storage and protection of food supplies.
3. Distribution of products within the community to ensure adequate sustenance.
4. Sufficient seed for the next life cycle's planting.
This lifestyle is very different from hunter-gatherer lifestyles.
The agricultural revolution slowly moved across villages and regions introducing land cultivation
as well as a new way of life.
During the Agricultural Revolution, human and animal muscle produced the energy
required to run the economy. At the time of the French Revolution, millions of
horses and oxen still produced the power required to run the economy.
THE WORLD WIDE WEB
The WWW project was initiated by CERN (the European Laboratory for Particle Physics) to create a
system to handle the distributed resources necessary for scientific research. The WWW today is a
distributed client-server service, in which a client using a browser can access a service on a
server. However, the service provided is distributed over many locations called Web sites.
The web consists of many web pages that incorporate text, graphics, sound, animation and other
multimedia components. These web pages are connected to one another by hypertext. In a
hypertext environment the information is stored using the concept of pointers. The WWW uses
HTTP, which allows communication between a web browser and a web server. Web pages are
created using HTML. This language has commands that tell the web browser how to display text,
graphics, and multimedia files. HTML also has commands through which we can create links to
other web pages.
Working of a web:
Web page is a document available on World Wide Web. Web Pages are stored on web server and
can be viewed using a web browser.
The WWW works on a client-server approach. The following steps explain how the Web works:
1. The user enters the URL (say, http://www.mrcet.ac.in) of the web page in the address bar of the web
browser.
2. The browser then requests the Domain Name Server (DNS) for the IP address corresponding to
www.mrcet.ac.in.
3. After receiving the IP address, the browser sends an HTTP request to the web server at that address.
4. The web server returns the requested web page.
5. The browser interprets the HTML and displays the page to the user.
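These steps can be sketched with a few lines of Python using only the standard library; the URL is
the example above and the printed output is purely illustrative.

# Minimal sketch of steps 1-5 using only the Python standard library.
import socket
import urllib.request

url = "http://www.mrcet.ac.in"
host = "www.mrcet.ac.in"

# Step 2: ask the Domain Name Server for the IP address of the host.
ip_address = socket.gethostbyname(host)
print("IP address of", host, "is", ip_address)

# Steps 3-5: send an HTTP request, receive the page, and inspect it.
with urllib.request.urlopen(url, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")
print("Received", len(html), "characters of HTML")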
ARPANET
Licklider, a psychologist and computer scientist, put forward the idea in 1960 of a network of
computers connected together by "wide-band communication lines" through which they could
share data and information storage.
Licklider was hired as the head of computer research by the Defense Advanced Research Projects
Agency (DARPA), and his small idea took off.
The first ARPANET link was made on October 29, 1969, between the University of California, Los
Angeles and the Stanford Research Institute. Only two letters were sent before the system crashed, but that
was all the encouragement the computer researchers needed. The ARPANET became a high-
speed digital post office as people used it to collaborate on research projects. It was a distributed
system of "many-to-many" connections.
Robert Kahn of DARPA and Vinton Cerf of Stanford University worked together on a solution,
and in 1977, the internet protocol suite was used to seamlessly link three different networks.
The mid-1980s marked a boom in the personal computer and superminicomputer industries. The
combination of inexpensive desktop machines and powerful, network-ready servers allowed
many companies to join the Internet for the first time. Corporations began to use the Internet to
communicate with each other and with their customers.
Three events drove the explosive growth of the Web: the introduction of the World Wide Web itself,
the widespread availability of the graphical browser, and the unleashing of commercialization.
The World Wide Web was created by Tim Berners-Lee in 1989 at
CERN in Geneva and released to the world in 1991. It came into existence as a proposal by him to allow
researchers to work together effectively and efficiently at CERN. Eventually it became the World
Wide Web.
The following diagram briefly outlines the evolution of the World Wide Web:
The Web combined words, pictures, and sounds on Internet pages and programmers saw the
potential for publishing information in a way that could be as easy as using a word processor, but
with the richness of multimedia.
Berners-Lee and his collaborators laid the groundwork for the open standards of the Web. Their
efforts included the Hypertext Transfer Protocol (HTTP) for linking Web documents, the Hypertext
Markup Language (HTML) for formatting Web documents, and the Uniform Resource Locator
(URL) system for addressing Web documents.
The primary language for formatting Web pages is HTML. With HTML the author describes
what a page should look like, what types of fonts to use, what color the text should be, where
paragraph marks come, and many more aspects of the document. All HTML documents are
created by using tags.
In 1993, Marc Andreessen and a group of student programmers at NCSA (the National Center for
Supercomputing Applications, located on the campus of the University of Illinois at Urbana-
Champaign) developed a graphical browser for the World Wide Web called Mosaic, which he
later reinvented commercially as Netscape Navigator.
WWW Architecture
WWW architecture is divided into several layers as shown in the following diagram:
DATA INTERCHANGE
The Resource Description Framework (RDF) defines the core representation of
data for the Web. RDF represents data about a resource in graph form.
TAXONOMIES
RDF Schema (RDFS) allows a more standardized description of taxonomies and
other ontological constructs.
ONTOLOGIES
Web Ontology Language (OWL) offers more constructs over RDFS. It comes in the following
three versions:
OWL Lite for taxonomies and simple constraints.
OWL DL for full description logic support.
OWL Full for more syntactic freedom of RDF.
RULES
RIF and SWRL offer rules beyond the constructs that are available
from RDFS and OWL. Simple Protocol and RDF Query Language (SPARQL) is an SQL-like
language used for querying RDF data and OWL ontologies.
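As an illustration, the following sketch builds a two-triple RDF graph and queries it with SPARQL.
It assumes the rdflib Python library is installed, and the example URIs are made up for illustration.

# Sketch: querying a tiny RDF graph with SPARQL (assumes rdflib).
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.John, EX.owns, EX.computer))
g.add((EX.computer, EX.make, Literal("Apple")))

query = """
PREFIX ex: <http://example.org/>
SELECT ?who ?what
WHERE {
    ?who ex:owns ex:computer .
    ex:computer ex:make ?what .
}
"""
for who, what in g.query(query):
    print(who, what)   # -> http://example.org/John  Apple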
PROOF
All the semantics and rules executed at the layers below Proof, together with their results, are used to
prove deductions.
CRYPTOGRAPHY
Cryptographic means, such as digital signatures, are used for verification of the origin of sources.
LIMITATIONS OF TODAY'S WEB
2. The web today does not have the ability of machine understanding and processing of web-
based information.
3. Web services on today's web involve human assistance and rely on the interoperation and
inefficient exchange of two competing proprietary server frameworks.
4. The web is characterized by textual data augmented by pictorial and audio-visual additions.
5. The web today is limited to manual keyword searches, as HTML does not have the ability to be
exploited by information retrieval techniques.
Semantic Web agents could utilize metadata, ontologies, and logic to carry out their tasks. Agents
are pieces of software that work autonomously and proactively on the Web to perform certain
tasks. In most cases, agents will simply collect and organize information. Agents on the Semantic
Web will receive some tasks to perform and seek information from Web resources, while
communicating with other Web agents, in order to fulfill their tasks.
ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science
that aims to create it. AI textbooks define the field as "the study and design of intelligent
agents" where an intelligent agent is a system that perceives its environment and takes
actions that maximize its chances of success. John McCarthy, who coined the term in 1955,
defines it as "the science and engineering of making intelligent machines." Intelligent agent:
Programs, used extensively on the Web, that perform tasks such as retrieving and delivering
information and automating repetition More than 50 companies are currently developing
intelligent agent software or services, including Firefly and WiseWire.
Agents are designed to make computing easier. Currently they are used as Web browsers, news
retrieval mechanisms, and shopping assistants. By specifying certain parameters, agents will
"search" the Internet and return the results directly back to your PC.
Branches of AI
Here's a list, but some branches are surely missing, because no-one has identified them yet.
Logical AI
What a program knows about the world in general, the facts of the specific situation in which it
must act, and its goals are all represented by sentences of some mathematical logical language.
Applications of AI
Game playing
You can buy machines that can play master level chess for a few hundred dollars. There is some
AI in them, but they play well against people mainly through brute force computation--looking at
hundreds of thousands of positions. To beat a world champion by brute force and known reliable
heuristics requires being able to look at 200 million positions per second.
Speech recognition
In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus
United Airlines has replaced its keyboard tree for flight information by a system using speech
recognition of flight numbers and city names. It is quite convenient. On the other hand, while
it is possible to instruct some computers using speech, most users have gone back to the keyboard
and the mouse as still more convenient.
Understanding natural language
Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough
either. The computer has to be provided with an understanding of the domain the text is about,
and this is presently possible only for very limited domains.
Computer vision
The world is composed of three-dimensional objects, but the inputs to the human eye and
computers' TV cameras are two dimensional. Some useful programs can work solely in two
dimensions, but full computer vision requires partial three-dimensional information that is not just
a set of two-dimensional views. At present there are only limited ways of representing three-
dimensional information directly, and they are not as good as what humans evidently use.
Expert systems
A "knowledge engineer'' interviews experts in a certain domain and tries to embody their
knowledge in a computer program for carrying out some task. How well this works depends on
whether the intellectual mechanisms required for the task are within the present state of AI. When
this turned out not to be so, there were many disappointing results.
One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the
blood and suggested treatments. It did better than medical students or practicing doctors, provided
its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments
and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its
interactions depended on a single patient being considered. Since the experts consulted by the
knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the
knowledge engineers forced what the experts told them into a predetermined framework. In the
present state of AI, this has to be true. The usefulness of current expert systems depends on their
users having common sense.
Heuristic classification
One of the most feasible kinds of expert system given the present knowledge of AI is to put some
information in one of a fixed set of categories using several sources of information. An example
is advising whether to accept a proposed credit card purchase. Information is available about the
owner of the credit card, his record of payment and also about the item he is buying and about the
establishment from which he is buying it (e.g., about whether there have been previous credit card
frauds at this establishment).
KEY APPLICATIONS
Ontologies are part of the W3C standards stack for the Semantic Web, in which they are used to
specify standard conceptual vocabularies in which to exchange data among systems, provide
services for answering queries, publish reusable knowledge bases, and offer services to facilitate
interoperability across multiple, heterogeneous systems and databases.
The key role of ontologies with respect to database systems is to specify a data modeling
representation at a level of abstraction above specific database designs (logical or physical), so
that data can be exported, translated, queried, and unified across independently developed
systems and services. Successful applications to date include database interoperability, cross
database search, and the integration of web services.
INFERENCE ENGINE
Inference means a conclusion reached on the basis of evidence and reasoning.
In computer science, and specifically the branches of knowledge engineering and artificial
intelligence, an inference engine is a “computer program that tries to derive answers from a
knowledge base”. It is the "brain" that expert systems use to reason about the information in the
knowledge base for the ultimate purpose of formulating new conclusions. Inference engines are
considered to be a special case of reasoning engines, which can use more general methods of
reasoning.
Architecture
The separation of the inference engine as a distinct software component stems from the typical
production system architecture, which relies on a data store (working memory) and three components:
1. An interpreter. The interpreter executes the chosen agenda items by applying the
corresponding base rules.
2. A scheduler. The scheduler maintains control over the agenda by estimating the effects of
applying inference rules in light of item priorities or other criteria on the agenda.
3. A consistency enforcer. The consistency enforcer attempts to maintain a consistent
representation of the emerging solution.
Logic:
In logic, a rule of inference, inference rule, or transformation rule is the act of drawing a
conclusion based on the form of premises interpreted as a function which takes premises, analyses
their syntax, and returns a conclusion (or conclusions). For example, the rule of inference modus
ponens takes two premises, one in the form of "If p then q" and another in the form of "p" and
returns the conclusion "q". Popular rules of inference include modus ponens, modus tollens from
propositional logic and contraposition.
Expert System
In artificial intelligence, an expert system is a computer system that emulates the decision-
making ability of a human expert. Expert systems are designed to solve complex problems by
reasoning about knowledge, like an expert, and not by following the procedure of a developer as
is the case in conventional programming.
SOFTWARE AGENT
In computer science, a software agent is a software program that acts for a user or other
program in a relationship of agency, which derives from the Latin agere (to do): an agreement
to act on one's behalf.
The basic attributes of a software agent are that
• Agents are not strictly invoked for a task, but activate themselves,
• Agents may reside in wait status on a host, perceiving context,
• Agents may get to run status on a host upon starting conditions,
• Agents do not require user interaction,
• Agents may invoke other tasks including communication.
Various authors have proposed different definitions of agents; these commonly include concepts
such as
Persistence (code is not executed on demand but runs continuously and decides for itself
when it should perform some activity)
2. Ontology
Ontology is an agreement between software agents that exchange information. Such an agreement
provides the structure needed to interpret the exchanged data and a shared vocabulary
to be used in the exchanges.
Using an ontology, agents can exchange information, and new information can be inferred by applying
and extending the logical rules present in the ontology.
An ontology that is complex enough to be useful for complex exchanges of information will
suffer from the possibility of logical inconsistencies. This is considered a basic consequence of
the insights of Gödel's incompleteness theorem.
UNIT - II
Knowledge Representation for the Semantic Web
Ontologies and their role in the Semantic Web, Ontology Languages for the Semantic Web –
Resource Description Framework (RDF) / RDF Schema, Ontology Web Language (OWL), UML,
XML/XML Schema.
The first point underlines that an ontology needs to be modelled using languages with a formal
semantics; such languages include RDF and OWL. These are the languages most frequently
used on the Semantic Web, and models described in them are what is usually meant by the
term ontology.
This understanding represents an agreement among members of the community over the concepts
and relationships that are present in a domain and their usage.
RDF and OWL, the ontology languages, have standardized syntaxes and logic-based formal
semantics. RDF and OWL are the languages most commonly used on the Semantic Web, and in
fact when using the term ontology many practitioners refer to domain models described in one of
these two languages. The second point reminds us that there is no such thing as a
"personal ontology". For example, the schema of a database or a UML class diagram that we
have created for the design of our own application is not an ontology. It is a conceptual model of
a domain, but it is not shared: there is no commitment toward this schema from anyone else but
us.
The simplest structures are glossaries or controlled vocabularies, in essence an agreement on the
meaning of a set of terms.
Semantic networks are essentially graphs that show also how terms are related to each other.
Thesauri are richer structures in that they describe a hierarchy between concepts and typically
also allow describing related terms and aliases. Thesauri are also the simplest structures where
logic-based reasoning can be applied: the broader/narrower relationships of these hierarchies are
transitive, in that an item that belongs to a narrower category also belongs to its direct parent and
all of its ancestors.
In practice, the most common Web ontologies are all lightweight ontologies because they must serve
the needs of many applications with divergent goals. Widely shared Web ontologies also
tend to be small, as they contain only the terms that are agreed on by a broad user base. Large,
heavyweight ontologies are more commonly found in targeted expert systems used in focused
domains with a tradition of formalized processes and vocabularies, such as the areas of life
sciences and engineering.
Ontologies and ontology languages for the SemanticWeb:
Although the notion of ontologies is independent of the Web, ontologies play a special role in the
architecture of the Semantic Web.
For example, an RDF statement might relate the object (x), the book, to the object (y), Gödel, Escher,
Bach: An Eternal Golden Braid.
Think of a collection of interrelated RDF statements represented as a graph of interconnected
nodes. The nodes are connected via various relationships. For example, let us say each node
represents a person. Each person might be related to another person because they are siblings,
parents, spouses, friends, or employees.
Each interconnection is labeled with the relationship name.
The RDF is used in this manner to describe these relationships. It does not actually include the
nodes directly, but it does indirectly since the relationships point to the nodes. At any time, we
could introduce a new node, such as a newborn child, and all that is needed is for us to add the
appropriate relationship for the two parents.
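A small sketch of this idea in Python: the graph is held purely as (subject, relationship, object)
triples, and introducing a newborn child only means adding the relationships that point to the new
node (all names are illustrative).

# People and their relationships stored as RDF-style triples.
triples = {
    ("Alice", "spouseOf", "Bob"),
    ("Alice", "friendOf", "Carol"),
    ("Bob", "employeeOf", "Acme"),
}

# A newborn child is introduced simply by adding the relationships
# that point to the new node; nothing else has to change.
triples.add(("Alice", "parentOf", "Dave"))
triples.add(("Bob", "parentOf", "Dave"))

# Who are Dave's parents?  Walk the triples and match the pattern.
parents = [s for (s, p, o) in triples if p == "parentOf" and o == "Dave"]
print(parents)   # -> ['Alice', 'Bob'] (order may vary)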
BASIC ELEMENTS
Most of the elements of RDF concern classes, properties, and instances of classes.
Syntax
Both RDF and RDF Schema (RDFS) use XML-based syntax.
The RDF system provides a means of describing the relationships among resources in terms of
named properties and values. Since RDF and XML were developed about the same time, RDF
was defined as an excellent complement to XML. Encoding RDF triples in XML makes an
object portable across platforms and interoperable among applications. Because RDF data can be
expressed using XML syntax, it can be passed over the Web as a document and parsed using
existing XML-based software. This combination of RDF and XML enables individuals or
programs to locate, retrieve, process, store, or manage the information objects that comprise a
Semantic Web site.
Header
An RDF Document looks very much like all XML documents in terms of elements, tags, and
namespaces. An RDF document starts with a header including the root element as an “rdf:RDF”
element that also specifies a number of namespaces. It then defines properties and classes.
Table: RDF document parts (header, XML syntax, root element, namespace, the RDF
triple, and the end element)
Namespaces
The namespace mechanism of XML is also used in RDF. However, in XML, namespaces are only
used to remove ambiguities. In RDF, external namespaces are expected to be RDF documents
defining resources, which are used to import RDF documents.
To add a namespace to an RDF document, a namespace attribute can be added anywhere in the
document, but is usually added to the RDF tag itself. The namespace declaration for RDF
vocabularies usually points to a URI of the RDF Schema document for the vocabulary. We can
add a namespace as:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
Note, the prefix for the RDF syntax is given as "rdf," the RDF Schema is given as "rdfs," and
the Dublin Core schema (a special publication ontology) is given as "dc." DC is a well-
established RDF vocabulary for publications.
Description
The "rdf:about" attribute of the element "rdf:Description" is equivalent to an ID
attribute, but it is often used to suggest that the object may be defined elsewhere. A set of RDF
statements forms a large graph relating things to other things through their properties. The contents
of "rdf:Description" elements are called property elements. The "rdf:resource" attribute and
the "rdf:type" element introduce structure to the RDF document.
While RDF is required to be well formed, it does not require XML-style validation. The RDF
parsers do not use Document Type Definitions (DTDs) or XML Schema to ensure that the RDF is
valid.
Data Types
Sometimes it is useful to be able to identify what kind of thing a resource is, much like how
object-oriented systems use classes. The RDF system uses a type for this purpose. While there are
two very general types, a resource and a literal, every resource may be given a precise type.
For example, the resource "John" might be given a type of "Person." The value of the
type should be another resource, which means that more information could be associated with
the type itself.
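Putting these parts together, the sketch below shows a small, hypothetical RDF/XML document
(header, namespaces, an rdf:Description with rdf:about, and an rdf:type) and parses it into triples;
it assumes the rdflib Python library is available.

# Sketch: parsing a small RDF/XML document with rdflib (illustrative URIs).
from rdflib import Graph

rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/terms#">
  <rdf:Description rdf:about="http://example.org/people/John">
    <rdf:type rdf:resource="http://example.org/terms#Person"/>
    <ex:name>John</ex:name>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=rdf_xml, format="xml")

# Each parsed statement is a (subject, predicate, object) triple.
for s, p, o in g:
    print(s, p, o)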
1. quadrilaterals(X)→ polygons(X)
2. polygons(X) → shapes(X)
3. quadrilaterals (squares)
And now from this knowledge the following conclusions can be deduced:
1. polygons (squares)
2. shapes (squares)
3. quadrilateral(X)→ shapes(X)
The hierarchy relationship of classes is shown in Figure
OWL DL
Web Ontology Language DL (Description Logic) is a sublanguage of OWL Full that restricts how
the constructors from OWL and RDF can be used. This ensures that the language is related to
description logic. Description Logics are a decidable fragment of First-Order Logic (FOL).
The OWL DL supports strong expressiveness while retaining computational completeness and
decidability. It is therefore possible to automatically compute the classification hierarchy and
check for inconsistencies in an ontology that conforms to OWL DL.
The advantage of this sublanguage is efficient reasoning support. The disadvantage is the loss of
full compatibility with RDF. However, every legal OWL DL document is a legal RDF document.
OWL Lite
Further restricting OWL DL produces a subset of the language called OWL Lite, which excludes
enumerated classes, disjointness statements, and arbitrary cardinality. The OWL Lite supports a
classification hierarchy and simple constraints.
Header
An OWL document contains an OWL ontology and is an RDF document with elements, tags,
and namespaces. An OWL document starts with a header that identifies the root element as an
rdf:RDF element, which also specifies a number of namespaces.
Class Elements
Classes are defined using an owl:Class element. An example of an OWL class
―computer‖ is defined with a subclass ―laptop‖ as
<owl:Class rdf:ID="Computer">
<rdfs:subClassOf rdf:resource="#laptop"/></owl:Class>
Equivalence of classes is defined with owl:equivelentClass.
Property
A property in RDF provides information about the entity it is describing. Property characteristics
increase our ability to understand the inferred information within the data.
The following special identifiers can be used to provide information concerning properties
and their values:
• inverseOf: One property may be stated to be the inverse of another property.
• TransitiveProperty: Properties may be stated to be transitive.
• SymmetricProperty: Properties may be stated to be symmetric.
• FunctionalProperty: Properties may be stated to have a unique value.
• InverseFunctionalProperty: Properties may be stated to be inverse functional.
The OWL Lite allows restrictions to be placed on how properties can be used by instances of a
class.
• allValuesFrom: The restriction allValuesFrom is stated on a property with respect to a class.
• someValuesFrom: The restriction someValuesFrom is stated on a property with respect to a
class. A particular class may have a restriction on a property that at least one value for that
property is of a certain type.
• minCardinality: Cardinality is stated on a property with respect to a particular class. If a
minCardinality of 1 is stated on a property with respect to a class, then any instance of that
class will be related to at least one individual by that property.
• maxCardinality: Cardinality is stated on a property with respect to a particular class. If a
maxCardinality of 1 is stated on a property with respect to a class, then any instance of that
class will be related to at most one.
• cardinality: Cardinality is provided as a convenience when it is useful to state that a property
on a class has both minCardinality 0 and maxCardinality 0 or both minCardinality 1 and
maxCardinality 1.
• intersectionOf: OWL Lite allows intersections of named classes and restrictions.
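The plain-Python sketch below (with made-up facts) illustrates what some of these characteristics
license an agent to infer: a transitive property, a symmetric property, and an inverseOf declaration
each allow new triples to be added mechanically.

# Illustrative sketch of inference from OWL property characteristics.
facts = {
    ("Computing", "broaderThan", "AI"),
    ("AI", "broaderThan", "Ontology"),
    ("Alice", "colleagueOf", "Bob"),
    ("Computer", "hasPart", "CPU"),
}
transitive = {"broaderThan"}           # declared owl:TransitiveProperty
symmetric = {"colleagueOf"}            # declared owl:SymmetricProperty
inverse = {"hasPart": "partOf"}        # declared owl:inverseOf

changed = True
while changed:                          # apply the rules until a fixpoint
    changed = False
    new = set()
    for s, p, o in facts:
        if p in symmetric:
            new.add((o, p, s))
        if p in inverse:
            new.add((o, inverse[p], s))
        if p in transitive:
            for s2, p2, o2 in facts:
                if p2 == p and s2 == o:
                    new.add((s, p, o2))
    if not new <= facts:
        facts |= new
        changed = True

print(("Computing", "broaderThan", "Ontology") in facts)   # True
print(("Bob", "colleagueOf", "Alice") in facts)            # True
print(("CPU", "partOf", "Computer") in facts)              # True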
RDF has an advantage for Web-based data exchange: agreement on a shared XML format
requires a stronger commitment than the agreement made by using RDF. When agreeing to
exchange RDF documents, the parties only need to accept the individual statements, that is, the
simple subject, predicate, and object model.
UNIT - III
Ontology Engineering, Constructing Ontology, Ontology Development Tools, Ontology Methods,
Ontology Sharing and Merging, Ontology Libraries and Ontology Mapping, Logic, Rule and
Inference Engines.
CONSTRUCTING ONTOLOGY:
Ontology permits sharing common understanding of the structure of information among people
and software agents. Since there is no unique model for a particular domain, ontology
development is best achieved through an iterative process. Objects and their relationships reflect
the basic concepts within an ontology.
An iterative approach for building ontologies starts with a rough first pass through the main
processes as follows:
• First, set the scope. The development of an ontology should start by defining its domain and
scope.
Several basic questions are helpful at this point:
What will the ontology cover?
How will the ontology be used?
What questions does the ontology answer?
Who will use and maintain the ontology?
The answers may change as we proceed, but they help limit the scope of the model.
• Second, evaluate reuse. Check to see if existing ontologies can be refined and extended.
Reusing existing ontologies will help to interact with other applications and vocabularies. Many
knowledge-representation systems can import and export ontologies directly for reuse.
• Third, enumerate terms. It is useful to list all terms, what they address, & what properties they
have. Initially, a comprehensive list of terms is useful without regard for overlapping concepts.
Nouns can form the basis for class names & verbs can form the basis for property names.
• Fourth, define the taxonomy. There are several possible approaches in developing a class
hierarchy: a top-down process starts by defining general concepts in the domain. A bottom-up
development process starts with the definition of the most specific classes, the leaves of the
hierarchy, with subsequent grouping of these classes into more general concepts.
• Fifth, define properties. The classes alone will not provide enough information to answer
questions. We must also describe the internal structure of concepts. While attaching properties
to classes one should establish the domain and range. Property constraints (facets) describe or
limit the set of possible values for a frame slot.
• Sixth, define facets. Up to this point the ontology resembles an RDFS ontology without any primitives
from OWL. In this step, the properties add cardinality, values, and characteristics that will
enrich their definitions.
• Seventh, the slots can have different facets describing the value type, allowed values, the
number of the values (cardinality), and other features of the values.
Slot cardinality: the number of values a slot has. Slot value type: the type of values a slot has.
Minimum and maximum value: a range of values for a numeric slot.
Default value: the value a slot has unless explicitly specified otherwise.
• Eighth, define instances. The next step is to create individual instances of classes in the
hierarchy.
• Finally, check for anomalies. The Web-Ontology Language allows the possibility of detecting
inconsistencies within the ontology. Anomalies, such as incompatible domain and range
definitions for transitive, symmetric, or inverse properties may occur.
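As an illustration of what these steps produce, the sketch below writes out a tiny, hypothetical
computer-domain ontology as plain Python data: classes, a taxonomy, properties with domains,
ranges, and facets, and one instance, followed by a trivial facet check.

# Hypothetical mini-ontology expressed as plain Python data (illustration only).
ontology = {
    "classes": ["Computer", "Laptop", "Manufacturer"],
    "taxonomy": {"Laptop": "Computer"},            # Laptop is-a Computer
    "properties": {
        "madeBy": {
            "domain": "Computer",
            "range": "Manufacturer",
            "facets": {"cardinality": 1},          # exactly one manufacturer
        },
        "ramInGB": {
            "domain": "Computer",
            "range": "integer",
            "facets": {"min": 1, "default": 8},
        },
    },
    "instances": {
        "myLaptop": {"type": "Laptop", "madeBy": "Apple", "ramInGB": 16},
    },
}

# A trivial anomaly check: an instance value must respect the property facets.
ram = ontology["instances"]["myLaptop"]["ramInGB"]
assert ram >= ontology["properties"]["ramInGB"]["facets"]["min"]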
ONTOLOGY METHODS
Several approaches for developing ontologies have been attempted in the last two decades. In
1990, Lenat and Guha proposed the general process steps. In 1995, the first guidelines were
proposed on the basis of the Enterprise Ontology and the TOVE (TOronto Virtual Enterprise)
project. A few years later, the On-To-Knowledge methodology was developed.
The Cyc Knowledge Base (see http://www.cyc.com/) was designed to accommodate all of human
knowledge and contains about 100,000 concept types used in the rules and facts encoded in its
knowledge base. The method used to build the Cyc consisted of three phases.
The first phase manually codified articles and pieces of knowledge containing common sense
knowledge implicit in different sources.
The second and third phase consisted of acquiring new common sense knowledge using natural
language or machine learning tools.
The Electronic Dictionary Research (EDR) project in Japan has developed a dictionary with
over 400,000 concepts, with their mappings to both English and Japanese words. Although the
EDR project has many more concepts than Cyc, it does not provide as much detail for each one
(see http://www.iijnet.or.jp/edr/).
WordNet is a hierarchy of 166,000 word form and sense pairs. WordNet does not have as much
detail as Cyc or as broad coverage as EDR, but it is the most widely used ontology for natural
language processing.
ONTOLOGY LIBRARIES
Scientists should be able to access a global, distributed knowledge base of scientific data that
appears to be integrated, locally available, and is easy to search.
Data is obtained by multiple instruments, using various protocols in differing vocabularies using
assumptions that may be inconsistent, incomplete, evolving, and distributed. Currently, there are
existing ontology libraries including
• DAML ontology library (www.daml.org/ontologies).
• Ontolingua ontology library (www.ksl.stanford.edu/software/ontolingua/).
• Prot´eg´e ontology library (protege.stanford.edu/plugins.html).
Available upper ontologies include
• IEEE Standard Upper Ontology (suo.ieee.org).
• Cyc (www.cyc.com).
Available general ontologies include
• Open Directory Project (www.dmoz.org).
• WordNet (www.cogsci.princeton.edu/ wn/).
• Domain-specific ontologies.
• UMLS Semantic Net.
• GO (Gene Ontology) (www.geneontology.org).
• Chemical Markup Language, CML.
ONTOLOGY MATCHING
Ontology provides a vocabulary and specification of the meaning of objects that encompasses
several conceptual models: including classifications, databases, and axiom theories. However, in
the open Semantic Web environment different ontologies may be defined.
Ontology matching finds correspondences between ontology objects. Its applications include ontology
merging, query answering, and data translation. Thus, ontology matching enables data to
interoperate.
Today ontology matching is still largely labor-intensive and error-prone. As a result, manual
matching has become a key bottleneck.
String Matching
String matching can help in processing ontology matching. String matching is used in text
processing, information retrieval, and pattern matching. There are many string matching methods
including "edit distance" for measuring the similarities of two strings.
Let us consider two strings, S1 and S2.
Using a limited number of character edit operations (insertions, deletions, and substitutions), S1
can be transformed into S2 by an edit sequence. The edit distance is the weight (for example, the
number of operations) of the cheapest such edit sequence.
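A minimal Python sketch of this computation, where each insertion, deletion, and substitution has
weight 1 (the classic Levenshtein formulation):

def edit_distance(s1: str, s2: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    needed to transform s1 into s2 (each operation has weight 1)."""
    m, n = len(s1), len(s2)
    # dist[i][j] = edit distance between s1[:i] and s2[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[m][n]

# Similar ontology labels get a small distance, dissimilar ones a large one.
print(edit_distance("hasAuthor", "hasAuthors"))   # 1
print(edit_distance("too", "to"))                 # 1, yet the meanings differ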
The existing ontology files on the Web (e.g., http://www.daml.org/ontologies) show that people
usually use similar elements to build ontologies, although the complexity and terminology may be
different. This is because there are established names and properties to describe a concept. The
value of string matching lies in its utility to estimate the lexical similarity.
However, we also need to consider the real meaning of the words and their context.
In addition, some words are similar in spelling while they have different
meanings, such as "too" and "to." Hence, it is not enough to use only string matching.
ONTOLOGY MAPPING
Ontology mapping enables interoperability among different sources in the Semantic Web. It is
required for combining distributed and heterogeneous ontologies.
Ontology mapping transforms the source ontology into the target ontology based on semantic
relations. There are three mapping approaches for combining distributed and heterogeneous
ontologies:
1. Mapping between local ontologies.
2. Mapping between integrated global ontology and local ontologies.
3. Mapping for ontology merging, integration, or alignment.
Ontology merge, integration, and alignment can be considered as ontology reuse processes.
Ontology merge is the process of generating a single, coherent ontology from two or more
existing and different ontologies on the same subject.
Ontology integration is the process of generating a single ontology from two or more differing
ontologies on different subjects. Ontology alignment creates links between two original
ontologies.
ONTOLOGY MAPPING TOOLS
There are three types of ontology mapping tools; an example of each is provided below:
For ontology mapping between local ontologies, an example mapping tool is GLUE. GLUE is a
system that semiautomatically creates ontology mapping using machine learning techniques.
Given two ontologies, GLUE finds the most similar concept in the other ontology.
For similarity measurement between two concepts, GLUE calculates the joint probability
distribution of the concepts. The GLUE uses a multistrategy learning approach for finding joint
probability distribution.
For ontology mappings between source ontology and integrated global ontology, an example tool
is Learning Source Description (LSD). In LSD, Schema can be viewed as ontologies with
restricted relationship types. This process can be considered as ontology mapping between
information sources and a global ontology.
LOGIC AND RULES
Reasoning involves rules of the form "IF (condition), THEN (conclusion)." With only a finite number
of comparisons, we are required to reach a conclusion.
Axioms of a theory are assertions that are assumed to be true without proof. In terms of
semantics, axioms are valid assertions. Axioms are usually regarded as starting points for
applying rules of inference and generating a set of conclusions.
Rules of inference, or transformation rules, are rules that one can use to infer a conclusion from a
premise to create an argument. A set of rules can be used to infer any valid conclusion if it is
complete, while never inferring an invalid conclusion, if it is sound.
Rules can be either conditional or biconditional. Conditional rules, or rules of inference, are
rules that one can use to infer the first type of statement from the second, but where the second
cannot be inferred from the first. With biconditional rules, in contrast, both inference directions
are valid.
Conditional Transformation Rules
We will use letters p, q, r, s, etc. as propositional variables.
An argument is Modus ponens if it has the following form (P1 refers to the first premise; P2 to
the second premise: C to the conclusion):
(P1) if p then q
(P2) p
(C) q
Example:
(P1) If Socrates is human then Socrates is mortal.
(P2) Socrates is human.
(C) Socrates is mortal.
Which can be represented as Modus ponens:
[(p → q) ∧ p] → [q]
An argument is Modus tollens if it has the following form:
(P1) if p then q
(P2) not-q
(C) not-p
Example:
(P1) If Socrates is human then Socrates is mortal.
(P2) Socrates is not mortal.
(C) Socrates is not human.
In both cases, the order of the premises is immaterial (e.g., in modus tollens "not-q" could come
first instead of "if p then q").
Tautology is represented as [p ∨ p] ↔ [p].
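These rule schemas can be checked mechanically. The short Python sketch below enumerates every
truth assignment for p and q and confirms that modus ponens, modus tollens, and the tautology
equivalence hold in all cases.

from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Modus ponens:  [(p -> q) and p] -> q
# Modus tollens: [(p -> q) and not-q] -> not-p
# Tautology:     [p or p] <-> [p]
for p, q in product([True, False], repeat=2):
    assert implies(implies(p, q) and p, q)
    assert implies(implies(p, q) and (not q), not p)
    assert (p or p) == p

print("All three forms hold under every truth assignment.")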
INFERENCE ENGINES
An expert system has three levels of organization: a working memory, an inference engine, and a
knowledge base. The inference engine is the control of the execution of reasoning rules. This
means that it can be used to deduce new knowledge from existing information.
The inference engine is the core of an expert system and acts as the generic control mechanism
that applies the axiomatic knowledge from the knowledge base to the task-specific data to reach
some conclusion.
To prove that D is true, given that A and B are true, we start with Rule 1 and go on down the list
until a rule that "fires" is found. In this case, Rule 3 is the only one that fires in the first iteration.
At the end of the first iteration, it can be concluded that A, B, and E are true. This information is
used in the second iteration.
In the second iteration, Rule 2 fires adding the information that G is true. This extra information
causes Rule 4 to fire, proving that D is true. This is the method of forward chaining, where one
proceeds from a given situation toward a desired goal, adding new assertions along the way. This
strategy is appropriate in situations where data are expensive to collect and few are available.
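The forward-chaining procedure can be sketched in a few lines of Python. The figure listing the
rules is not reproduced in these notes, so the rule base below is reconstructed from the narrative
(Rule 2: A and E imply G; Rule 3: B implies E; Rule 4: G implies D; Rule 1 stands for some rule
that never fires).

# Forward chaining sketch over a reconstructed rule base.
rules = [
    ({"C"}, "F"),        # Rule 1 (never fires: C is not known)
    ({"A", "E"}, "G"),   # Rule 2
    ({"B"}, "E"),        # Rule 3
    ({"G"}, "D"),        # Rule 4
]
facts = {"A", "B"}       # given: A and B are true

changed = True
iteration = 0
while changed:
    changed = False
    iteration += 1
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            print(f"iteration {iteration}: derived {conclusion}")
            changed = True

print("D" in facts)      # True -- the goal has been proved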
Backward Chaining
In backward chaining the system needs to know the value of a piece of data. It searches for rules
whose conclusions mention this data. Before it can use the rules, it must test their conditions. This
may entail discovering the value of more pieces of data, and so on. This is also called goal-
directed inference, or hypothesis driven, because inferences are not performed until the system is
made to prove a particular goal.
In backward chaining, we start with the desired goal and then attempt to find evidence for proving
the goal. Using the forward chaining example, the strategy to prove that D is true would be the
following.
First, find the rule that proves D. This is Rule 4. The subgoal is then to prove that G is true. Rule
2 meets the subgoal, and as it is already known that A is true, therefore the next subgoal is to
show that E is true. Rule 3 provides the next subgoal of proving that B is true. But the fact that B
is true is one of the given assertions. Therefore, E is true, which implies that G is true, which in
turn implies that D is true.
Backward chaining is useful in situations where the amount of data is large and where a specific
characteristic of the system is of interest.
Tree Searches
In the first query, the question is: who owns a computer? The answer is "Smith." In the second
query, the question is: what makes of computer are defined in the database? The third query,
however, asks who owns a computer and what is the make of that computer.
The query is a graph containing variables that can be matched against the database graph. Should the
graph in the database be more extended, the query would have to be matched with a subgraph. So,
in general, executing an RDF query requires what is called "subgraph matching."
Following the data model for RDF, the two queries are in fact equal because a sequence of
statements is implicitly a conjunction.
Fig: Graph representation of a rule Fig: Query that matches with a rule
A triple can be modeled as a predicate: triple(subject, property, object). A set of triples equals
a list of triples and a connected graph is decomposed into a set of triples.
For our example this gives
Triple(John, owns, computer).
Triple(computer, make, Apple).
This sequence is equivalent to:
[Triple(John, owns, computer). Triple(computer,make, Apple).]
From Figure RDF Queries the triples are Triple(?who, owns, computer).
Triple(computer, make,?what).
This sequence is equivalent to:
[Triple(?who, owns, computer). Triple(computer,make, ?what).]
From Figure Graph representation of a rule the triple is
Triple([Triple(X, owns, computer)], implies, [Triple(X, must buy, software)]).
From Figure Query that matches with a rule the triple is
Triple(?who, must buy, software).
A unification algorithm for RDF can handle subgraph matching and embedded rules by the term
“subgraph matching with rules.”
The unification algorithm divides the sequence of RDF statements into sets where each set
constitutes a connected subgraph. This is called a tripleset that is done for the database and for
the query. Then the algorithm matches each tripleset of the query with each tripleset of the
database.
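A plain-Python sketch of this subgraph matching: variables are written with a leading "?" and
must be bound consistently across all query triples (the data is the John-owns-a-computer example
above).

# Naive matching of query triples (with ?variables) against a triple database.
database = [
    ("John", "owns", "computer"),
    ("computer", "make", "Apple"),
]
query = [
    ("?who", "owns", "computer"),
    ("computer", "make", "?what"),
]

def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match(query, database, bindings=None):
    """Return the variable bindings under which every query triple
    matches some database triple (naive subgraph matching)."""
    bindings = dict(bindings or {})
    if not query:
        return [bindings]
    results = []
    (qs, qp, qo), rest = query[0], query[1:]
    for (s, p, o) in database:
        trial = dict(bindings)
        ok = True
        for q_term, d_term in ((qs, s), (qp, p), (qo, o)):
            if is_var(q_term):
                if trial.get(q_term, d_term) != d_term:
                    ok = False
                    break
                trial[q_term] = d_term
            elif q_term != d_term:
                ok = False
                break
        if ok:
            results.extend(match(rest, database, trial))
    return results

print(match(query, database))   # [{'?who': 'John', '?what': 'Apple'}]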
Agents
Agents are pieces of software that work autonomously and proactively. In most cases, an agent
will simply collect and organize information. Agents on the Semantic Web will receive some
tasks to perform and seek information from Web resources, while communicating with other
Web agents, in order to fulfill their tasks. Semantic Web agents will utilize metadata, ontologies,
and logic to carry out their tasks.
UNIT - IV
Semantic Web applications and services, Semantic Search, e-learning, Semantic Bioinformatics,
Knowledge Base, XML Based Web Services, Creating an OWL-S Ontology for Web Services,
Semantic Search Technology, Web Search Agents and Semantic Methods.
SEMANTIC SEARCH
Semantic search methods can augment and improve traditional search results by using, not just
words, but concepts and logical relationships. There are two approaches to improving search
results through semantic methods: (1) the direct use of Semantic Web metadata and (2) Latent
Semantic Indexing (LSI).
The Semantic Web will provide more meaningful metadata about content, through the use of RDF
and OWL documents that will help to form the Web into a semantic network. In a semantic
network, the meaning of content is better represented and logical connections are formed between
related information.
However, most semantic-based search engines suffer increasingly difficult performance problems
because of the large and rapidly growing scale of the Web. In order for semantic search to be
effective in finding responsive results, the network must contain a great deal of relevant
information. At the same time, a large network creates difficulties in processing the many
possible paths to a relevant solution.
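As a small illustration of the second approach, Latent Semantic Indexing, the sketch below factors
a tiny, made-up term-document matrix with a truncated SVD and ranks the documents against a
one-word query in the reduced concept space; it assumes the NumPy library is available.

# Illustrative LSI sketch (assumes numpy). Rows = terms, columns = documents.
import numpy as np

terms = ["semantic", "web", "ontology", "social", "network"]
A = np.array([[2, 0, 1],        # made-up term counts for three documents
              [2, 1, 1],
              [1, 0, 2],
              [0, 3, 0],
              [0, 2, 0]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                 # keep the two strongest "concepts"
docs_k = np.diag(s[:k]) @ Vt[:k]      # documents in concept space

# Fold the one-word query "ontology" into the same space and rank documents.
q = np.array([0, 0, 1, 0, 0], dtype=float)
q_k = U[:, :k].T @ q
scores = [float(q_k @ docs_k[:, j]) /
          (np.linalg.norm(q_k) * np.linalg.norm(docs_k[:, j]))
          for j in range(docs_k.shape[1])]
print(scores)   # expect documents 0 and 2 to rank above document 1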
e-LEARNING
The big question in the area of educational systems is what is the next step in the evolution of e-
learning? Are we finally moving from scattered applications to a coherent collaborative
Semantic Web and Social Networks IV B.Tech I Sem (R15)
environment? How close are we to the vision of the Educational Semantic Web, and what do we
need to do in order to realize it?
On the one hand, we wish to achieve interoperability among educational systems and on the other
hand, to have automated, structured, and unified authoring.
The Semantic Web is the key to enabling the interoperability by capitalizing on (1) semantic
conceptualization and ontologies, (2) common standardized communication syntax, and (3)
large-scale integration of educational content and usage.
The RDF describes objects and their relationships. It allows easy reuse of information for
different devices, such as mobile phones and PDAs, and for presentation to people with different
capabilities, such as those with cognitive or visual impairments.
By tailored restructuring of information, future systems will be able to deliver content to the end-
user in a form applicable to them, taking into account users‘ needs, preferences, and prior
knowledge. Much of this work relies on vast online databases and thesauri, such as WordNet,
which categorize synonyms into distinct lexical concepts. Developing large multimedia database
systems makes materials as useful as possible for distinct user groups, from schoolchildren to
university lecturers. Students might, therefore, search databases using a simple term, while a
lecturer might use a more scientific term thus reflecting scaling in complexity.
The educational sector can also use Internet Relay Chat (IRC) (http://www.irc.org/), a tool
that can also be used with the Semantic Web. IRC is a chat protocol where people can meet on
channels and talk to each other.
The IRC and related tools could work well within education, for project discussion, remote
working, and collaborative document creation. Video-conferencing at schools is increasingly
becoming useful in widening the boundaries for students.
SEMANTIC BIOINFORMATICS
The World Wide Web Consortium recently announced the formation of the Semantic Web Health
Care and Life Sciences Interest Group (HCLSIG) aimed to help life scientists tap the potential
benefits of using Semantic Web technology by developing use cases and applying standard
Semantic Web specifications to healthcare and life sciences problems.
The initial foundation and early growth of the Web was based in great part on its adoption by the
high-energy physics community, when six high-energy physics Web sites collaborated, allowing
their participating physicists to interact on this new network of networks. A similar critical mass
in life sciences could occur if a half dozen ontologies for drug discovery were to become
available on the Semantic Web.
Life science is a particularly suitable field for pioneering the Semantic Web.
KNOWLEDGE BASE
In a number of parallel efforts, knowledge systems are being developed to provide semantic-
based and context-aware systems for the acquisition, organization, processing, sharing and use of
the knowledge embedded in multimedia content.
Ongoing research aims to maximize automation of the complete knowledge lifecycle and to
achieve semantic interoperability between Web resources and services.
Web Service Architecture requires discrete software agents that must work together to implement
functionality. In XML-based Web Services, an agent sends and receives messages based upon
their architectural roles.
If a requester wishes to make use of a provider‘s Web Service, he uses a requester agent to
exchange messages with the provider agent. In order for this message exchange to be successful,
the requester and the provider must first agree on both the semantics and the mechanics of the
message exchange.
The message exchange mechanics are documented using WSDL. The service description is a
specification that can be processed by a machine using message formats, data types, and protocols
that are exchanged between the requester and provider.
UNIT - V
What is Social Network Analysis, Development of Social Network Analysis, Electronic Sources for
Network Analysis – Electronic Discussion Networks, Blogs and Online Communities, Web-Based
Networks, Building Semantic Web Applications with Social Network Features.
Illustrations from an early social network study at the Hawthorne works of Western Electric
in Chicago. The upper part shows the location of the workers in the wiring room, while the
lower part is a network image of fights about the windows between workers (W), solderers
(S) and inspectors (I).
Blogs make a particularly appealing research target due to the availability of structured electronic
data in the form of RSS (Rich Site Summary) feeds. RSS feeds contain the text of the blog posts
as well as valuable metadata such as the timestamp of posts, which is the basis of dynamic
analysis.
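To make this concrete, the following sketch extracts the title and pubDate timestamp of each post from a feed, which is exactly the metadata that dynamic analysis builds on. It assumes an RSS 2.0 feed at a hypothetical blog URL.

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class RssPostExtractor {
    public static void main(String[] args) throws Exception {
        // Hypothetical feed URL; any RSS 2.0 feed shares the item/title/pubDate structure.
        String feedUrl = "http://blog.example.org/rss.xml";

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document feed = builder.parse(feedUrl);

        // Each <item> element corresponds to one blog post.
        NodeList items = feed.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            String title = item.getElementsByTagName("title").item(0).getTextContent();
            // Guard against items that omit the pubDate element.
            NodeList dates = item.getElementsByTagName("pubDate");
            String published = dates.getLength() > 0 ? dates.item(0).getTextContent() : "(no pubDate)";
            System.out.println(published + "  " + title);
        }
    }
}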
The 2004 US election campaign represented a turning point in blog research, as it was the
first major electoral contest in which blogs were exploited as a method of building networks
among individual activists and supporters. Blog analysis suddenly shed its image as relevant
only to marketers interested in understanding the product choices of young demographics.
Online community spaces and social networking services such as MySpace and LiveJournal cater to
socialization even more directly than blogs, with features such as social networking (maintaining
lists of friends, joining groups), messaging, and photo sharing.
Web-based networks
The content of Web pages is an almost inexhaustible source of information for social network
analysis. This content is not only vast, diverse, and free to access, but in many cases it is also more
up to date than any specialized database.
There are two features of web pages that serve as the basis for extracting social
relations: links and co-occurrences (see Figure 3.2). The linking structure of the Web is
considered a proxy for real-world relationships, as links are chosen by the author of the page and
connect to other information sources that are considered authoritative and relevant enough to be
mentioned. The biggest drawback of this approach is that such direct links between personal
pages are very sparse: due to the increasing size of the Web, searching has taken over from browsing
as the primary mode of navigation. As a result, most individuals put little effort into
creating new links and updating link targets, or have given up linking to other personal pages
altogether.
Kautz already noted that the biggest technical challenge in social network mining is the
disambiguation of person names. Person names exhibit the same problems of polysemy and
synonymy that we have seen in the general case of web search.
In our work on extracting information about the Semantic Web community, we also add a
disambiguation term to our queries. We use a fixed disambiguation term (Semantic Web OR
ontology) instead of a different disambiguation term for every name. This is a safe (and even
desirable) limitation of the query, as we are only interested in relations in the Semantic Web
context.
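A sketch of how such queries could be assembled is shown below. The searchHitCount helper is a hypothetical stand-in for whatever search engine API is actually used; the point is only how the fixed disambiguation term is appended to every name and name pair.

public class CoOccurrenceQueries {
    // Fixed disambiguation term used in every query, as described above.
    private static final String CONTEXT = "(\"Semantic Web\" OR ontology)";

    // Hypothetical helper: returns the number of hits a search engine reports for a query.
    static long searchHitCount(String query) {
        // In a real system this would call a web search API and read the reported hit count.
        throw new UnsupportedOperationException("plug in a search engine API here");
    }

    // Pages mentioning a single person within the Semantic Web context.
    static long personCount(String name) {
        return searchHitCount("\"" + name + "\" AND " + CONTEXT);
    }

    // Pages mentioning both persons within the Semantic Web context:
    // the raw co-occurrence evidence for a tie between them.
    static long coOccurrenceCount(String name1, String name2) {
        return searchHitCount("\"" + name1 + "\" AND \"" + name2 + "\" AND " + CONTEXT);
    }
}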
We also experiment with a second method based on the concept of average precision. When
computing the weight of a directed link between two persons, we consider the ordered list of pages
returned for the first person. The average precision method is more sophisticated in that it takes
into account the order in which the search engine returns documents for a person:
it assumes that names of other persons that occur closer to the top of the list represent more
important contacts than names that occur in pages at the bottom of the list. The method is also
more scalable, as it requires downloading the list of top-ranking pages only once for each author.
The drawback of this method is that most search engines limit the number of pages returned to at
most a thousand. If a person and his contacts have significantly more pages than that, it is
likely that some of the pages of some of the alters will not occur among the top-ranking pages.
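One plausible reading of this weighting scheme is sketched below: for the ego's top-ranking pages, in the order returned by the search engine, we mark which pages mention the alter and compute the standard average precision of that ranked list. Pages mentioning the alter near the top of the list then contribute more to the weight of the directed tie. This is a sketch of the idea rather than the exact formula used in the original study.

public class AveragePrecisionWeight {
    /**
     * mentionsAlter[i] is true if the i-th page returned for the ego
     * (index 0 = top-ranked page) mentions the alter's name.
     * Returns the average precision of that ranked list, used as the
     * weight of the directed link ego -> alter; 0 if the alter never occurs.
     */
    static double weight(boolean[] mentionsAlter) {
        int relevantSoFar = 0;
        double precisionSum = 0.0;
        for (int i = 0; i < mentionsAlter.length; i++) {
            if (mentionsAlter[i]) {
                relevantSoFar++;
                // Precision at this cut-off: relevant pages seen / pages seen.
                precisionSum += (double) relevantSoFar / (i + 1);
            }
        }
        return relevantSoFar == 0 ? 0.0 : precisionSum / relevantSoFar;
    }

    public static void main(String[] args) {
        // Alter mentioned on the 1st and 3rd of five top-ranking pages.
        boolean[] occurrences = {true, false, true, false, false};
        System.out.println(weight(occurrences)); // (1/1 + 2/3) / 2 ≈ 0.83
    }
}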
Lastly, we would note that one may reasonably argue against the above methods on the grounds that
a single link or co-occurrence is hardly evidence of any relationship. In fact, not all links are
equally important, nor is every co-occurrence intended.
For example, it may very well happen that two names co-occur on a web page without there being
much meaning to it (for example, they appear on the same page of the corporate phone book or in a
list of citations).
Sesame
Sesame is a triple store implemented using Java technology. Much like a database for RDF data,
Sesame allows creating repositories and specifying access privileges, storing RDF data in a
repository, and querying the data using any of the supported query languages. In the case of
Sesame, these include Sesame's own SeRQL language and SPARQL.
The data in the repository can be manipulated at the level of triples:
• Individual statements can be added to and removed from the repository.
• RDF data can be added or extracted in any of the supported RDF representations, including the
RDF/XML and Turtle languages.
Sesame can persistently store and retrieve data from a variety of back-ends: data can persist
in memory, on disk, or in a relational database. Like most RDF repositories, Sesame is not only
a data store but also integrates reasoning: it has a built-in inferencer for applying the
RDF(S) inference rules.
While Sesame does not support OWL semantics, it does have a rule language that allows capturing
most of the semantics of OWL, including the notion of inverse-functional properties and
the semantics of the owl:sameAs relationship.
An important, recently added feature of Sesame is the ability to store and retrieve context
information. In distributed scenarios, it is often necessary to capture metadata about statements.
For example, in the case of collecting FOAF profiles from the Web, we might want to keep track
of where the information came from (the URL of the profile) and the time it was last crawled.
Context information is important even for centralized sites with user-contributed content. With
contexts, every triple becomes a quad, with the fourth element identifying the context. Contexts are
identified by resources, which can be used in statements like all other resources. Contexts (named
graphs) can also be queried directly using the SPARQL query language supported by this version of Sesame.
The above-mentioned functionalities of Sesame can be accessed in three ways.
First, Sesame provides an HTML interface that can be accessed through a browser.
Second, a set of servlets exposes functionality for remote access through HTTP, SOAP, and RMI.
Lastly, Sesame provides a Java client library for developers, which exposes all of the above
functionality of a Sesame repository through method calls on a Java object called
SesameRepository.
This object can provide access both to local Sesame servers (running in the same Java Virtual
Machine as the application) and to remote servers (running in a different JVM or on a remote
machine).
Working with the Sesame client API is relatively straightforward. Queries, for example, can be
executed by calling the evaluateTableQuery method of this class, passing on the query itself and
the identifier of the query language.
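The sketch below illustrates this workflow as a whole: creating a repository, adding a crawled FOAF file into a named context, and querying it, including the context, with SPARQL. It is a minimal sketch using the Sesame 2 (openrdf) API rather than the older SesameRepository client class described above, so class and method names differ slightly from the text; the file name, base URI, and context URI are placeholders.

import java.io.File;

import org.openrdf.model.URI;
import org.openrdf.model.ValueFactory;
import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQuery;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.rio.RDFFormat;
import org.openrdf.sail.memory.MemoryStore;

public class SesameExample {
    public static void main(String[] args) throws Exception {
        // An in-memory repository; MemoryStore could be swapped for a disk- or database-backed store.
        Repository repo = new SailRepository(new MemoryStore());
        repo.initialize();

        RepositoryConnection con = repo.getConnection();
        try {
            ValueFactory vf = repo.getValueFactory();
            // Hypothetical context resource recording where the data came from.
            URI context = vf.createURI("http://example.org/crawl/profile-1");

            // Add an RDF/XML file (e.g. a crawled FOAF profile) into that context.
            con.add(new File("foaf-profile.rdf"), "http://example.org/", RDFFormat.RDFXML, context);

            // Query with SPARQL; the GRAPH clause exposes the context (named graph) of each match.
            String query =
                "PREFIX foaf: <http://xmlns.com/foaf/0.1/> " +
                "SELECT ?g ?person ?name WHERE { GRAPH ?g { ?person foaf:name ?name } }";
            TupleQuery tupleQuery = con.prepareTupleQuery(QueryLanguage.SPARQL, query);
            TupleQueryResult result = tupleQuery.evaluate();
            while (result.hasNext()) {
                BindingSet row = result.next();
                System.out.println(row.getValue("person") + " is named " + row.getValue("name")
                        + " (from " + row.getValue("g") + ")");
            }
            result.close();
        } finally {
            con.close();
            repo.shutDown();
        }
    }
}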
Elmo
Elmo is a development toolkit consisting of two main components. The first one is the Elmo API,
providing the above mentioned interface between a set of JavaBeans representing ontological
classes and the underlying triple store containing the data that is manipulated through the
JavaBeans.
The API also includes tools for generating JavaBeans from ontologies and vice versa. The
second main component consists of a set of tools for working with RDF data, including an RDF
crawler and a framework of smushers.
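As an illustration of the kind of JavaBean interface such a generator could produce for an ontology class like foaf:Person, a hand-written example might look like the following. This is hypothetical and not actual Elmo-generated code; the real generated classes and their annotations differ.

import java.util.Set;

/**
 * Hypothetical, hand-written illustration of the kind of JavaBean interface a
 * tool like Elmo could generate for the foaf:Person class. The property names
 * mirror foaf:name and foaf:knows; reading and writing these properties would
 * read and write triples in the underlying store.
 */
public interface Person {

    /** Corresponds to the foaf:name property. */
    String getName();

    void setName(String name);

    /** Corresponds to the foaf:knows property, linking a person to other persons. */
    Set<Person> getKnows();

    void setKnows(Set<Person> knows);
}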