Intelligent System Sem-VII Lab-Manual
Laboratory Manual
Experiment No. | Detailed Content of Experiment | Hours | LO Mapping
Performance measure: The performance measure is the first thing to which we would like an automated driver to aspire. Desirable measures include getting to the correct destination; minimising fuel consumption, wear and tear, trip time and cost, violations of traffic laws, and disturbance to other drivers; and maximising safety, passenger comfort and profit. Since some of these goals conflict, trade-offs will be involved.
Environment: The basic question that comes to mind is: what driving environment will the taxi face? A taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals, road works, police, potholes and parked cars. The taxi must also interact with potential and actual passengers. There may be restrictions on driving, such as driving on the left-hand side as in India and Japan, or on the right-hand side elsewhere. The environment may also range from soaring desert temperatures to heavy-snowfall regions such as Kashmir. Thus, the more restricted the environment, the easier the design problem.
Actuators: The actuators available to an automated taxi will be more or less the same as those available to a human driver (i.e., control over the engine through the accelerator, and control over steering and braking). In addition, it will need output to a display screen or a voice synthesizer to talk back to the passengers, and perhaps some way to communicate with other vehicles or drivers, politely or otherwise.
Sensors: The sensors play a crucial role in determining where the taxi actually is, what else is on the road, and how fast it is going. The basic sensors should therefore include one or more TV cameras, a tachometer and an odometer. To control the vehicle properly, especially on curves, it will also need to know the mechanical state of the vehicle, so it will need the usual array of engine and electrical-system sensors. It might have instruments that are not available to the average human driver: a satellite global positioning system (GPS) to give accurate position information with respect to an electronic map, and infrared or sonar sensors to detect distances to other cars and obstacles. Finally, it will need a keyboard or microphone for the passenger to request a destination.
Conclusion: Thus, we have successfully designed Taxi Driver and Vacuum Cleaner systems using PEAS.
Exp. 1 B) Problem Definition with State Space Representation
Theory:
Problem Statement
In the water jug problem in Artificial Intelligence, we are provided with two jugs: one having the capacity to hold 3 gallons of water and the other having the capacity to hold 4 gallons. There is no other measuring equipment available, and the jugs have no markings on them. The agent's task is to fill the 4-gallon jug with exactly 2 gallons of water using only these two jugs and no other material. Initially, both jugs are empty.
Here, let x denote the 4-gallon jug and y denote the 3-gallon jug.
7. (x,y) if (x+y) >= 4 and y > 0 → (4, y-[4-x]) Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8. (x,y) if (x+y) >= 3 and x > 0 → (x-[3-y], 3) Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9. (x,y) if (x+y) <= 4 and y > 0 → (x+y, 0) Pour all the water from the 3-gallon jug into the 4-gallon jug
10. (x,y) if (x+y) <= 3 and x > 0 → (0, x+y) Pour all the water from the 4-gallon jug into the 3-gallon jug
The listed production rules contain all the actions that the agent could perform in transferring the contents of the jugs. To solve the water jug problem in a minimum number of moves, the rules should be applied so as to produce the following sequence of states:
(0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0)
On reaching the 7th state, (2,0), the 4-gallon jug holds exactly 2 gallons, which is our goal state. Therefore, at this state, our problem is solved.
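The state-space search described above can be automated with a breadth-first search over (x, y) states. The following sketch (our own illustration; the function and variable names are not from the manual) returns a shortest path from (0, 0) to a state with 2 gallons in the 4-gallon jug:

```python
from collections import deque

def water_jug(capacity_x=4, capacity_y=3, goal=2):
    """BFS over (x, y) states, where x is the 4-gallon jug and y the
    3-gallon jug. Returns the shortest sequence of states from (0, 0)
    to a state with `goal` gallons in the 4-gallon jug."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:
            # Reconstruct the path back to the start state.
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        successors = [
            (capacity_x, y),                  # fill the 4-gallon jug
            (x, capacity_y),                  # fill the 3-gallon jug
            (0, y),                           # empty the 4-gallon jug
            (x, 0),                           # empty the 3-gallon jug
            # pour 3-gallon -> 4-gallon until full or empty
            (min(x + y, capacity_x), y - (min(x + y, capacity_x) - x)),
            # pour 4-gallon -> 3-gallon until full or empty
            (x - (min(x + y, capacity_y) - y), min(x + y, capacity_y)),
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (x, y)
                queue.append(s)
    return None
```

Because BFS explores states layer by layer, the returned path is one of the minimum-length solutions (7 states, i.e. 6 moves).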
Theory: There are many ways to traverse graphs. BFS is the most commonly used approach.
BFS is a traversing algorithm where you start from a selected node (the source or starting node) and traverse the graph layer by layer, first exploring the neighbour nodes (nodes directly connected to the source node), and then moving on to the next-level neighbour nodes.
As the name BFS suggests, you are required to traverse the graph breadthwise as follows:
1. First move horizontally and visit all the nodes of the current layer
2. Move to the next layer
Algorithm:
BFS(G, s):
let Q be a queue
Q.enqueue(s)
mark s as visited
while ( Q is not empty )
//Removing that vertex from queue, whose neighbours will be visited now
v = Q.dequeue( )
for all neighbours w of v in Graph G:
if w is not visited:
Q.enqueue( w )
mark w as visited
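The pseudocode above can be written concretely in Python (an illustrative sketch using the standard-library deque; the example graph is our own):

```python
from collections import deque

def bfs(graph, source):
    """Layer-by-layer traversal; returns the nodes in visiting order.
    `graph` is an adjacency-list dict such as {'A': ['B', 'C'], ...}."""
    visited = {source}          # mark s as visited
    order = []
    queue = deque([source])
    while queue:                # while Q is not empty
        v = queue.popleft()     # v = Q.dequeue()
        order.append(v)
        for w in graph.get(v, []):
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["F"], "F": []}
order = bfs(graph, "A")   # ['A', 'B', 'C', 'D', 'E', 'F']
```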
Theory:
1. Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures.
2. One starts at the root (selecting some arbitrary node as the root in the case of a graph) and
explores as far as possible along each branch before backtracking.
Algorithm:
DFS-recursive(G, s):
mark s as visited
for all neighbours w of s in Graph G:
if w is not visited:
DFS-recursive(G, w)
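The recursive pseudocode above translates directly into Python (a minimal sketch; the example graph is our own):

```python
def dfs_recursive(graph, s, visited=None):
    """Recursive depth-first traversal; returns the nodes in visiting
    order. Explores as far as possible along each branch before
    backtracking."""
    if visited is None:
        visited = []
    visited.append(s)                 # mark s as visited
    for w in graph.get(s, []):        # for all neighbours w of s
        if w not in visited:
            dfs_recursive(graph, w, visited)
    return visited

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["F"], "F": []}
order = dfs_recursive(graph, "A")   # ['A', 'B', 'D', 'E', 'F', 'C']
```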
A* Search
Theory: It is an informed best-first search that explores cheaper paths before longer ones. A* is optimal as well as a complete algorithm.
What do I mean by Optimal and Complete? Optimal means that A* is sure to find the least-cost path from the source to the destination, and Complete means that it will always find a path to the destination whenever one exists.
So does that make A* the best algorithm? Well, in most cases, yes. But A* is slow, and the space it requires is large, as it keeps all the generated paths in memory. This gives faster algorithms an upper hand in some settings, but A* is nevertheless one of the best search algorithms out there. Every node is evaluated using the following function:
f(n) = g(n) + h(n)
where g(n) is the cost to reach the current node, and h(n) is the estimated cost to reach the goal from the current node.
Algorithm:
A* Algorithm():
1. Put the start node on the OPEN list, with f(start) = h(start).
2. If the OPEN list is empty, stop and report failure.
3. Remove from OPEN the node n with the lowest f(n) and move it to the CLOSED list.
4. If n is the goal node, reconstruct the path by following the parent pointers back to the start, and return it.
5. For each successor m of n: compute g(m) = g(n) + cost(n, m); if m is not on OPEN or CLOSED, or the new g(m) is lower than the recorded one, set parent(m) = n, set f(m) = g(m) + h(m), and put m on OPEN.
6. Go to step 2.
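A compact A* over an explicit weighted graph might look like the sketch below (our own illustration; the toy graph and heuristic values are assumptions, not from the manual):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search over a weighted graph.
    graph: dict node -> list of (neighbour, edge_cost)
    h:     dict node -> heuristic estimate of the cost to the goal
    Returns (path, cost), or (None, inf) when the goal is unreachable."""
    open_heap = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                g2 = g + cost
                heapq.heappush(open_heap, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Toy graph: the cheapest route from S to G is S-A-B-G with cost 6.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 6, "A": 5, "B": 3, "G": 0}   # admissible (never over-estimating)
```

Note that optimality relies on h never over-estimating the true remaining cost; with an inadmissible heuristic, A* may return a suboptimal path.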
Theory:
This problem is to find an arrangement of N queens on a chess board, such that no queen can attack any other queen on the board.
A chess queen can attack horizontally, vertically and diagonally.
A binary matrix is used to display the positions of the N queens, where no queen can attack another queen.
Input:
The size of the chess board. Generally, it is 8, as 8 x 8 is the size of a
standard chess board.
Output:
The matrix that represents in which row and column the N Queens can be placed.
If the solution does not exist, it will return false.
1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1
0 1 0 0 0 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 0 1 0 0
0 0 1 0 0 0 0 0
In this output, the value 1 indicates the correct place for the queens.
The 0 denotes the blank spaces on the chess board.
Algorithm:
isValid(board, row, col)
Input − The chess board, and the row and column to test.
Output − True when placing a queen at (row, col) is valid, false otherwise.
Begin
if there is a queen at the left of the current column, then
return false
if there is a queen at the left upper diagonal, then
return false
if there is a queen at the left lower diagonal, then
return false
return true //otherwise it is a valid place
End
solveNQueen(board, col)
Input − The chess board, the col where the queen is trying to be placed.
Begin
if all columns are filled, then
return true
for each row of the board, do
if isValid(board, i, col), then
set queen at place (i, col) in the board
if solveNQueen(board, col+1) = true, then
return true
otherwise remove queen from place (i, col) from board.
done
return false
End
Program:
from itertools import permutations, combinations
list_of_permutations = []
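The program fragment above is incomplete. A self-contained backtracking solution following the isValid/solveNQueen pseudocode (our own sketch, not the manual's original permutation-based program) is:

```python
def solve_n_queens(n=8):
    """Backtracking N-Queens: place one queen per column, checking the
    row and both diagonals to the left, as in the pseudocode above.
    Returns the board as a binary matrix, or None when no solution
    exists (e.g. n = 2 or n = 3)."""
    board = [[0] * n for _ in range(n)]

    def is_valid(row, col):
        for c in range(col):                      # queen to the left?
            if board[row][c]:
                return False
        for r, c in zip(range(row - 1, -1, -1), range(col - 1, -1, -1)):
            if board[r][c]:                       # left upper diagonal
                return False
        for r, c in zip(range(row + 1, n), range(col - 1, -1, -1)):
            if board[r][c]:                       # left lower diagonal
                return False
        return True

    def place(col):
        if col == n:                              # all columns are filled
            return True
        for row in range(n):
            if is_valid(row, col):
                board[row][col] = 1               # set queen at (row, col)
                if place(col + 1):
                    return True
                board[row][col] = 0               # backtrack: remove queen
        return False

    return board if place(0) else None
```

Calling solve_n_queens(8) produces an 8 x 8 binary matrix of the kind shown in the sample output, with exactly one 1 in every row and every column.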
Theory:
In an 8-puzzle game, we need to rearrange some tiles to reach a predefined goal state.
Consider the following 8-puzzle board.
This is the goal state where each tile is in correct place. In this game, you will be given a
board where the tiles aren’t in the correct places. You need to move the tiles using the gap to
reach the goal state.
In the above figure, tiles 6, 7 and 8 are misplaced. So f (n) = 3 for this case.
For solving this problem with hill climbing search, we need to set a value for the heuristic.
Suppose the heuristic function h (n) is the lowest possible f (n) from a given state. First, we
need to know all the possible moves from the current state. Then we have to calculate f(n)
(number of misplaced tiles) for each possible move. Finally we need to choose the path with
lowest possible f (n) (which is our h (n) or heuristic).
Consider the figure above. Here, 3 moves are possible from the current state. For each state
we have calculated f (n). From the current state, it is optimal to move to the state with f (n) =
3 as it is closer to the goal state. So we have our h (n) = 3.
However, can we really guarantee that it will reach the goal state? What should you do if you reach a state (not the goal state) from which there is no better neighbouring state? This condition is called a local maximum, and it is the main weakness of hill climbing search. Since we may get stuck in a local maximum, we need to backtrack to a previous state and perform the search again from there, to get rid of the path leading to the local maximum.
What will happen if we reach to a state where all the f (n) values are equal? This condition is
called a plateau. You need to select a state at random and perform the hill climbing search
again!
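The procedure above can be sketched in a few lines of Python (our own illustration; it stores boards as 9-element row-major lists, unlike the column-major layout used by the search program later in this experiment):

```python
def misplaced(state, goal):
    """f(n): the number of tiles out of place (the blank, 0, not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def neighbours(state):
    """All states reachable by sliding one tile into the blank.
    Boards are 9-element lists in row-major order; 0 is the blank."""
    moves = []
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            nxt = state[:]
            j = r * 3 + c
            nxt[i], nxt[j] = nxt[j], nxt[i]
            moves.append(nxt)
    return moves

def hill_climb(state, goal, max_steps=50):
    """Greedy hill climbing: repeatedly move to the best neighbour.
    Stops at the goal, or when stuck (local maximum or plateau)."""
    for _ in range(max_steps):
        if state == goal:
            return state
        best = min(neighbours(state), key=lambda s: misplaced(s, goal))
        if misplaced(best, goal) >= misplaced(state, goal):
            return state              # no better neighbour: stuck
        state = best
    return state
```

The `>=` test is what detects both a local maximum and a plateau: in either case no neighbour strictly improves f(n), and the search halts.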
# The state of the board is stored in a list. The list stores values for the
# board in the following positions:
#
# -------------
# | 0 | 3 | 6 |
# -------------
# | 1 | 4 | 7 |
# -------------
# | 2 | 5 | 8 |
# -------------
#
# The goal is defined as:
#
# -------------
# | 1 | 2 | 3 |
# -------------
# | 8 | 0 | 4 |
# -------------
# | 7 | 6 | 5 |
# -------------
#
# Where 0 denotes the blank tile or space.
goal_state = [1, 8, 7, 2, 0, 6, 3, 4, 5]
#
# The code will read the state from a file called "state.txt" where the format
# is as above but space separated, i.e. the content for the goal state would be
# 1 8 7 2 0 6 3 4 5
### Code begins.
import sys

def display_board( state ):
    print( "------------" )
    print( "| %i | %i | %i |" % (state[0], state[3], state[6]) )
    print( "------------" )
    print( "| %i | %i | %i |" % (state[1], state[4], state[7]) )
    print( "------------" )
    print( "| %i | %i | %i |" % (state[2], state[5], state[8]) )
    print( "------------" )

def cmp( x, y ):
    # Compare function for A*. f(n) = g(n) + h(n). I use depth (number of
    # moves) for g().
    return (x.depth + h( x.state, goal_state )) - (y.depth + h( y.state, goal_state ))

def h( state, goal ):
    """Heuristic for the A* search. Returns an integer based on out of place
    tiles"""
    score = 0
    for i in range( len( state ) ):
        if state[i] != goal[i]:
            score = score + 1
    return score

# Node data structure
class Node:
    def __init__( self, state, parent, operator, depth, cost ):
        # Contains the state of the node
        self.state = state
        # Contains the node that generated this node
        self.parent = parent
        # Contains the operation that generated this node from the parent
        self.operator = operator
        # Contains the depth of this node (parent.depth + 1)
        self.depth = depth
        # Contains the path cost of this node from depth 0. Not used for
        # depth/breadth first.
        self.cost = cost

def readfile( filename ):
    f = open( filename )
    data = f.read()
    # Get rid of the newlines
    data = data.strip( "\n" )
    # Break the string into a list using a space as a separator.
    data = data.split( " " )
    state = []
    for element in data:
        state.append( int( element ) )
    return state

# Main method
def main():
    starting_state = readfile( "state.txt" )
    ### CHANGE THIS FUNCTION TO USE bfs, dfs, ids or a_star
    result = ids( starting_state, goal_state )
    if result == None:
        print( "No solution found" )
    elif result == [None]:
        print( "Start node was the goal!" )
    else:
        print( result )
        print( len( result ), "moves" )

# A python-ism. Basically, if the file is being run, execute the main() function.
if __name__ == "__main__":
    main()
Theory: Inference is the act or process of deriving logical conclusions from premises known or assumed to be true. The conclusion drawn is also called an inference. The laws of valid inference are studied within the field of logic.
Human inference (i.e. how humans draw conclusions) is traditionally studied within the field of cognitive psychology; artificial intelligence researchers develop automated inference systems to emulate human inference. Statistical inference allows for inference from quantitative data.
The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations. An inference is a conclusion reached on the basis of evidence and reasoning, or the process of reaching such a conclusion: "order, health, and by inference cleanliness".
The validity of an inference depends on the form of the inference. That is, the word "valid" does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even though its parts are false, and can be invalid even though its parts are true. However, a valid form with true premises will always yield a true conclusion.
For example,
All fruits are sweet.
A banana is a fruit.
Therefore, a banana is sweet.
For the conclusion to be necessarily true, the premises need to be true.
To show that this form is invalid, we demonstrate how it can lead from true premises to a false
conclusion.
All apples are fruit. (Correct)
Bananas are fruit. (Correct)
Therefore, bananas are apples. (Wrong)
A valid argument with false premises may lead to a false conclusion:
All tall people are Greek.
John Lennon was tall.
Therefore, John Lennon was Greek.
When a valid argument is used to derive a false conclusion from false premises, the inference
is valid because it follows the form of a correct inference. A valid argument can also be used
to derive a true conclusion from false premises:
All tall people are musicians.
John Lennon was tall.
Therefore, John Lennon was a musician.
In this case we have two false premises that imply a true conclusion.
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation theorem-proving technique for sentences in propositional logic and first-order logic. In other words, iteratively applying the resolution rule in a suitable way allows one to tell whether a propositional formula is satisfiable, and to prove that a first-order formula is unsatisfiable; the method may fail to terminate on a satisfiable first-order formula, as is the case for all proof methods for first-order logic. Resolution was introduced by John Alan Robinson in 1965.
Forward Chaining:
If P is true and P → Q holds, then Q is true.
Eg. Rani is hungry. If Rani is hungry then she barks. If Rani barks then Raja gets angry.
Prove Raja is angry using forward chaining.
Backward Chaining:
To prove Q, find a rule P → Q and then try to prove P.
Eg. Rani is hungry. If Rani is hungry then she barks. If Rani barks then Raja gets angry.
Prove Raja is angry using backward chaining.
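The Rani/Raja example can be proved mechanically with a naive forward chainer (a minimal sketch; the proposition names and function are our own illustration):

```python
def forward_chain(facts, rules):
    """Naive forward chaining: keep firing any rule whose premises all
    hold until no new fact can be derived.
    `rules` is a list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)     # the rule fires: assert Q
                changed = True
    return facts

# The Rani/Raja rules from above.
rules = [
    (["rani_is_hungry"], "rani_barks"),    # If Rani is hungry then she barks
    (["rani_barks"], "raja_gets_angry"),   # If Rani barks then Raja gets angry
]
derived = forward_chain(["rani_is_hungry"], rules)  # derives "raja_gets_angry"
```

Backward chaining would instead start from the goal "raja_gets_angry" and recursively try to establish the premises of any rule that concludes it.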
Resolution Rule: The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, ¬c is taken to be the complement of c). The resulting clause contains all the literals that do not have complements.
Formally:
    a1 ∨ … ∨ ai ∨ c,   b1 ∨ … ∨ bj ∨ ¬c
    --------------------------------------
       a1 ∨ … ∨ ai ∨ b1 ∨ … ∨ bj
where
all the ai's and bj's are literals,
¬c is the complement to c, and
the dividing line stands for "entails".
The clause produced by the resolution rule is called the resolvent of the two input clauses.
When the two clauses contain more than one pair of complementary literals, the
resolution rule can be applied (independently) for each such pair; however, the result is
always a tautology.
Modus ponens can be seen as a special case of resolution of a one-literal clause and a
two-literal clause.
A Resolution Technique: When coupled with a complete search algorithm, the resolution
rule yields a sound and complete algorithm for deciding the satisfiability of a propositional
formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any
sentence in propositional logic can be transformed into an equivalent sentence in conjunctive
normal form. The steps are as follows.
All sentences in the knowledge base and the negation of the sentence to be proved
(the conjecture) are conjunctively connected.
The resulting sentence is transformed into a conjunctive normal form with the
conjuncts viewed as elements in a set, S, of clauses.
Algorithm: The resolution rule is applied to all possible pairs of clauses that contain
complementary literals. After each application of the resolution rule, the resulting sentence is
simplified by removing repeated literals. If the sentence contains complementary literals, it is
discarded (as a tautology). If not, and if it is not yet present in the clause set S, it is added to
S, and is considered for further resolution inferences.
If after applying a resolution rule the empty clause is derived, the original formula is
unsatisfiable (or contradictory), and hence, it can be concluded that the initial conjecture
follows from the axioms.
If, on the other hand, the empty clause cannot be derived, and the resolution rule
cannot be applied to derive any more new clauses, the conjecture is not a theorem of the
original knowledge base.
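The clause-set algorithm above can be sketched for propositional logic as follows (an illustrative implementation, not the manual's code; literals are strings, with negation written as a leading '~'):

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals)."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def is_tautology(clause):
    """A clause containing complementary literals is discarded."""
    return any(("~" + lit) in clause for lit in clause if not lit.startswith("~"))

def refutes(clauses):
    """Resolution refutation: True when the empty clause is derivable,
    i.e. the clause set is unsatisfiable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True                 # empty clause derived
                if not is_tautology(r):
                    new.add(frozenset(r))
        if new <= clauses:
            return False                        # nothing new: not a theorem
        clauses |= new

# Prove q from {p, p -> q} by refuting {p, ~p ∨ q, ~q}.
unsat = refutes([{"p"}, {"~p", "q"}, {"~q"}])
```

As in the description above, the negated conjecture (~q) is added to the knowledge base, and deriving the empty clause shows that q follows from the axioms.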
One instance of this algorithm is the original Davis–Putnam algorithm that was later
refined into the DPLL algorithm that removed the need for explicit representation of the
resolvents.
This description of the resolution technique uses a set S as the underlying data structure to represent resolution derivations. Lists, Trees and Directed Acyclic Graphs are
other possible and common alternatives. Tree representations are more faithful to the fact that
the resolution rule is binary. Together with a sequent notation for clauses, a tree
representation also makes it clear to see how the resolution rule is related to a special case of
the cut-rule, restricted to atomic cut-formulas. However, tree representations are not as
compact as set or list representations, because they explicitly show redundant subderivations
of clauses that are used more than once in the derivation of the empty clause. Graph
representations can be as compact in the number of clauses as list representations and they
also store structural information regarding which clauses were resolved to derive each
resolvent.
A simple example:
    a ∨ b,   ¬a ∨ c
    -----------------
         b ∨ c
In plain language: Suppose a is false. In order for the premise a ∨ b to be true, b must be true. Alternatively, suppose a is true. In order for the premise ¬a ∨ c to be true, c must be true. Therefore, regardless of the falsehood or veracity of a, if both premises hold, then the conclusion b ∨ c is true.
Resolution in First-Order Logic: In first-order logic, resolution condenses the traditional
syllogisms of logical inference down to a single rule.
To understand how resolution works, consider the following example syllogism of term
logic:
All Greeks are Europeans.
Homer is a Greek.
Therefore, Homer is a European.
In predicate form (with P for "is a Greek" and Q for "is a European"):
∀X P(X) → Q(X)
P(a)
Therefore, Q(a)
To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form. In this form, all quantification becomes implicit: universal quantifiers on variables (X, Y, …) are simply omitted as understood, while existentially quantified variables are replaced by Skolem functions. In CNF, the clauses become:
¬P(X) ∨ Q(X)
P(a)
and the clause to be derived is Q(a).
So, the question is, how does the resolution technique derive the last clause from the
first two? The rule is simple:
Find two clauses containing the same predicate, where it is negated in one clause but
not in the other.
Perform unification on the two predicates. (If the unification fails, you made a bad
choice of predicates. Go back to the previous step and try again.)
If any unbound variables which were bound in the unified predicates also occur in
other predicates in the two clauses, replace them with their bound values (terms) there as well.
Discard the unified predicates, and combine the remaining ones from the two clauses
into a new clause, also joined by the "∨" operator.
To apply this rule to the above example, we find the predicate P occurs in negated form
¬P(X)
in the first clause, and in non-negated form
P(a)
in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the
two produces the substitution
X↦a
Discarding the unified predicates, and applying this substitution to the remaining
predicates (just Q(X), in this case), produces the conclusion:
Q(a)
For another example, consider the syllogistic form
All Cretans are islanders.
All islanders are liars.
Therefore, all Cretans are liars.
Or more generally,
∀X P(X) → Q(X)
∀X Q(X) → R(X)
Therefore, ∀X P(X) → R(X)
In CNF, the antecedents become:
¬P(X) ∨ Q(X)
¬Q(Y) ∨ R(Y)
(Note that the variable in the second clause was renamed to make it clear that variables in
different clauses are distinct.)
Now, unifying Q(X) in the first clause with ¬Q(Y) in the second clause means
that X and Y become the same variable anyway. Substituting this into the remaining clauses
and combining them gives the conclusion:
¬P(X) ∨ R(X)
The resolution rule, as defined by Robinson, also incorporated factoring, which
unifies two literals in the same clause, before or during the application of resolution as
defined above. The resulting inference rule is refutation complete, in that a set of clauses is
unsatisfiable if and only if there exists a derivation of the empty clause using resolution
alone.
What is STRIPS?
STRIPS (the Stanford Research Institute Problem Solver) is an automated planner. A STRIPS domain describes the world in terms of predicates and a set of actions, each with preconditions and effects (facts added and deleted). Once the world is described, you then provide a problem set. A problem consists of an initial state and a goal condition. STRIPS can then search all possible states, starting from the initial one, executing various actions, until it reaches the goal.
A common language for writing STRIPS domain and problem sets is the Planning Domain
Definition Language (PDDL). PDDL lets you write most of the code with English words, so
that it can be clearly read and (hopefully) well understood. It’s a relatively easy approach to
writing simple AI planning problems.
Problem statement
Design a planning agent for a Blocks World problem. Assume suitable initial state and final
state for the problem.
Representation of goal/intention to achieve
Representation of actions it can perform; and
Representation of the environment;
Then have the agent generate a plan to achieve the goal.
The plan is generated entirely by the planning system, without human intervention.
Assume start & goal states as below:
a. STRIPS : A planning system – has rules with a precondition & deletion list and an addition list.
Sequence of actions :
1. Grab C
2. Pickup C
3. Place C on table
4. Grab B
5. Pickup B
6. Stack B on C
7. Grab A
Example rules :
R2 : putdown(x)
Precondition & Deletion List : holding(x)
Add List : hand empty, on(x,table), clear(x)
R3 : stack(x,y)
R4 : unstack(x,y)
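As an illustration of STRIPS-style operators with precondition, delete and add lists, the sketch below encodes a small blocks world and finds a plan by breadth-first search over states (the operator encoding, state representation and function names are our own assumptions, not the manual's code):

```python
from collections import deque

# Each operator carries a precondition set, a delete list and an add list,
# as in STRIPS. States are frozensets of ground facts (tuples).

def pickup(x):
    return {"pre": {("handempty",), ("ontable", x), ("clear", x)},
            "del": {("handempty",), ("ontable", x), ("clear", x)},
            "add": {("holding", x)}}

def putdown(x):
    return {"pre": {("holding", x)},
            "del": {("holding", x)},
            "add": {("handempty",), ("ontable", x), ("clear", x)}}

def stack(x, y):
    return {"pre": {("holding", x), ("clear", y)},
            "del": {("holding", x), ("clear", y)},
            "add": {("handempty",), ("on", x, y), ("clear", x)}}

def unstack(x, y):
    return {"pre": {("handempty",), ("on", x, y), ("clear", x)},
            "del": {("handempty",), ("on", x, y), ("clear", x)},
            "add": {("holding", x), ("clear", y)}}

def apply_op(state, op):
    """Apply an operator if its preconditions hold, else return None."""
    if op["pre"] <= state:
        return (state - op["del"]) | op["add"]
    return None

def plan(initial, goal, blocks):
    """Breadth-first search through the state space for a shortest plan."""
    ops = [(f"{name}({x})", f(x))
           for name, f in (("pickup", pickup), ("putdown", putdown))
           for x in blocks]
    ops += [(f"{name}({x},{y})", f(x, y))
            for name, f in (("stack", stack), ("unstack", unstack))
            for x in blocks for y in blocks if x != y]
    start = frozenset(initial)
    seen = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if goal <= state:
            return seen[state]
        for label, op in ops:
            nxt = apply_op(state, op)
            if nxt is not None and frozenset(nxt) not in seen:
                nxt = frozenset(nxt)
                seen[nxt] = seen[state] + [label]
                queue.append(nxt)
    return None

# Example: start with C on A, A and B on the table; goal: A on B on C.
initial = {("handempty",), ("on", "C", "A"), ("ontable", "A"),
           ("ontable", "B"), ("clear", "C"), ("clear", "B")}
goal = {("on", "B", "C"), ("on", "A", "B")}
steps = plan(initial, goal, ["A", "B", "C"])
```

Because the search is breadth-first, the returned plan is a shortest one (here 6 actions: unstack C, put it down, stack B on C, then A on B).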
Theory:
Probabilistic reasoning
The aim of probabilistic reasoning is to combine the capacity of probability theory to handle uncertainty
with the capacity of deductive logic to exploit structure. The result is a richer and more
expressive formalism with a broad range of possible application areas. Probabilistic logics
attempt to find a natural extension of traditional logic truth tables: the results they define are
derived through probabilistic expressions instead. A difficulty with probabilistic logics is that
they tend to multiply the computational complexities of their probabilistic and logical
components. Other difficulties include the possibility of counter-intuitive results, such as
those of Dempster-Shafer theory. The need to deal with a broad variety of contexts and issues
has led to many different proposals.
Probabilistic Reasoning Using Bayesian Learning: The idea of Bayesian learning is to
compute the posterior probability distribution of the target features of a new example
conditioned on its input features and all of the training examples.
Suppose a new case has inputs X=x and has target features, Y; the aim is to compute
P(Y|X=x∧e), where e is the set of training examples. This is the probability distribution of the
target variables given the particular inputs and the examples. The role of a model is to be the
assumed generator of the examples. If we let M be a set of disjoint and covering models, then
reasoning by cases and the chain rule give:
P(Y | x ∧ e) = Σ_{m∈M} P(Y ∧ m | x ∧ e)
             = Σ_{m∈M} P(Y | m ∧ x ∧ e) × P(m | x ∧ e)
             = Σ_{m∈M} P(Y | m ∧ x) × P(m | e).
The first two equalities are theorems from the definition of probability. The last
equality makes two assumptions: the model includes all of the information about the
examples that is necessary for a particular prediction [i.e., P(Y | m ∧x∧e)= P(Y | m ∧x) ], and
the model does not change depending on the inputs of the new example [i.e., P(m|x∧e)= P(m|
e)]. This formula says that we average over the prediction of all of the models, where each
model is weighted by its posterior probability given the examples.
P(m|e) = (P(e|m)×P(m))/(P(e)) .
Thus, the weight of each model depends on how well it predicts the data (the likelihood) and its prior probability. The denominator, P(e), is a normalizing constant that makes the posterior probabilities of the models sum to 1. Computing P(e) can be very difficult when there are many models.
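The model-averaging formula can be illustrated with a small hypothetical example: choosing among three candidate coin biases after observing 8 heads in 10 tosses (the numbers, names and scenario are our own, not from the text):

```python
from math import comb

def posterior(models, prior, likelihood):
    """P(m|e) = P(e|m) * P(m) / P(e) over a finite, disjoint set of models."""
    joint = {m: likelihood[m] * prior[m] for m in models}
    p_e = sum(joint.values())                  # normalizing constant P(e)
    return {m: joint[m] / p_e for m in models}

models = [0.3, 0.5, 0.8]                       # candidate P(heads) values
prior = {m: 1 / 3 for m in models}             # uniform prior P(m)
heads, n = 8, 10                               # the evidence e
# Binomial likelihood P(e|m) of seeing 8 heads in 10 tosses under bias m.
likelihood = {m: comb(n, heads) * m**heads * (1 - m)**(n - heads)
              for m in models}

post = posterior(models, prior, likelihood)
# Bayesian prediction: average each model's prediction, weighted by P(m|e).
p_next_head = sum(m * post[m] for m in models)
```

The model that predicts the data best (bias 0.8) receives most of the posterior weight, so the averaged prediction for the next toss lies close to 0.8.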
go :- hypothesize(Animal),
write('I guess that the animal is: '),
write(Animal),
nl,
undo.
/* hypotheses to be tested */
hypothesize(cheetah) :- cheetah, !.
hypothesize(tiger) :- tiger, !.
hypothesize(giraffe) :- giraffe, !.
hypothesize(zebra) :- zebra, !.
hypothesize(ostrich) :- ostrich, !.
hypothesize(penguin) :- penguin, !.
hypothesize(albatross) :- albatross, !.
hypothesize(unknown). /* no diagnosis */
ostrich :- bird,
verify(does_not_fly),
verify(has_long_neck).
penguin :- bird,
verify(does_not_fly),
verify(swims),
verify(is_black_and_white).
albatross :- bird,
verify(appears_in_story_Ancient_Mariner),
verify(flys_well).
/* classification rules */
mammal :- verify(has_hair), !.
mammal :- verify(gives_milk).
bird :- verify(has_feathers), !.
bird :- verify(flys),
verify(lays_eggs).
carnivore :- verify(eats_meat), !.
carnivore :- verify(has_pointed_teeth),
verify(has_claws),
verify(has_forward_eyes).
ungulate :- mammal,
verify(has_hooves), !.
ungulate :- mammal,
verify(chews_cud).
:- dynamic yes/1,no/1.
domains
disease,indication = symbol
Patient,name = string
predicates
hypothesis(string,disease)
symptom(name,indication)
response(char)
go
clauses
go :-
write("What is the patient's name? "),
readln(Patient),
hypothesis(Patient,Disease),
write(Patient," probably has ",Disease,"."),nl.
go :-
write("Sorry, I don't seem to be able to"),nl,
write("diagnose the disease."),nl.
symptom(Patient,fever) :-
write("Does ",Patient," have a fever (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,rash) :-
write("Does ",Patient," have a rash (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,headache) :-
write("Does ",Patient," have a headache (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,runny_nose) :-
write("Does ",Patient," have a runny nose (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,conjunctivitis) :-
write("Does ",Patient," have conjunctivitis (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,cough) :-
write("Does ",Patient," have a cough (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,body_ache) :-
write("Does ",Patient," have body ache (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,chills) :-
write("Does ",Patient," have chills (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,sore_throat) :-
write("Does ",Patient," have a sore throat (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,sneezing) :-
write("Is ",Patient," sneezing (y/n) ?"),
response(Reply),
Reply='y'.
symptom(Patient,swollen_glands) :-
write("Does ",Patient," have swollen glands (y/n) ?"),
response(Reply),
Reply='y'.
hypothesis(Patient,measles) :-
symptom(Patient,fever),
symptom(Patient,cough),
symptom(Patient,conjunctivitis),
symptom(Patient,runny_nose),
symptom(Patient,rash).
hypothesis(Patient,german_measles) :-
symptom(Patient,fever),
symptom(Patient,headache),
symptom(Patient,runny_nose),
symptom(Patient,rash).
hypothesis(Patient,flu) :-
symptom(Patient,fever),
symptom(Patient,headache),
symptom(Patient,body_ache),
symptom(Patient,conjunctivitis),
symptom(Patient,chills),
symptom(Patient,sore_throat),
symptom(Patient,runny_nose),
symptom(Patient,cough).
hypothesis(Patient,common_cold) :-
symptom(Patient,headache),
symptom(Patient,sneezing),
symptom(Patient,sore_throat),
symptom(Patient,runny_nose),
symptom(Patient,chills).
hypothesis(Patient,mumps) :-
symptom(Patient,fever),
symptom(Patient,swollen_glands).
hypothesis(Patient,chicken_pox) :-
symptom(Patient,fever),
symptom(Patient,chills),
symptom(Patient,body_ache),
symptom(Patient,rash).
hypothesis(Patient,measles) :-
symptom(Patient,cough),
symptom(Patient,sneezing),
symptom(Patient,runny_nose).
response(Reply) :-
readchar(Reply),
write(Reply),nl.