Propositional Logic Hybrid Agent and Logical State
Last Updated: 28 Feb, 2022
Prerequisite: Wumpus World in Artificial Intelligence
To create a hybrid agent for the wumpus world, the ability to deduce various aspects of the state of the world can be combined quite simply with condition–action rules and with problem-solving algorithms. The agent program maintains a knowledge base and a current plan. The initial knowledge base contains the atemporal axioms, those that do not depend on t, such as the axiom relating the breeziness of squares to the presence of pits. At each time step, the new percept sentence is added, along with all the axioms that depend on t, such as the successor-state axioms. (The agent does not need axioms for future time steps, as explained in the next section.) The agent then uses logical inference, by ASKing queries of the knowledge base, to work out which squares are safe and which have yet to be visited.
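To make the ASK step concrete, here is a minimal sketch of entailment checking by enumerating truth assignments. The knowledge base shown is a tiny, illustrative fragment (one breeze axiom plus a percept), not the full wumpus KB; the symbol names `P12`, `P21`, `B11` are assumptions for this example.

```python
from itertools import product

# Minimal truth-table entailment check (ASK) over propositional symbols.
# KB and query are predicates over a model (dict symbol -> bool).

def ask(kb, query, symbols):
    """Return True iff kb entails query, by enumerating all models."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False        # found a model of KB where query fails
    return True

symbols = ["P12", "P21", "B11"]

# Atemporal axiom B11 <=> (P12 v P21), plus the percept "no breeze in [1,1]".
kb = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]

# ASK(KB, ~P12): is [1,2] provably pit-free?
print(ask(kb, lambda m: not m["P12"], symbols))  # True -> [1,2] is safe
```

Truth-table enumeration is exponential in the number of symbols, which is exactly why the article later turns to cached belief states and approximate estimation.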
The main body of the agent program constructs a plan based on a decreasing priority of goals. First, if there is a glitter, the program constructs a plan to grab the gold, follow a route back to the initial location, and climb out of the cave. Otherwise, if there is no current plan, the program plans a route to the closest safe square that it has not visited yet, making sure the route goes through only safe squares. Route planning is done with A* search, not with ASK. If there are no safe squares to explore and the agent still has an arrow, the next step is to try to make a safe square by shooting at one of the possible wumpus locations. These are found by asking where \operatorname{ASK}\left(K B, \neg W_{x, y}\right) is false, that is, where it is not known that there is no wumpus. The function PLAN-SHOT (not shown) uses PLAN-ROUTE to plan a sequence of actions that will line up this shot. If this fails, the program looks for a square to explore that is not provably unsafe, that is, one for which \operatorname{ASK}\left(K B, \neg O K_{x, y}^{t}\right) returns false. If no such square exists, the mission is impossible and the agent retreats to [1, 1] and climbs out of the cave.
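The goal cascade above can be sketched as a plain priority ladder. The state fields and the string "actions" below are hypothetical stand-ins; a real agent would call PLAN-ROUTE (A* search) and PLAN-SHOT rather than return labels.

```python
from dataclasses import dataclass, field

# Sketch of the hybrid agent's decreasing-priority goal cascade.
# Field names and action labels are illustrative assumptions.

@dataclass
class AgentState:
    glitter: bool = False
    plan: list = field(default_factory=list)
    safe_unvisited: set = field(default_factory=set)
    has_arrow: bool = True
    wumpus_candidates: set = field(default_factory=set)
    not_unsafe_unvisited: set = field(default_factory=set)

def choose_plan(s: AgentState) -> list:
    if s.glitter:                        # top priority: grab gold and leave
        return ["Grab", "RouteTo(1,1)", "Climb"]
    if s.plan:                           # otherwise keep the current plan
        return s.plan
    if s.safe_unvisited:                 # explore the nearest safe square
        return ["RouteTo" + repr(sorted(s.safe_unvisited)[0])]
    if s.has_arrow and s.wumpus_candidates:   # try to make a square safe
        return ["PlanShotAt" + repr(sorted(s.wumpus_candidates)[0])]
    if s.not_unsafe_unvisited:           # risk a not-provably-unsafe square
        return ["RouteTo" + repr(sorted(s.not_unsafe_unvisited)[0])]
    return ["RouteTo(1,1)", "Climb"]     # mission impossible: retreat

print(choose_plan(AgentState(glitter=True)))
```

Each branch fires only when all higher-priority conditions fail, mirroring the "diminishing priority of goals" in the text.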
Logic States
The agent program works quite well, but it has one major weakness: the computational expense of the calls to ASK grows exponentially over time. This is because the required inferences have to reach further and further back in time and involve more and more proposition symbols. Obviously, this is unsustainable; we cannot have an agent whose inference time for each percept grows in proportion to the length of its life! What we really need is a constant update time, that is, one that is independent of t. The obvious answer is to save, or cache, the results of inference, so that the inference process at the next time step can build on the results of previous steps instead of starting again from scratch. The past history of percepts and all their ramifications can be replaced by the belief state, that is, some representation of the set of all possible current states of the world.
State estimation is the process of updating the belief state as new percepts arrive. Instead of an explicit list of states as in Section 4.4, we can use a logical sentence involving the proposition symbols associated with the current time step, as well as the atemporal symbols. For instance, the logical sentence
\text { WumpusAlive }^{1} \wedge L_{2,1}^{1} \wedge B_{2,1} \wedge\left(P_{3,1} \vee P_{2,2}\right)
describes the set of all states at time 1 in which the wumpus is alive, the agent is at [2, 1] , that square is breezy, and there is a pit in [3, 1] or [2, 2] , or both.
It turns out that maintaining an exact belief state as a logical formula is not straightforward. If there are n fluent symbols for time t, then there are 2^{n} possible physical states, that is, assignments of truth values to those symbols. The set of belief states is the powerset (the set of all subsets) of the set of physical states, so there are 2^{2^{n}} belief states. Even if we used the most compact possible encoding of logical formulas, with each belief state represented by a unique binary number, we would need numbers with \log _{2}\left(2^{2^{n}}\right)=2^{n} bits to label the current belief state. In other words, exact state estimation may require logical formulas whose size is exponential in the number of symbols.
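The counting argument above is easy to check numerically; the value of n below is arbitrary.

```python
# Combinatorics of exact belief states: with n fluent symbols there are
# 2**n physical states, 2**(2**n) belief states (subsets of the state set),
# and naming one belief state takes log2(2**(2**n)) = 2**n bits.

n = 5                                 # number of fluent symbols (illustrative)
physical_states = 2 ** n              # 32
belief_states = 2 ** physical_states  # 2**32, already over 4 billion
bits_needed = physical_states         # log2(belief_states) = 2**n

print(physical_states, bits_needed)   # 32 32
```

Even for n = 5 the belief-state space has over four billion members, which is why the next paragraph settles for an approximate 1-CNF representation.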
A very common and natural scheme for approximate state estimation is to represent belief states as conjunctions of literals, that is, 1-CNF formulas. Given the belief state at t - 1, the agent program simply tries to prove X^{t} and \neg X^{t} for each symbol X^{t} (as well as for each atemporal symbol whose truth value is not yet known). The conjunction of the provable literals becomes the new belief state, and the previous belief state at t - 1 is discarded.
It is important to note that this scheme may lose some information as time goes along. If the sentence above were the true belief state, then neither P_{3,1} nor P_{2,2} would be provable individually, and neither would appear in the 1-CNF belief state. On the other hand, because every literal in the 1-CNF belief state is proved from the previous belief state, and the initial belief state is a true assertion, the entire 1-CNF belief state must be true. Thus, the set of possible states represented by the 1-CNF belief state includes all states that are in fact possible given the full percept history. The 1-CNF belief state acts as a simple outer envelope, or conservative approximation, to the exact belief state. The idea of conservative approximations to complicated sets is a recurring theme in many areas of AI.
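The 1-CNF procedure, and the information loss it incurs, can be demonstrated directly on the example sentence from the text. The symbol names below encode WumpusAlive^1, L^1_{2,1}, B_{2,1}, P_{3,1}, and P_{2,2}; the entailment check by model enumeration is a brute-force sketch, not an efficient prover.

```python
from itertools import product

# 1-CNF state estimation sketch: from a belief-state sentence, keep only
# the literals that are individually provable; disjunctive information
# (here, P31 v P22) is deliberately discarded.

def entails(sentence, query, symbols):
    """True iff every model of `sentence` also satisfies `query`."""
    models = (dict(zip(symbols, v))
              for v in product([False, True], repeat=len(symbols)))
    return all(query(m) for m in models if sentence(m))

symbols = ["WumpusAlive", "L21", "B21", "P31", "P22"]

# WumpusAlive^1 ^ L^1_{2,1} ^ B_{2,1} ^ (P_{3,1} v P_{2,2})
belief = lambda m: (m["WumpusAlive"] and m["L21"] and m["B21"]
                    and (m["P31"] or m["P22"]))

one_cnf = []
for x in symbols:
    if entails(belief, lambda m, x=x: m[x], symbols):
        one_cnf.append(x)
    elif entails(belief, lambda m, x=x: not m[x], symbols):
        one_cnf.append("~" + x)

print(one_cnf)  # ['WumpusAlive', 'L21', 'B21'] -- the disjunction is lost
```

Note that neither P31 nor ~P31 (nor P22 nor ~P22) makes it into the result, exactly the information loss described above: the 1-CNF formula is a weaker, conservative envelope of the true belief state.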