Fundamentals of AI

UNIT 1: Introduction to Artificial Intelligence

Artificial Intelligence: An Introduction


Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think
and act like humans. It involves the development of algorithms and computer programs that can perform tasks that
typically require human intelligence such as visual perception, speech recognition, decision-making, and language
translation. AI has the potential to revolutionize many industries and has a wide range of applications, from virtual
personal assistants to self-driving cars.

Before turning to the meaning of artificial intelligence, let us first understand what intelligence means.

Intelligence: the ability to learn and solve problems. This definition is taken from Webster's Dictionary.

The most common answer that one expects is "to make computers intelligent so that they can act intelligently!", but the question is: how intelligent? And how can one judge intelligence?

…as intelligent as humans. If the computers can, somehow, solve real-world problems, by improving on their own
from past experiences, they would be called “intelligent”.
Thus, AI systems are more generic (rather than specific), can "think", and are more flexible.

Intelligence, as we know, is the ability to acquire and apply knowledge. Knowledge is the information acquired through experience. Experience is the knowledge gained through exposure (training). Summing the terms up, we get artificial intelligence as the "copy of something natural (i.e., human beings) 'WHO' is capable of acquiring and applying the information it has gained through exposure."

Artificial Intelligence

Intelligence is composed of:

 Reasoning

 Learning

 Problem-Solving

 Perception

 Linguistic Intelligence
Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on
probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics,
philosophy, neuroscience, artificial psychology, and many others.

The main focus of artificial intelligence is towards understanding human behavior and performance. This can be
done by creating computers with human-like intelligence and capabilities. This includes natural language processing,
facial analysis and robotics. The main applications of AI today are in the military, healthcare, and computing; it is expected that AI applications will continue to spread and become part of our everyday lives.

Many theorists believe that computers will one day surpass human intelligence; they'll be able to learn faster, process information more effectively, and make decisions faster than humans. However, this is still a work in progress, as there are many limits on what artificial intelligence has achieved so far. For example, AI systems still struggle with physical tasks such as driving cars or operating heavy machinery, and computers do not perform well in dangerous or extreme environments. Even so, there are many exciting things ahead for artificial intelligence!

Uses of Artificial Intelligence:

Artificial Intelligence has many practical applications across various industries and domains, including:

1. Healthcare: AI is used for medical diagnosis, drug discovery, and predictive analysis of diseases.

2. Finance: AI helps in credit scoring, fraud detection, and financial forecasting.

3. Retail: AI is used for product recommendations, price optimization, and supply chain management.

4. Manufacturing: AI helps in quality control, predictive maintenance, and production optimization.

5. Transportation: AI is used for autonomous vehicles, traffic prediction, and route optimization.

6. Customer service: AI-powered chatbots are used for customer support, answering frequently asked
questions, and handling simple requests.

7. Security: AI is used for facial recognition, intrusion detection, and cybersecurity threat analysis.

8. Marketing: AI is used for targeted advertising, customer segmentation, and sentiment analysis.

9. Education: AI is used for personalized learning, adaptive testing, and intelligent tutoring systems.

This is not an exhaustive list, and AI has many more potential applications in various domains and industries.

Need for Artificial Intelligence

1. To create expert systems that exhibit intelligent behavior, with the capability to learn, demonstrate, explain, and advise their users.

2. Helping machines find solutions to complex problems like humans do and applying them as algorithms in a
computer-friendly manner.

3. Improved efficiency: Artificial intelligence can automate tasks and processes that are time-consuming and
require a lot of human effort. This can help improve efficiency and productivity, allowing humans to focus on
more creative and high-level tasks.

4. Better decision-making: Artificial intelligence can analyze large amounts of data and provide insights that can
aid in decision-making. This can be especially useful in domains like finance, healthcare, and logistics, where
decisions can have significant impacts on outcomes.

5. Enhanced accuracy: Artificial intelligence algorithms can process data quickly and accurately, reducing the
risk of errors that can occur in manual processes. This can improve the reliability and quality of results.
6. Personalization: Artificial intelligence can be used to personalize experiences for users, tailoring
recommendations, and interactions based on individual preferences and behaviors. This can improve
customer satisfaction and loyalty.

7. Exploration of new frontiers: Artificial intelligence can be used to explore new frontiers and discover new
knowledge that is difficult or impossible for humans to access. This can lead to new breakthroughs in fields
like astronomy, genetics, and drug discovery.

Approaches of AI

There are four classical approaches to AI, along with several other commonly used approaches, as follows:

 Acting humanly (The Turing Test approach): This approach was designed by Alan Turing. The ideology
behind this approach is that a computer passes the test if a human interrogator, after asking some written
questions, cannot identify whether the written responses come from a human or from a computer.

 Thinking humanly (The cognitive modeling approach): The idea behind this approach is to determine
whether the computer thinks like a human.

 Thinking rationally (The “laws of thought” approach): The idea behind this approach is to determine
whether the computer thinks rationally i.e. with logical reasoning.

 Acting rationally (The rational agent approach): The idea behind this approach is to determine whether the
computer acts rationally i.e. with logical reasoning.

 Machine Learning approach: This approach involves training machines to learn from data and improve
performance on specific tasks over time. It is widely used in areas such as image and speech recognition,
natural language processing, and recommender systems.

 Evolutionary approach: This approach is inspired by the process of natural selection in biology. It involves
generating and testing a large number of variations of a solution to a problem, and then selecting and
combining the most successful variations to create a new generation of solutions.

 Neural Networks approach: This approach involves building artificial neural networks that are modeled after
the structure and function of the human brain. Neural networks can be used for tasks such as pattern
recognition, prediction, and decision-making.

 Fuzzy logic approach: This approach involves reasoning with uncertain and imprecise information, which is
common in real-world situations. Fuzzy logic can be used to model and control complex systems in areas
such as robotics, automotive control, and industrial automation.

 Hybrid approach: This approach combines multiple AI techniques to solve complex problems. For example, a
hybrid approach might use machine learning to analyze data and identify patterns, and then use logical
reasoning to make decisions based on those patterns.

Applications of AI include Natural Language Processing, Gaming, Speech Recognition, Vision Systems, Healthcare,
Automotive, etc.

Forms of AI:

1) Weak AI:

 Weak AI is an AI that is created to solve a particular problem or perform a specific task.

 It is not a general AI and is only used for specific purpose.

 For example, the AI that was used to beat the chess grandmaster is a weak AI, as it serves only one purpose, but it does so efficiently.
2) Strong AI:

 Strong AI is more difficult to create than weak AI.

 It is a general purpose intelligence that can demonstrate human abilities.

 Human abilities such as learning from experience, reasoning, etc. can be demonstrated by this AI.

3) Super Intelligence

 As stated by leading AI thinker Nick Bostrom, "Superintelligence is an intellect that is much smarter than the best human brains in practically every field".

 It ranges from a machine being just smarter than a human to a machine being a trillion times smarter than a human.

 Super Intelligence is the ultimate power of AI.

An AI system is composed of an agent and its environment. An agent(e.g., human or robot) is anything that can
perceive its environment through sensors and acts upon that environment through effectors. Intelligent agents must
be able to set goals and achieve them. In classical planning problems, the agent can assume that it is the only system
acting in the world, allowing the agent to be certain of the consequences of its actions. However, if the agent is not
the only actor, it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment. Natural language processing gives machines the ability to read and understand human language. Some straightforward applications of natural language processing include information retrieval, text mining, question answering, and machine translation. Machine perception is the ability to use input from sensors (such as cameras and microphones) to deduce aspects of the world, e.g., computer vision. Concepts such as game theory and decision theory necessitate that an agent can detect and model human emotions.

Many times, students get confused between machine learning and artificial intelligence, but machine learning, a fundamental concept of AI research since the field's inception, is the study of computer algorithms that improve automatically through experience. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational
philosophy, and computer science. Computational psychology is used to make computer programs that mimic
human behavior. Computational philosophy is used to develop an adaptive, free-flowing computer mind.
Implementing computer science serves the goal of creating computers that can perform tasks that only people could
previously accomplish.

AI has developed a large number of tools to solve the most difficult problems in computer science, like:

 Search and optimization

 Logic

 Probabilistic methods for uncertain reasoning

 Classifiers and statistical learning methods

 Neural networks

 Control theory

 Languages

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis,
creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines
(such as Google search), virtual assistants (such as Siri), image recognition in photographs, spam filtering, prediction
of judicial decisions and targeted online advertisements. Other applications include healthcare, automotive, finance, video games, etc.

Are there limits to how intelligent machines – or human-machine hybrids – can be? A superintelligence,
hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing
that of the brightest and most gifted human mind. ‘‘Superintelligence’’ may also refer to the form or degree of
intelligence possessed by such an agent.


Drawbacks of Artificial Intelligence:

1. Bias and unfairness: AI systems can perpetuate and amplify existing biases in data and decision-making.

2. Lack of transparency and accountability: Complex AI systems can be difficult to understand and interpret,
making it challenging to determine how decisions are being made.

3. Job displacement: AI has the potential to automate many jobs, leading to job loss and a need for reskilling.

4. Security and privacy risks: AI systems can be vulnerable to hacking and other security threats, and may also
pose privacy risks by collecting and using personal data.

5. Ethical concerns: AI raises important ethical questions about the use of technology for decision-making,
including issues related to autonomy, accountability, and human dignity.

Technologies Based on Artificial Intelligence:

1. Machine Learning: A subfield of AI that uses algorithms to enable systems to learn from data and make
predictions or decisions without being explicitly programmed.

2. Natural Language Processing (NLP): A branch of AI that focuses on enabling computers to understand,
interpret, and generate human language.

3. Computer Vision: A field of AI that deals with the processing and analysis of visual information using
computer algorithms.

4. Robotics: AI-powered robots and automation systems that can perform tasks in manufacturing, healthcare,
retail, and other industries.

5. Neural Networks: A type of machine learning algorithm modeled after the structure and function of the
human brain.

6. Expert Systems: AI systems that mimic the decision-making ability of a human expert in a specific field.

7. Chatbots: AI-powered virtual assistants that can interact with users through text-based or voice-based
interfaces.


Issues of Artificial Intelligence:

Artificial Intelligence has the potential to bring many benefits to society, but it also raises some important issues that
need to be addressed, including:

1. Bias and Discrimination: AI systems can perpetuate and amplify human biases, leading to discriminatory
outcomes.

2. Job Displacement: AI may automate jobs, leading to job loss and unemployment.

3. Lack of Transparency: AI systems can be difficult to understand and interpret, making it challenging to
identify and address bias and errors.

4. Privacy Concerns: AI can collect and process vast amounts of personal data, leading to privacy concerns and
the potential for abuse.

5. Security Risks: AI systems can be vulnerable to cyber attacks, making it important to ensure the security of
AI systems.

6. Ethical Considerations: AI raises important ethical questions, such as the acceptable use of autonomous
weapons, the right to autonomous decision making, and the responsibility of AI systems for their actions.

7. Regulation: There is a need for clear and effective regulation to ensure the responsible development and
deployment of AI.

It’s crucial to address these issues as AI continues to play an increasingly important role in our lives and society.

The Future of AI Technologies:

1. Reinforcement Learning: Reinforcement Learning is an interesting field of Artificial Intelligence that focuses on
training agents to make intelligent decisions by interacting with their environment.

2. Explainable AI: These AI techniques focus on providing insights into how AI models arrive at their conclusions.
3. Generative AI: Through this technique, AI models learn the underlying patterns of their training data and create realistic and novel outputs.

4. Edge AI: Edge AI involves running AI algorithms directly on edge devices, such as smartphones, IoT devices, and autonomous vehicles, rather than relying on cloud-based processing.

5. Quantum AI: Quantum AI combines the power of quantum computing with AI algorithms to tackle complex
problems that are beyond the capabilities of classical computers.

References:

Here are some resources for further reading and learning about Artificial Intelligence:

1. Books:
“Artificial Intelligence: A Modern Approach” by Stuart Russell and Peter Norvig
“Deep Learning” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
“Artificial Intelligence with Python” by Prateek Joshi

2. Websites:
OpenAI (openai.com)
AI Conference (aiconf.org)
AI-Forum (ai-forum.org)
Stanford Artificial Intelligence Laboratory (ai.stanford.edu)

3. Online Courses:
Coursera’s Introduction to Artificial Intelligence (coursera.org/learn/introduction-to-ai)
Udacity’s Artificial Intelligence Nanodegree (udacity.com/course/artificial-intelligence-nanodegree–nd898)
edX’s Artificial Intelligence Fundamentals (edx.org/learn/artificial-intelligence)

Types of Artificial Intelligence


In "Artificial Intelligence", artificial refers to something made by humans rather than occurring naturally, and intelligence means the ability to understand or think. AI is not a system in itself; it is implemented in systems. There are many different types of AI, each with its own strengths and weaknesses.

In this section, we will learn about the types of artificial intelligence, classified on the basis of their capabilities and functionalities. Before diving into the types, it helps to be familiar with the introduction to artificial intelligence covered above.

Types of Artificial Intelligence

There are 7 types of artificial intelligence, divided on the basis of the capabilities and the functionalities of AI.

Based on Capabilities of AI -Type 1

 Narrow AI

 General AI

 Super AI
Based on the Functionality of AI- Type 2

 Reactive Machines

 Limited Memory

 Theory of Mind

 Self-awareness


Based on Capabilities of AI -Type 1

1. Narrow AI: Narrow AI is also known as Weak AI. It is designed and trained on a specific task or a narrow range of tasks. Narrow AI systems are designed and trained for one purpose; they perform their designated tasks well but largely lack the ability to generalize to other tasks. Examples include personal virtual assistants like Alexa or Siri, recommendation systems, image recognition software, and language translation tools.

2. General AI: Also known as Strong AI, it refers to AI systems that have human-level intelligence and the ability to perform a wide variety of tasks. Such systems would be able to understand, learn, and apply knowledge across a wide range of tasks, similar to how a human can adapt to various tasks. General AI remains a theoretical concept; no current AI has achieved this level of intelligence.

3. Super AI: Also known as Superintelligent AI, this is AI that surpasses human intelligence in problem-solving, creativity, and overall ability. A Super AI would develop emotions, desires, needs, and beliefs of its own, and would be able to make decisions and solve problems on its own.
Based on the Functionality of AI- Type 2

1. Reactive Machines: These machines are the most basic sort of AI system. They cannot form memories or use past experiences to inform present decisions; they can only react to currently existing situations, hence "reactive". A well-known example of a reactive machine is IBM's Deep Blue, the chess-playing supercomputer. Because they cannot learn from past experiences, reactive machines are limited to simple rule-based behavior, such as chess-playing programs.

2. Limited Memory: This type comprises machine learning models in which the system derives knowledge from previously learned information, stored data, or events. Unlike reactive machines, limited memory agents learn from the past by observing actions or data fed to them, building experiential knowledge.

3. Theory of Mind: In this sort of AI, decision-making ability would be equal to that of the human mind. While some machines currently exhibit human-like capabilities (voice assistants, for example), none are yet capable of holding conversations at a fully human standard; one missing component is the emotional capacity to sound and behave the way a person would in ordinary conversation. AI systems with a theory of mind would understand and simulate the mental states of other agents. This type of AI is still in development and is not yet practical.

4. Self-Awareness: This involves machines that have human-level consciousness. This type of AI does not currently exist, but it would be the most advanced form of AI known. Such systems would possess consciousness and self-awareness; for now, this remains the stuff of science fiction, not yet a reality.

Agents in Artificial Intelligence


In artificial intelligence, an agent is a computer program or system that is designed to perceive its environment,
make decisions and take actions to achieve a specific goal or set of goals. The agent operates autonomously,
meaning it is not directly controlled by a human operator.

Agents can be classified into different types based on their characteristics, such as whether they are reactive or
proactive, whether they have a fixed or dynamic environment, and whether they are single or multi-agent systems.

 Reactive agents are those that respond to immediate stimuli from their environment and take actions based
on those stimuli. Proactive agents, on the other hand, take initiative and plan ahead to achieve their goals.
The environment in which an agent operates can also be fixed or dynamic. Fixed environments have a static
set of rules that do not change, while dynamic environments are constantly changing and require agents to
adapt to new situations.

 Multi-agent systems involve multiple agents working together to achieve a common goal. These agents may
have to coordinate their actions and communicate with each other to achieve their objectives. Agents are
used in a variety of applications, including robotics, gaming, and intelligent systems. They can be
implemented using different programming languages and techniques, including machine learning and
natural language processing.

Artificial intelligence is defined as the study of rational agents. A rational agent could be anything that makes
decisions, such as a person, firm, machine, or software. It carries out an action with the best outcome after
considering past and current percepts (the agent's perceptual inputs at a given instance). An AI system is composed of
an agent and its environment. The agents act in their environment. The environment may contain other agents.

An agent is anything that can be viewed as:

 Perceiving its environment through sensors and

 Acting upon that environment through actuators


Note: Every agent can perceive its own actions (but not always the effects).

[Figure: Interaction of agents with the environment]

Structure of an AI Agent

To understand the structure of intelligent agents, we should be familiar with architecture and agent programs. Architecture is the machinery that the agent executes on. It is a device with sensors and actuators, for example, a robotic car, a camera, or a PC. An agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of all that an agent has perceived to date) to an action.

Agent = Architecture + Agent Program
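To make this formula concrete, here is a minimal, runnable Python sketch of an agent program that maps a percept sequence to an action. The percepts, actions, and lookup table are hypothetical, invented purely for illustration:

# A minimal sketch of the "Agent = Architecture + Agent Program" idea.
# The percepts and actions below are hypothetical, for illustration only.

class TableDrivenAgent:
    """Agent program: maps the percept sequence seen so far to an action."""

    def __init__(self, table):
        self.table = table          # maps percept-sequence tuples to actions
        self.percepts = []          # percept history

    def __call__(self, percept):
        self.percepts.append(percept)
        # Look up the whole percept sequence; fall back to a no-op.
        return self.table.get(tuple(self.percepts), "NoOp")

# Usage: a two-step table for a toy vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = TableDrivenAgent(table)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right

A full lookup table grows impractically large, which is why the agent classes below replace it with rules, models, goals, and utilities.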

There are many examples of agents in artificial intelligence. Here are a few:

 Intelligent personal assistants: These are agents that are designed to help users with various tasks, such as
scheduling appointments, sending messages, and setting reminders. Examples of intelligent personal
assistants include Siri, Alexa, and Google Assistant.

 Autonomous robots: These are agents that are designed to operate autonomously in the physical world.
They can perform tasks such as cleaning, sorting, and delivering goods. Examples of autonomous robots
include the Roomba vacuum cleaner and the Amazon delivery robot.

 Gaming agents: These are agents that are designed to play games, either against human opponents or other
agents. Examples of gaming agents include chess-playing agents and poker-playing agents.

 Fraud detection agents: These are agents that are designed to detect fraudulent behavior in financial
transactions. They can analyze patterns of behavior to identify suspicious activity and alert authorities.
Examples of fraud detection agents include those used by banks and credit card companies.

 Traffic management agents: These are agents that are designed to manage traffic flow in cities. They can
monitor traffic patterns, adjust traffic lights, and reroute vehicles to minimize congestion. Examples of traffic
management agents include those used in smart cities around the world.
 A software agent has keystrokes, file contents, and received network packets that act as sensors, and displays on the screen, files, and sent network packets that act as actuators.

 A Human-agent has eyes, ears, and other organs which act as sensors, and hands, legs, mouth, and other
body parts act as actuators.

 A Robotic agent has Cameras and infrared range finders which act as sensors and various motors act as
actuators.

[Figure: Characteristics of an agent]

Types of Agents

Agents can be grouped into the following classes based on their degree of perceived intelligence and capability:

 Simple Reflex Agents

 Model-Based Reflex Agents

 Goal-Based Agents

 Utility-Based Agents

 Learning Agent

 Multi-agent systems

 Hierarchical agents

Simple Reflex Agents

Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. Percept
history is the history of all that an agent has perceived to date. The agent function is based on the condition-action
rule. A condition-action rule is a rule that maps a state i.e., a condition to an action. If the condition is true, then the
action is taken, else not. This agent function only succeeds when the environment is fully observable. For simple
reflex agents operating in partially observable environments, infinite loops are often unavoidable. It may be possible
to escape from infinite loops if the agent can randomize its actions.
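As an illustration, a simple reflex agent can be written as a handful of condition-action rules. The sketch below uses the classic two-square vacuum world, a standard textbook toy example that is an assumption here, not taken from this section:

# Simple reflex agent: acts only on the current percept via condition-action rules.
# Toy vacuum world with two squares, A and B (standard textbook example).

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules: if <condition> then <action>.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left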
Problems with simple reflex agents:

 Very limited intelligence.

 No knowledge of non-perceptual parts of the state.

 The condition-action rule table is usually too big to generate and store.

 If there occurs any change in the environment, then the collection of rules needs to be updated.


Model-Based Reflex Agents

It works by finding a rule whose condition matches the current situation. A model-based agent can handle partially
observable environments by the use of a model about the world. The agent has to keep track of the internal
state which is adjusted by each percept and that depends on the percept history. The current state is stored inside
the agent which maintains some kind of structure describing the part of the world which cannot be seen.

Updating the state requires information about:

 How the world evolves independently of the agent.

 How the agent's actions affect the world.
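A minimal Python sketch of this idea follows, again using the toy vacuum world; the internal model and its update rules are illustrative assumptions, not a general transition model:

# Model-based reflex agent: keeps an internal state updated from each percept,
# so it can act sensibly even when the world is only partially observable.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status          # update state from the percept
        if status == "Dirty":
            self.model[location] = "Clean"     # model how Suck changes the world
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                      # model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right (B's status is still unknown)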



Goal-Based Agents

These kinds of agents take decisions based on how far they are currently from their goal(description of desirable
situations). Their every action is intended to reduce their distance from the goal. This allows the agent a way to
choose among multiple possibilities, selecting the one which reaches a goal state. The knowledge that supports its
decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require
search and planning. The goal-based agent’s behavior can easily be changed.

Utility-Based Agents

Agents developed with their end uses (utilities) as building blocks are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough. We may
look for a quicker, safer, cheaper trip to reach a destination. Agent happiness should be taken into consideration.
Utility describes how “happy” the agent is. Because of the uncertainty in the world, a utility agent chooses the action
that maximizes the expected utility. A utility function maps a state onto a real number which describes the
associated degree of happiness.
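The following Python sketch shows an agent choosing the action that maximizes expected utility; the actions, outcome states, probabilities, and utility values are made-up numbers for illustration:

# Utility-based choice: pick the action with the highest expected utility.
# Outcome probabilities and utilities below are illustrative assumptions.

def expected_utility(action, outcomes, utility):
    """Sum of P(state | action) * U(state) over possible outcome states."""
    return sum(p * utility[state] for state, p in outcomes[action].items())

utility = {"arrive_fast": 10, "arrive_slow": 4, "accident": -100}
outcomes = {
    "highway":  {"arrive_fast": 0.90, "accident": 0.10},
    "backroad": {"arrive_slow": 0.99, "accident": 0.01},
}

best = max(outcomes, key=lambda a: expected_utility(a, outcomes, utility))
print(best)  # backroad: 0.99*4 - 0.01*100 = 2.96 vs 0.90*10 - 0.10*100 = -1.0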

Learning Agent

A learning agent in AI is the type of agent that can learn from its past experiences or it has learning capabilities. It
starts to act with basic knowledge and then is able to act and adapt automatically through learning. A learning agent
has mainly four conceptual components, which are:

1. Learning element: It is responsible for making improvements by learning from the environment.

2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

3. Performance element: It is responsible for selecting external action.

4. Problem Generator: This component is responsible for suggesting actions that will lead to new and
informative experiences.

Multi-Agent Systems

These agents interact with other agents to achieve a common goal. They may have to coordinate their actions and
communicate with each other to achieve their objective.

A multi-agent system (MAS) is a system composed of multiple interacting agents that are designed to work together
to achieve a common goal. These agents may be autonomous or semi-autonomous and are capable of perceiving
their environment, making decisions, and taking action to achieve the common objective.

MAS can be used in a variety of applications, including transportation systems, robotics, and social networks. They
can help improve efficiency, reduce costs, and increase flexibility in complex systems. MAS can be classified into
different types based on their characteristics, such as whether the agents have the same or different goals, whether
the agents are cooperative or competitive, and whether the agents are homogeneous or heterogeneous.

 In a homogeneous MAS, all the agents have the same capabilities, goals, and behaviors.

 In contrast, in a heterogeneous MAS, the agents have different capabilities, goals, and behaviors.

This can make coordination more challenging but can also lead to more flexible and robust systems.

Cooperative MAS involves agents working together to achieve a common goal, while competitive MAS involves
agents working against each other to achieve their own goals. In some cases, MAS can also involve both cooperative
and competitive behavior, where agents must balance their own interests with the interests of the group.

MAS can be implemented using different techniques, such as game theory, machine learning, and agent-based
modeling. Game theory is used to analyze strategic interactions between agents and predict their behavior. Machine
learning is used to train agents to improve their decision-making capabilities over time. Agent-based modeling is
used to simulate complex systems and study the interactions between agents.

Overall, multi-agent systems are a powerful tool in artificial intelligence that can help solve complex problems and
improve efficiency in a variety of applications.
Hierarchical Agents

These agents are organized into a hierarchy, with high-level agents overseeing the behavior of lower-level agents.
The high-level agents provide goals and constraints, while the low-level agents carry out specific tasks. Hierarchical
agents are useful in complex environments with many tasks and sub-tasks.

 Hierarchical agents are agents that are organized into a hierarchy, with high-level agents overseeing the
behavior of lower-level agents. The high-level agents provide goals and constraints, while the low-level
agents carry out specific tasks. This structure allows for more efficient and organized decision-making in
complex environments.

 Hierarchical agents can be implemented in a variety of applications, including robotics, manufacturing, and
transportation systems. They are particularly useful in environments where there are many tasks and sub-
tasks that need to be coordinated and prioritized.

 In a hierarchical agent system, the high-level agents are responsible for setting goals and constraints for the
lower-level agents. These goals and constraints are typically based on the overall objective of the system. For
example, in a manufacturing system, the high-level agents might set production targets for the lower-level
agents based on customer demand.

 The low-level agents are responsible for carrying out specific tasks to achieve the goals set by the high-level
agents. These tasks may be relatively simple or more complex, depending on the specific application. For
example, in a transportation system, low-level agents might be responsible for managing traffic flow at
specific intersections.

 Hierarchical agents can be organized into different levels, depending on the complexity of the system. In a
simple system, there may be only two levels: high-level agents and low-level agents. In a more complex
system, there may be multiple levels, with intermediate-level agents responsible for coordinating the
activities of lower-level agents.

 One advantage of hierarchical agents is that they allow for more efficient use of resources. By organizing
agents into a hierarchy, it is possible to allocate tasks to the agents that are best suited to carry them out,
while avoiding duplication of effort. This can lead to faster, more efficient decision-making and better overall
performance of the system.

Overall, hierarchical agents are a powerful tool in artificial intelligence that can help solve complex problems and
improve efficiency in a variety of applications.

Uses of Agents

Agents are used in a wide range of applications in artificial intelligence, including:

 Robotics: Agents can be used to control robots and automate tasks in manufacturing, transportation, and
other industries.

 Smart homes and buildings: Agents can be used to control heating, lighting, and other systems in smart
homes and buildings, optimizing energy use and improving comfort.

 Transportation systems: Agents can be used to manage traffic flow, optimize routes for autonomous
vehicles, and improve logistics and supply chain management.

 Healthcare: Agents can be used to monitor patients, provide personalized treatment plans, and optimize
healthcare resource allocation.

 Finance: Agents can be used for automated trading, fraud detection, and risk management in the financial
industry.

 Games: Agents can be used to create intelligent opponents in games and simulations, providing a more
challenging and realistic experience for players.
 Natural language processing: Agents can be used for language translation, question answering, and chatbots
that can communicate with users in natural language.

 Cybersecurity: Agents can be used for intrusion detection, malware analysis, and network security.

 Environmental monitoring: Agents can be used to monitor and manage natural resources, track climate
change, and improve environmental sustainability.

 Social media: Agents can be used to analyze social media data, identify trends and patterns, and provide
personalized recommendations to users.

Types of Environments in AI
An environment in artificial intelligence is the surroundings of the agent. The agent takes input from the environment through sensors and delivers output to the environment through actuators. There are several types of environments:

 Fully Observable vs Partially Observable

 Deterministic vs Stochastic

 Competitive vs Collaborative

 Single-agent vs Multi-agent

 Static vs Dynamic

 Discrete vs Continuous

 Episodic vs Sequential

 Known vs Unknown

1. Fully Observable vs Partially Observable

 When an agent's sensors can sense or access the complete state of the environment at each point in time, it is said to be a fully observable environment; otherwise, it is partially observable.

 Operating in a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.

 An environment is called unobservable when the agent has no sensors at all.

 Examples:

 Chess – the board is fully observable, and so are the opponent’s moves.

 Driving – the environment is partially observable because what’s around the corner is not known.

2. Deterministic vs Stochastic

 When the agent's current state and selected action completely determine the next state of the environment, the environment is said to be deterministic.

 A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent.

 Examples:

 Chess – there are only a limited number of possible moves for a piece in the current state, and these moves can be determined.

 Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.

3. Competitive vs Collaborative

 An agent is said to be in a competitive environment when it competes against another agent to optimize the
output.

 The game of chess is competitive as the agents compete with each other to win the game which is the
output.

 An agent is said to be in a collaborative environment when multiple agents cooperate to produce the desired
output.

 When multiple self-driving cars are found on the roads, they cooperate with each other to avoid collisions
and reach their destination which is the output desired.

4. Single-agent vs Multi-agent

 An environment consisting of only one agent is said to be a single-agent environment.

 A person left alone in a maze is an example of the single-agent system.

 An environment involving more than one agent is a multi-agent environment.

 The game of football is multi-agent as it involves 11 players in each team.

5. Dynamic vs Static

 An environment that keeps constantly changing itself when the agent is up with some action is said to be
dynamic.

 A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every instant.

 An idle environment with no change in its state is called a static environment.


 An empty house is static as there’s no change in the surroundings when an agent enters.

6. Discrete vs Continuous

 If an environment consists of a finite number of actions that can be deliberated in the environment to obtain
the output, it is said to be a discrete environment.

 The game of chess is discrete as it has only a finite number of moves. The number of moves might vary with
every game, but still, it’s finite.

 The environment in which the actions are performed cannot be numbered i.e. is not discrete, is said to be
continuous.

 Self-driving cars are an example of continuous environments as their actions are driving, parking, etc. which
cannot be numbered.

7. Episodic vs Sequential

 In an Episodic task environment, each of the agent’s actions is divided into atomic incidents or episodes.
There is no dependency between current and previous incidents. In each incident, an agent receives input
from the environment and then performs the corresponding action.

 Example: Consider a pick-and-place robot used to detect defective parts on a conveyor belt. Each time, the robot (agent) makes a decision based only on the current part, i.e., there is no dependency between current and previous decisions.

 In a sequential environment, previous decisions can affect all future decisions. The agent's next action depends on what action it has taken previously and what action it is supposed to take in the future.

 Example:

 Checkers- Where the previous move can affect all the following moves.

8. Known vs Unknown

 In a known environment, the output for all probable actions is given. In an unknown environment, for an agent to make a decision, it has to gain knowledge about how the environment works.
Problem Solving in Artificial Intelligence


 The reflex agents of AI map states directly into actions. When this mapping is too large to store, or cannot easily be performed by the agent, the problem is handed over to a problem-solving domain, which breaks the large problem into smaller subproblems and resolves them one by one. The final integrated actions produce the desired outcome.

 Depending on the problem and its working domain, different types of problem-solving agents are defined and used at an atomic level, without any internal state visible, with a problem-solving algorithm. The problem-solving agent works precisely by defining problems and their possible solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.

 We can also say that a problem-solving agent is a result-driven agent and always focuses on satisfying the
goals.

 There are basically three types of problem in artificial intelligence:

 1. Ignorable: In which solution steps can be ignored.

 2. Recoverable: In which solution steps can be undone.

 3. Irrecoverable: In which solution steps cannot be undone.


 Steps of problem solving in AI: The problems of AI are directly associated with the nature of humans and their activities, so we need a finite number of steps to solve a problem, which makes the work easier for humans.

 These are the following steps which require to solve a problem :

 Problem definition: Detailed specification of inputs and acceptable system solutions.

 Problem analysis: Analyse the problem thoroughly.

 Knowledge Representation: collect detailed information about the problem and define all possible
techniques.

 Problem-solving: Selection of best techniques.

 Components to formulate the associated problem:

 Initial State: the state from which the agent starts, directing it towards the specified goal.

 Actions: the set of all possible actions available to the agent in a given state.

 Transition model: describes what each action does; it integrates the action taken in the previous stage with the state it produces and forwards that state to the next stage.

 Goal test: determines whether the specified goal has been achieved by the state reached through the transition model; once the goal is achieved, the action stops and the process moves on to determining the cost of achieving the goal.

 Path costing: assigns a numeric cost to achieving the goal. It accounts for all hardware, software, and human working costs.
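These components map naturally onto a small class. The Python sketch below frames a toy route-finding problem; the road map and costs are invented for illustration:

# Problem formulation: initial state, actions, transition model, goal test, path cost.
# The toy road map below is an invented example.

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):              # actions available in a state
        return list(self.roads[state])

    def result(self, state, action):       # transition model
        return action                      # action = "drive to <city>"

    def goal_test(self, state):            # have we reached the goal?
        return state == self.goal

    def step_cost(self, state, action):    # used to compute the path cost
        return self.roads[state][action]

roads = {"S": {"A": 3, "D": 2}, "A": {"G": 9}, "D": {"G": 6}, "G": {}}
p = RouteProblem("S", "G", roads)
print(p.actions("S"), p.goal_test("G"))  # ['A', 'D'] True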

UNIT 2: PROBLEM SOLVING
Search Algorithms in AI
Artificial Intelligence is the study of building agents that act rationally. Most of the time, these agents perform some
kind of search algorithm in the background in order to achieve their tasks.

 A search problem consists of:

 A State Space. Set of all possible states where you can be.

 A Start State. The state from where the search begins.

 A Goal Test. A function that looks at the current state and returns whether or not it is the goal state.

 The solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.

 This plan is achieved through search algorithms.

Types of search algorithms:

There are far too many powerful search algorithms to cover in a single section. Instead, this section discusses six of the fundamental search algorithms, divided into two categories: uninformed search and informed search.

Uninformed Search Algorithms:

The search algorithms in this section have no additional information on the goal node other than the one provided in
the problem definition. The plans to reach the goal state from the start state differ only by the order and/or length
of actions. Uninformed search is also called blind search. These algorithms can only generate successor states and distinguish goal states from non-goal states.

The following uninformed search algorithms are discussed in this section.

1. Depth First Search

2. Breadth First Search

3. Uniform Cost Search


Each of these algorithms will have:

 A problem graph, containing the start node S and the goal node G.

 A strategy, describing the manner in which the graph will be traversed to get to G.

 A fringe, which is a data structure used to store all the possible states (nodes) that can be reached from the current states.

 A tree that results from traversing to the goal node.

 A solution plan, which is the sequence of nodes from S to G.

Depth First Search:

Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts
at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as
possible along each branch before backtracking. It uses a last-in, first-out (LIFO) strategy and hence is implemented using a stack.
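A compact, runnable Python sketch of DFS on an adjacency-list graph follows. The graph is a stand-in, not the figure from the example below, but it is chosen so DFS finds the path S -> A -> B -> C -> G:

# Depth-first search using an explicit stack (LIFO fringe).
# The graph is an illustrative stand-in for the worked example's figure.

def dfs(graph, start, goal):
    stack = [(start, [start])]            # fringe holds (node, path-so-far)
    visited = set()
    while stack:
        node, path = stack.pop()          # LIFO: deepest node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push neighbors in reverse so the first-listed child is explored first.
        for nbr in reversed(graph[node]):
            if nbr not in visited:
                stack.append((nbr, path + [nbr]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"], "G": []}
print(dfs(graph, "S", "G"))  # ['S', 'A', 'B', 'C', 'G']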

Example:

Question. Which solution would DFS find to move from node S to node G if run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As DFS traverses the tree “deepest node first”,
it would always pick the deeper branch until it reaches the solution (or it runs out of nodes, and goes to the next
branch). The traversal is shown in blue arrows.

Path: S -> A -> B -> C -> G

Let d = the depth of the search tree (the number of levels of the search tree), and let n^i = the number of nodes in level i.

Time complexity: equivalent to the number of nodes traversed in DFS, T(n) = 1 + n^2 + n^3 + ... + n^d = O(n^d).

Space complexity: equivalent to how large the fringe can get, S(n) = O(n × d).

Completeness: DFS is complete if the search tree is finite, meaning for a given finite search tree, DFS will come up with a solution if it exists.

Optimality: DFS is not optimal, meaning the number of steps in reaching the solution, or the cost spent in reaching it, may be high.

Breadth First Search:

Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the
tree root (or some arbitrary node of a graph, sometimes referred to as a ‘search key’), and explores all of the
neighbor nodes at the present depth prior to moving on to the nodes at the next depth level. It is implemented using
a queue.
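A runnable Python sketch of BFS using a FIFO queue follows. The stand-in graph is the same one used for the DFS sketch, chosen so BFS finds the shallow path S -> D -> G:

# Breadth-first search using a FIFO queue.
# The graph is an illustrative stand-in for the worked example's figure.
from collections import deque

def bfs(graph, start, goal):
    queue = deque([(start, [start])])     # fringe holds (node, path-so-far)
    visited = {start}
    while queue:
        node, path = queue.popleft()      # FIFO: shallowest node first
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append((nbr, path + [nbr]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"], "G": []}
print(bfs(graph, "S", "G"))  # ['S', 'D', 'G']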

Example:
Question. Which solution would BFS find to move from node S to node G if run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As BFS traverses the tree “shallowest node
first”, it would always pick the shallower branch until it reaches the solution (or it runs out of nodes, and goes to the
next branch). The traversal is shown in blue arrows.

Path: S -> D -> G

Let s = the depth of the shallowest solution, and let n^i = the number of nodes in level i.

Time complexity: equivalent to the number of nodes traversed in BFS until the shallowest solution, T(n) = 1 + n^2 + n^3 + ... + n^s = O(n^s).

Space complexity: equivalent to how large the fringe can get, S(n) = O(n^s).


Completeness: BFS is complete, meaning for a given search tree, BFS will come up with a solution if it exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.

Uniform Cost Search:

UCS is different from BFS and DFS because here the costs come into play. In other words, traversing via different
edges might not have the same cost. The goal is to find a path where the cumulative sum of costs is the least.

Cost of a node is defined as:

cost(node) = cumulative cost of all nodes from root

cost(root) = 0
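A runnable Python sketch of UCS using a priority queue (heapq) follows. The stand-in graph's edge costs are assumptions chosen so the cheapest path is S -> A -> B -> G with cost 5, matching the worked example below:

# Uniform cost search: always expand the fringe node with the least cumulative cost.
# Edge costs are illustrative assumptions matching the worked example's result.
import heapq

def ucs(graph, start, goal):
    fringe = [(0, start, [start])]        # (cumulative cost, node, path)
    explored = set()
    while fringe:
        cost, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, cost
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph[node].items():
            if nbr not in explored:
                heapq.heappush(fringe, (cost + step, nbr, path + [nbr]))
    return None

graph = {"S": {"A": 1, "D": 3}, "A": {"B": 2}, "B": {"G": 2},
         "D": {"G": 3}, "G": {}}
print(ucs(graph, "S", "G"))  # (['S', 'A', 'B', 'G'], 5)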

Example:
Question. Which solution would UCS find to move from node S to node G if run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. The cost of each node is the cumulative cost
of reaching that node from the root. Based on the UCS strategy, the path with the least cumulative cost is chosen.
Note that due to the many options in the fringe, the algorithm explores most of them so long as their cost is low, and
discards them when a lower-cost path is found; these discarded traversals are not shown below. The actual traversal
is shown in blue.

Path: S -> A -> B -> G


Cost: 5

Let C = the cost of the optimal solution, and let ε = the minimum arc cost.

Then the effective depth is roughly C/ε.

Time complexity: T(n) = O(n^(C/ε)). Space complexity: S(n) = O(n^(C/ε)).

Advantages:

 UCS is complete if the number of states is finite and there is no loop with zero weight.

 UCS is optimal provided that no arc has a negative cost.

Disadvantages:

 Explores options in every “direction”.

 No information on goal location.


Informed Search Algorithms:

Here, the algorithms have information on the goal state, which helps in more efficient searching. This information is
obtained by something called a heuristic.
In this section, we will discuss the following search algorithms.

1. Greedy Search

2. A* Tree Search

3. A* Graph Search

Search Heuristics: In an informed search, a heuristic is a function that estimates how close a state is to the goal
state. For example: Manhattan distance, Euclidean distance, etc. (The smaller the distance, the closer the goal.) Different
heuristics are used in different informed algorithms discussed below.
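For instance, the two heuristics just mentioned can be written as small Python functions over grid coordinates:

# Two common heuristics for grid problems: Manhattan and Euclidean distance.
# The smaller the value, the closer the state is estimated to be to the goal.
import math

def manhattan(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

def euclidean(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0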

Greedy Search:

In greedy search, we expand the node closest to the goal node. The “closeness” is estimated by a heuristic h(x).

Heuristic: A heuristic h is defined as:

h(x) = estimate of the distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.

Strategy: Expand the node closest to the goal state, i.e., expand the node with the lowest h value.
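A runnable Python sketch of greedy best-first search follows. The heuristic values match the worked example below (h(A)=9, h(D)=5, h(B)=4, h(E)=3, h(G)=0; h(S)=7 is assumed), while the graph edges are assumptions reconstructed from that example:

# Greedy best-first search: always expand the fringe node with the lowest h(x).
# Graph edges reconstructed from the worked example; treat them as assumptions.
import heapq

def greedy_search(graph, h, start, goal):
    fringe = [(h[start], start, [start])]   # ordered by heuristic value only
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            if nbr not in visited:
                heapq.heappush(fringe, (h[nbr], nbr, path + [nbr]))
    return None

graph = {"S": ["A", "D"], "A": [], "D": ["B", "E"], "B": [], "E": ["G"], "G": []}
h = {"S": 7, "A": 9, "D": 5, "B": 4, "E": 3, "G": 0}
print(greedy_search(graph, h, "S", "G"))  # ['S', 'D', 'E', 'G']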

Example:

Question. Find the path from S to G using greedy search. The heuristic value h of each node is given below the name of the node.

Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the lower heuristic cost. Now
from D, we can move to B(h=4) or E(h=3). We choose E with a lower heuristic cost. Finally, from E, we go to G(h=0).
This entire traversal is shown in the search tree below, in blue.

Path: S -> D -> E -> G

Advantage: Works well with informed search problems, with fewer steps to reach a goal.
Disadvantage: Can turn into unguided DFS in the worst case.

A* Tree Search:

A* Tree Search, or simply known as A* Search, combines the strengths of uniform-cost search and greedy search. In
this search, the heuristic is the summation of the cost in UCS, denoted by g(x), and the cost in the greedy search,
denoted by h(x). The summed cost is denoted by f(x).

Heuristic: The following points should be noted with respect to heuristics in A* search.

 Here, h(x) is called the forward cost and is an estimate of the distance of the current node from the goal
node.

 And, g(x) is called the backward cost and is the cumulative cost of a node from the root node.

 A* search is optimal only when for all nodes, the forward cost for a node h(x) underestimates the actual cost
h*(x) to reach the goal. This property of A* heuristic is called admissibility.

Admissibility: h(x) ≤ h*(x) for all nodes x, i.e., the heuristic never overestimates the true cost to the goal.

Strategy: Choose the node with the lowest f(x) value.
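A runnable Python sketch of A* tree search follows. The graph and heuristic values are assumptions reconstructed from the worked example below, and the sketch reproduces its result (path S -> D -> B -> E -> G, cost 7):

# A* tree search: expand the fringe node with the lowest f(x) = g(x) + h(x).
# Graph and h-values reconstructed from the worked example; treat as assumptions.
import heapq

def a_star(graph, h, start, goal):
    fringe = [(h[start], 0, start, [start])]   # (f, g, node, path)
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        for nbr, step in graph[node].items():
            g2 = g + step                      # backward cost g(x)
            heapq.heappush(fringe, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

graph = {"S": {"A": 3, "D": 2}, "A": {}, "D": {"B": 1, "E": 4},
         "B": {"C": 2, "E": 1}, "C": {"G": 4}, "E": {"G": 3}, "G": {}}
h = {"S": 7, "A": 9, "D": 5, "B": 4, "C": 2, "E": 3, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'D', 'B', 'E', 'G'], 7)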

Example:

Question. Find the path to reach from S to G using A* search.



Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at each step, choosing the
node with the lowest sum. The entire work is shown in the table below.

Note that in the fourth set of iterations, we get two paths with equal summed cost f(x), so we expand them both in
the next set. The path with a lower cost on further expansion is the chosen path.

Path                    h(x)   g(x)    f(x)

S                       7      0       7
S -> A                  9      3       12
S -> D                  5      2       7
S -> D -> B             4      2+1=3   7
S -> D -> E             3      2+4=6   9
S -> D -> B -> C        2      3+2=5   7
S -> D -> B -> E        3      3+1=4   7
S -> D -> B -> C -> G   0      5+4=9   9
S -> D -> B -> E -> G   0      4+3=7   7

Path: S -> D -> B -> E -> G


Cost: 7

A* Graph Search:

 A* tree search works well, except that it takes time re-exploring the branches it has already explored. In
other words, if the same node has expanded twice in different branches of the search tree, A* search might
explore both of those branches, thus wasting time

 A* Graph Search, or simply Graph Search, removes this limitation by adding this rule: do not expand the
same node more than once.

 Heuristic. Graph search is optimal only when the forward cost between two successive nodes A and B, given by h(A) – h(B), is less than or equal to the backward cost between those two nodes, g(A -> B). This property of the graph search heuristic is called consistency.

Consistency: h(A) – h(B) ≤ g(A -> B) for every pair of successive nodes A and B.
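A runnable Python sketch of A* graph search follows: it is identical to the tree-search sketch above except for the closed set that enforces the "expand each node at most once" rule. The graph and heuristic values are the same reconstructed assumptions:

# A* graph search: same as A* tree search, but never expand a node twice.
import heapq

def a_star_graph(graph, h, start, goal):
    fringe = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()                             # nodes already expanded
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in closed:
            continue                           # the graph-search rule
        closed.add(node)
        for nbr, step in graph[node].items():
            if nbr not in closed:
                g2 = g + step
                heapq.heappush(fringe, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

graph = {"S": {"A": 3, "D": 2}, "A": {}, "D": {"B": 1, "E": 4},
         "B": {"C": 2, "E": 1}, "C": {"G": 4}, "E": {"G": 3}, "G": {}}
h = {"S": 7, "A": 9, "D": 5, "B": 4, "C": 2, "E": 3, "G": 0}
print(a_star_graph(graph, h, "S", "G"))  # (['S', 'D', 'B', 'E', 'G'], 7)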

Example:

Question. Use graph searches to find paths from S to G in the following graph.
Solution. We solve this question much the same way as the last one, but in this case, we keep track of the nodes explored so that we don't re-explore them.

Path: S -> D -> B -> E -> G


Cost: 7

Difference between Informed and Uninformed Search in AI


What is an Informed Search in AI?

Informed search algorithms have information on the goal state, which helps in more efficient searching. This information is obtained by a function that estimates how close a state is to the goal state. Informed search in AI is a type of search algorithm that uses additional information to guide the search process, allowing for more efficient problem-solving compared to uninformed search algorithms. This information can be in the form of heuristics, estimates of cost, or other relevant data used to prioritize which states to expand and explore. Examples of informed search algorithms include A* search, Best-First search, and Greedy search.
Here are some key features of informed search algorithms in AI:

 Use of Heuristics – informed search algorithms use heuristics, or additional information, to guide the search
process and prioritize which nodes to expand.

 More efficient – informed search algorithms are designed to be more efficient than uninformed search
algorithms, such as breadth-first search or depth-first search, by avoiding the exploration of unlikely paths
and focusing on more promising ones.

 Goal-directed – informed search algorithms are goal-directed, meaning that they are designed to find a
solution to a specific problem.

 Cost-based – informed search algorithms often use cost-based estimates to evaluate nodes, such as the
estimated cost to reach the goal or the cost of a particular path.

 Prioritization – informed search algorithms prioritize which nodes to expand based on the additional
information available, often leading to more efficient problem-solving.

 Optimality – informed search algorithms may guarantee an optimal solution if the heuristics used are admissible (never overestimating the actual cost) and consistent (the heuristic never decreases by more than the actual cost of a move, i.e., h(A) ≤ cost(A, B) + h(B)).

What is an Uninformed Search in AI?

Uninformed search algorithms have no additional information on the goal node other than that provided in the problem definition.
The plans to reach the goal state from the start state differ only by the order and length of actions. Uninformed
search in AI refers to a type of search algorithm that does not use additional information to guide the search
process. Instead, these algorithms explore the search space in a systematic, but blind, manner without considering
the cost of reaching the goal or the likelihood of finding a solution. Examples of uninformed search algorithms
include Breadth-First search (BFS), Depth-First search (DFS), and Depth-Limited search.

Uninformed search algorithms are often used as a starting point for more complex, informed search algorithms or as
a way to explore the search space in simple problems. However, in complex problems with large search spaces,
uninformed search algorithms may be inefficient and lead to an exponential increase in the number of states
explored.

Here are some key features of uninformed search algorithms in AI:

 Systematic exploration – uninformed search algorithms explore the search space systematically, either by
expanding all children of a node (e.g. BFS) or by exploring as deep as possible in a single path before
backtracking (e.g. DFS).

 No heuristics – uninformed search algorithms do not use additional information, such as heuristics or cost
estimates, to guide the search process.

 Blind search – uninformed search algorithms do not consider the cost of reaching the goal or the likelihood
of finding a solution, leading to a blind search process.

 Simple to implement – uninformed search algorithms are often simple to implement and understand,
making them a good starting point for more complex algorithms.

 Inefficient in complex problems – uninformed search algorithms can be inefficient in complex problems
with large search spaces, leading to an exponential increase in the number of states explored.

 Not guaranteed to find optimal solution – uninformed search algorithms do not guarantee an optimal solution, as they do not consider the cost of reaching the goal or other relevant information.

Informed Search vs. Uninformed Search is summarized in the following table:


Parameters: Informed Search vs. Uninformed Search

 Known as – Informed search is also known as Heuristic Search; uninformed search is also known as Blind Search.

 Using knowledge – Informed search uses knowledge for the searching process; uninformed search doesn't.

 Performance – Informed search finds a solution more quickly; uninformed search finds a solution more slowly.

 Completion – Informed search may or may not be complete; uninformed search is always complete.

 Cost factor – In informed search the cost is low; in uninformed search the cost is high.

 Time – Informed search consumes less time because of quick searching; uninformed search consumes moderate time because of slow searching.

 Direction – In informed search there is a direction given about the solution; in uninformed search no suggestion is given regarding the solution.

 Implementation – Informed search is less lengthy to implement; uninformed search is more lengthy.

 Efficiency – Informed search is more efficient, as efficiency takes into account cost and performance: the incurred cost is less and the speed of finding solutions is quick. Uninformed search is comparatively less efficient, as the incurred cost is more and the speed of finding the solution is slow.

 Computational requirements – Informed search lessens computational requirements; uninformed search has comparatively higher computational requirements.

 Size of search problems – Informed search has a wide scope in terms of handling large search problems; for uninformed search, solving a massive search task is challenging.

 Examples of algorithms – Informed: Greedy Search, A* Search, AO* Search, Hill Climbing Algorithm. Uninformed: Depth First Search (DFS), Breadth First Search (BFS), Branch and Bound.

Heuristic Evaluation
The need for Heuristic Evaluation :

Heuristic Evaluation is a thorough evaluation/assessment in which experts in a particular domain measure the usability of a user interface. Usability can be defined as how easily a specific user can use a particular design or interface without facing any problems. In general, Heuristic Evaluation is performed to detect issues in the design of a product. It also identifies ways to resolve those issues and meet user expectations.

Heuristic Evaluation is an in-depth usability test performed by experts. It is well known that the better the usability, the more users will interact with the product. Jakob Nielsen and Rolf Molich are web usability pioneers who published an article in 1990 containing a set of heuristics. A heuristic can be defined as a fast and practical way to approach a problem and make effective decisions to solve it. Experts use the heuristics approach to systematically evaluate the user experience (UX) design.

When to conduct Heuristic Evaluation :

There is no fixed rule about when to perform Heuristic Evaluation; it can be performed at any stage of the design process. Most of the time, heuristic evaluation is performed after paper prototyping and usability testing. As Heuristic Evaluation helps to optimize the design of the user interface, it is important to perform it when evaluating the final design.

How to conduct Heuristic Evaluation :

Define the Scope of Evaluation –


Mentioning the budget and deadline becomes very important at the time of evaluation. One should also define the
different parameters where they want to conduct the usability test.

Know the End-User –


As we know, different groups of people have different expectations from a product. So it becomes very important to
know the end-user and their interest.

Choose your Set of Heuristics –


Without a proper set of heuristics, the Heuristic Evaluation will produce unreliable and useless results, because the evaluators will not all be using the same guidelines.

Setting-up an Evaluation System and Identifying Issues –


Decide the different categories into which a problem should be classified, such as critical issue, minor issue, etc. Evaluators must follow the guidelines of the evaluation system.

Analyze and Summarize the Results –


It is necessary to analyze the issues present in the design of the user interface and solve those issues before the deadline.

Advantages :

 Reveals many hidden usability problems.

 It helps to determine the overall user experience.

 Heuristics evaluation can be combined with usability testing.


 Better Heuristics Evaluation helps to engage more users.

 It is cheaper and faster than conducting full-blown usability testing.

Disadvantages :

 Sometimes it is a bit hard for even experts to figure out some problems.

 It becomes hard to find experts to conduct the Heuristics Evaluation.

 Several expert evaluators are needed for reliable results; with too few, it may be easier to stick with usability testing.

 Flaws in design will affect the engagement of users in the product.

 Heuristics testing depends on the expertise level of only a few experts.


UNIT 3: GAME PLAYING AND CSP
Game Theory
Game Theory is a topic in competitive programming that involves a certain type of problem, where there are some
players who play a game based on given rules and the task is often to find the winner or the winning moves. Game
Theory is often asked in short contests with a mixture of other topics like range querying or greedy or dynamic
programming.

Game Theory for Competitive Programming

 Objectives of Game Theory for Competitive Programming:

 1. Game states:

 2. Winning and Losing states:

 3. Nim game:

 4. Misère game:

 5. Sprague–Grundy theorem:

 6. Grundy numbers:

 7. Subgames:

 8. Grundy’s game:

 Practice Problems on Game Theory

Objectives of Game Theory for Competitive Programming:

 Here we will focus on two-player games that do not contain random elements.

 Our goal is to find a strategy we can follow to win the game no matter what the opponent does if such a
strategy exists.

 We deal with game theory, or combinatorial game theory, in which we have perfect information (that is, no randomization like a coin toss): the game rules, the players' turns, the minimum and maximum quantities involved in the problem statement, and its conditions and constraints are all known.

 There are three possible cases/states: win, loss, or tie.

 A terminal condition is well-defined/ specified clearly.


E.g., the player who picks the last coin wins the game, or the player who picks the second-to-last coin wins the game, or something similar.

 It is assumed that the game will end at some point after a fixed number of moves. Unlike chess, where an unlimited number of moves is possible (especially when you are left with only the king), adding an extra constraint that says the game must end within 'n' moves gives a terminal condition. This is the kind of assumption game theory looks for.

 It turns out that there is a general strategy for such games, and we can analyze the games using the nim
theory.

 Initially, we will analyze simple games where players remove sticks from heaps, and after this, we will
generalize the strategy used in those games to other games.
1. Game states:

Let us consider a game where there is initially a heap of n-sticks. Players A and B move alternately, and player A
begins. On each move, the player has to remove 1, 2, or 3 sticks from the heap, and the player who removes the last
stick wins the game.

For example, if n = 10, the game may proceed as follows:

 A → removes 2 sticks (8 sticks left).

 B → removes 3 sticks (5 sticks left).

 A → removes 1 stick (4 sticks left).

 B → removes 2 sticks (2 sticks left).

 A → removes 2 sticks and wins

This game consists of states 0, 1, 2,…, n, where the number of the state corresponds to the number of sticks left.

A few examples of Game states are:

 Tic Tac Toe = Tic Tac Toe is a classic two-player game where the players take turns placing either X or O in a
3×3 grid until one player gets three in a row horizontally, vertically, or diagonally, or all spaces on the board
are filled.

 Rock-Paper-Scissors = Rock-Paper-Scissors is a simple two-player game where each player simultaneously chooses one of three options (rock, paper, scissors). The winner is determined by a set of rules: rock beats scissors, scissors beats paper, and paper beats rock.

2. Winning and Losing states:

A winning state is a state where the player will win the game if they play optimally, and a Losing state is a state
where the player will lose the game if the opponent plays optimally. It turns out that we can classify all states of a
game so that each state is either a winning state or a losing state.

Let’s consider the above game:

In the above game, state 0 is clearly a losing state because the player cannot make any moves.

 States 1, 2, and 3 are winning states because we can remove 1, 2, or 3 sticks and win the game.

 State 4, in turn, is a losing state, because any move leads to a state that is a winning state for the opponent.

More generally, if there is a move that leads from the current state to a losing state, the current state is a winning
state, and otherwise, the current state is a losing state.
Using this observation, we can classify all states of a game starting with losing states where there are no possible
moves.
The states 0…15 of the above game can be classified as follows (W denotes a winning state and L denotes a losing
state):

States 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Result L W W W L W W W L W W W L W W W

It is easy to analyze this game: A state k is a losing state if k is divisible by 4, and otherwise, it is a winning state.
An optimal way to play the game is to always choose a move after which the number of sticks in the heap is divisible
by 4.
Finally, there are no sticks left and the opponent has lost.
Of course, this strategy requires that the number of sticks is not divisible by 4 when it is our move. If it is, there is
nothing we can do, and the opponent will win the game if they play optimally.
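
This classification rule translates directly into a few lines of Python; the following sketch reproduces the W/L table above for the 1-2-3 stick game:

 Python3

def classify(n_max, moves=(1, 2, 3)):
    # A state is winning iff some move leads to a losing state
    result = {0: 'L'}               # no sticks left: the player to move has lost
    for n in range(1, n_max + 1):
        result[n] = 'W' if any(result[n - m] == 'L'
                               for m in moves if m <= n) else 'L'
    return result

print(classify(15))
# states divisible by 4 come out 'L', all others 'W'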

Examples:

Basketball = In basketball, a winning state is when a team scores more points than their opponent at the end of the
game, while a losing state is when a team scores fewer points than their opponent.

Chess = In chess, a winning state is when a player checkmates their opponent’s king, while a losing state is when a
player’s king is checkmated.

State graph:

Let us now consider another stick game, where in each state k, it is allowed to remove any number x of sticks such
that x is smaller than k and divides k.

For example, in state 8 we may remove 1, 2 or 4 sticks, but in state 7 the only allowed move is to remove 1 stick.
The following picture shows the states 1…9 of the game as a state graph, whose nodes are the states and edges are
the moves between them:

[Figure: the states 1…9 of the game as a state graph]
The final state in this game is always state 1, which is a losing state because there are no valid moves. The
classification of states 1…9 is as follows:

1 2 3 4 5 6 7 8 9

L W L W L W L W L

Surprisingly, in this game, all even-numbered states are winning states, and all odd-numbered states are losing states.
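
The same classification loop, with the move set changed to the divisors of k, confirms this pattern:

 Python3

def classify_divisor_game(n_max):
    # In state k one may remove any x < k such that x divides k
    result = {1: 'L'}               # state 1 has no valid moves
    for k in range(2, n_max + 1):
        result[k] = 'W' if any(result[k - x] == 'L'
                               for x in range(1, k) if k % x == 0) else 'L'
    return result

print(classify_divisor_game(9))   # even states 'W', odd states 'L'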

3. Nim game:

The nim game is a simple game that has an important role in game theory because many other games can be played
using the same strategy.
First, we focus on nim, and then we generalize the strategy to other games.
There are n heaps in nim, and each heap contains some number of sticks.
The players move alternately, and on each turn, the player chooses a heap that still contains sticks and removes any
number of sticks from it.
The winner is the player who removes the last stick.

The states in nim are of the form [x1, x2,…, xn], where xk denotes the number of sticks in heap k.
For example, A[] = [10,12,5]

It is a game where there are three heaps with 10, 12 and 5 sticks.
The state [0,0,…,0] is a losing state, because it is not possible to remove any sticks, and this is always the final state.

Analysis:

It turns out that we can easily classify any nim state by calculating the nim sum s = x1 ⊕ x2 ⊕··· ⊕ xn, where ⊕ is
the xor operation.
The states whose nim sum is 0 are losing states, and all other states are winning states.
For example, the nim sum of [10,12,5] is 10⊕12⊕5 = 3, so the state is a winning state.

Losing states:

The final state [0,0,…,0] is a losing state, and its nim sum is 0, as expected.
In other losing states, any move leads to a winning state, because when a single value xk changes, the nim sum also
changes, so the nim sum is different from 0 after the move.

Winning states:

We can move to a losing state if there is any heap k for which xk ⊕ s < xk.
In this case, we can remove sticks from heap k so that it will contain xk ⊕ s sticks, which will lead to a losing state.

There is always such a heap, where xk has a one bit at the position of the leftmost one bit of s.

As an example:

consider the state [10,12,5].

This state is a winning state because its nim sum is 3.


Thus, there has to be a move that leads to a losing state. Next, we will find out such a move.

The nim sum of the state is as follows:


10 1010
12 1100
5 0101

3 0011

In this scenario, the heap with 10 sticks is the only heap that has a one bit at the position of the leftmost one bit of
the nim sum:

10 1010
12 1100
5 0101

3 0011

The new size of the heap has to be 10⊕ 3 = 9, so we will remove just one stick. After this, the state will
be [9,12,5], which is a losing state:

9 1001
12 1100
5 0101

0 0000
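
The search for such a winning move can be written as a short sketch: compute the nim sum, then find a heap whose size decreases when XORed with it.

 Python3

from functools import reduce
from operator import xor

def nim_move(heaps):
    # Returns the heap sizes after a winning move, or None in a losing state
    s = reduce(xor, heaps)
    if s == 0:
        return None
    for k, x in enumerate(heaps):
        if x ^ s < x:               # heap with a one bit at s's leftmost one bit
            new_heaps = list(heaps)
            new_heaps[k] = x ^ s    # reduce heap k so the nim sum becomes 0
            return new_heaps

print(nim_move([10, 12, 5]))        # [9, 12, 5], as derived above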

4. Misère game:

In a misère game, the goal of the game is the opposite, so the player who removes the last stick loses the game.
It turns out that the misère nim game can be optimally played almost like the standard nim game.
The idea is to first play the misère game like the standard game, but change the strategy at the end of the game.

The new strategy will be introduced in a situation where each heap would contain at most one stick after the next
move.
In the standard game, we should choose a move after which there is an even number of heaps with one stick.

However, in the misère game, we choose a move so that there is an odd number of heaps with one stick.

This strategy works because a state where the strategy changes always appears in the game, and this state is a winning state because it contains exactly one heap that has more than one stick, so the nim sum is not 0.

5. Sprague–Grundy theorem:

The Sprague–Grundy theorem generalizes the strategy used in nim to all games that fulfil the following
requirements:

 Two players move alternately.

 The game consists of states, and the possible moves in a state do not depend on whose turn it is.

 The game ends when a player cannot make a move.

 The game surely ends sooner or later.


 The players have complete information about the states and allowed moves, and there is no randomness in
the game.

For more detail you can refer to this article ( Combinatorial Game Theory | (Sprague – Grundy Theorem) )

The idea is to calculate for each game state a Grundy number that corresponds to the number of sticks in a nim
heap. When we know the Grundy numbers of all states, we can play the game like the nim game.

6. Grundy numbers:

The Grundy number of a game state is mex({g1, g2,…, gn}).

where g1, g2,…, gn are the Grundy numbers of the states to which we can move, and the mex function gives the
smallest non-negative number that is not in the set.

For example:

mex({0,1,3}) = 2. If there are no possible moves in a state, its Grundy number is 0, because mex(∅) = 0.

For example, in the state graph:

[Figure: the state graph]

The Grundy numbers are as follows:

[Figure: the state graph with Grundy numbers]

The Grundy number of a losing state is 0, and the Grundy number of a winning state is a positive number.

The Grundy number of a state corresponds to the number of sticks in a nim heap. If the Grundy number is 0, we can
only move to states whose Grundy numbers are positive, and if the Grundy number is x > 0, we can move to states
whose Grundy numbers include all numbers 0,1,…, x−1.
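
A small sketch of the mex and Grundy-number computation, applied here to the divisor stick game from the state-graph example (remove any x < k with x dividing k):

 Python3

def mex(s):
    # Smallest non-negative integer not in the set s
    g = 0
    while g in s:
        g += 1
    return g

def grundy(k, memo={1: 0}):   # state 1 has no moves, so grundy(1) = mex(set()) = 0
    if k not in memo:
        memo[k] = mex({grundy(k - x) for x in range(1, k) if k % x == 0})
    return memo[k]

print({k: grundy(k) for k in range(1, 10)})
# {1: 0, 2: 1, 3: 0, 4: 2, 5: 0, 6: 1, 7: 0, 8: 3, 9: 0}
# losing states (odd k) get Grundy number 0, winning states a positive number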

As an example, consider a game where the players move a figure in a maze.

 Each square in the maze is either a floor or a wall.


 On each turn, the player has to move the figure some number of steps left or up.

 The winner of the game is the player who makes the last move.

The following picture shows a possible initial state of the game, where @ denotes the figure and # denotes a square
where it can move.

[Figure: a possible initial state of the maze game]

The states of the game are all floor squares of the maze. In the above maze, the Grundy numbers are as follows:

[Figure: the Grundy numbers of the floor squares]

Thus, each state of the maze game corresponds to a heap in the nim game. For example, the Grundy number for the
lower-right square is 2, so it is a winning state.
We can reach a losing state and win the game by moving either four steps left or two steps up.
Note that unlike in the original nim game, it may be possible to move to a state whose Grundy number is larger than
the Grundy number of the current state.
However, the opponent can always choose a move that cancels such a move, so it is not possible to escape from a
losing state.

7. Subgames:
Next, we will assume that our game consists of subgames, and on each turn, the player first chooses a subgame and then makes a move in it. The game ends when it is not possible to make any move in any subgame.
In this case, the Grundy number of a game is the nim sum of the Grundy numbers of the subgames.
The game can be played like a nim game by calculating all Grundy numbers for subgames and then their nim sum.

As an example, consider a game that consists of three mazes. In this game, on each turn, the player chooses one of
the mazes and then moves the figure in the maze. Assume that the initial state of the game is as follows:

[Figure: the initial state of the three-maze game]

The Grundy numbers for the mazes are as follows

[Figure: the Grundy numbers for the three mazes]

In the initial state, the nim sum of the Grundy numbers is 2⊕3⊕3 = 2, so the first player can win the game.
One optimal move is to move two steps up in the first maze, which produces the nim sum 0⊕3⊕3 = 0.

8. Grundy’s game:

Sometimes a move in a game divides the game into subgames that are independent of each other.
In this case, the Grundy number of the game is mex({g1, g2,…, gn}),

where n is the number of possible moves and gk = ak,1 ⊕ ak,2 ⊕…⊕ ak,m,

where move k generates subgames with Grundy numbers ak,1,ak,2,…,ak,m.

An example of such a game is Grundy’s game. Initially, there is a single heap that contains n sticks.

On each turn, the player chooses a heap and divides it into two nonempty heaps such that the heaps are of different
size. The player who makes the last move wins the game.
Let f (n) be the Grundy number of a heap that contains n sticks. The Grundy number can be calculated by going
through all ways to divide the heap into two heaps.

For example, when n = 8, the possibilities are 1+7, 2+6 and 3+5, so f(8) = mex({f (1)⊕ f (7), f (2)⊕ f (6), f (3)⊕ f (5)}).

In this game, the value of f (n) is based on the values of f (1),…, f (n−1). The base cases are f (1) = f (2) = 0, because it
is not possible to divide the heaps of 1 and 2 sticks. The first Grundy numbers are:

f (1) = 0
f (2) = 0
f (3) = 1
f (4) = 0
f (5) = 2
f (6) = 1
f (7) = 0
f (8) = 2

The Grundy number for n = 8 is 2, so it is possible to win the game. The winning move is to create heaps 1+7 because f
(1)⊕ f (7) = 0.
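
The recurrence for f(n) can be checked with a direct sketch:

 Python3

def mex(s):
    g = 0
    while g in s:
        g += 1
    return g

def f(n, memo={1: 0, 2: 0}):
    # Split n into a + (n - a) with a != n - a; the Grundy number of a
    # move is the XOR of the two subgame Grundy numbers
    if n not in memo:
        memo[n] = mex({f(a) ^ f(n - a)
                       for a in range(1, n // 2 + 1) if a != n - a})
    return memo[n]

print([f(n) for n in range(1, 9)])   # [0, 0, 1, 0, 2, 1, 0, 2]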

Optimal Decision Making in Games


Humans’ intellectual capacities have been engaged by games for as long as civilization has existed, sometimes to an
alarming degree. Games are an intriguing subject for AI researchers because of their abstract character. A game’s
state is simple to depict, and actors are usually limited to a small number of actions with predetermined results.
Physical games, such as croquet and ice hockey, contain significantly more intricate descriptions, a much wider
variety of possible actions, and rather ambiguous regulations defining the legality of activities. With the exception of
robot soccer, these physical games have not piqued the AI community’s interest.

Games are usually intriguing because they are difficult to solve. Chess, for example, has an average branching factor of around 35, and games frequently stretch to 50 moves per player, so the search tree has roughly 35^100 or 10^154 nodes (despite the search graph having "only" about 10^40 unique nodes). As a result, games, like the real world, necessitate the ability to make some sort of decision even when calculating the best option is impossible.

Inefficiency is also heavily punished in games. Whereas a half-efficient implementation of A* search will merely take twice as long to complete, a chess program that is half as efficient in using its available time will almost certainly be beaten, all other factors being equal. As a result of this pressure, a number of intriguing ideas for making the best use of time have emerged.

Optimal Decision Making in Games

Let us start with games with two players, whom we’ll refer to as MAX and MIN for obvious reasons. MAX is the first
to move, and then they take turns until the game is finished. At the conclusion of the game, the victorious player
receives points, while the loser receives penalties. A game can be formalized as a type of search problem that has
the following elements:

 S0: The initial state of the game, which describes how it is set up at the start.

 Player (s): Defines which player in a state has the move.

 Actions (s): Returns a state’s set of legal moves.

 Result (s, a): A transition model that defines a move’s outcome.

 Terminal-Test (s): A terminal test that returns true if the game is over but false otherwise. Terminal states
are those in which the game has come to a conclusion.
 Utility (s, p): A utility function (also known as a payoff function or objective function) determines the final numeric value for a game that concludes in terminal state s for player p. In chess, the outcome is a win, loss, or draw, with values +1, 0, or 1/2. Backgammon's payoffs range from 0 to +192, but certain games have a greater range of possible outcomes. A zero-sum game is defined (confusingly) as one in which the total payoff to all players is the same for every instance of the game. Chess is zero-sum because each game has a total payoff of 0 + 1, 1 + 0, or 1/2 + 1/2. "Constant-sum" would have been a preferable name, but zero-sum is the usual term and makes sense if each participant is charged an entry fee of 1/2.

The game tree for the game is defined by the beginning state, ACTIONS function, and RESULT function—a tree in
which the nodes are game states and the edges represent movements. The figure below depicts a portion of the tic-
tac-toe game tree (noughts and crosses). MAX may make nine different moves from the starting position. The game alternates between MAX placing an X and MIN placing an O until we reach leaf nodes corresponding to terminal states, such as one player having three in a row or all of the squares being filled. The utility value of the terminal state from the perspective of MAX is shown by the number on each leaf node; high values are taken to be good for MAX and bad for MIN.

The game tree for tic-tac-toe is relatively short, with just 9! = 362,880 terminal nodes. However, because there are
over 10^40 nodes in chess, the game tree is better viewed as a theoretical construct that cannot be realized in the
actual world. But, no matter how big the game tree is, MAX’s goal is to find a solid move. A tree that is superimposed
on the whole game tree and examines enough nodes to allow a player to identify what move to make is referred to
as a search tree.

A sequence of actions leading to a goal state (a terminal state that is a win) would be the best solution in a typical search problem. In an adversarial search, MIN has something to say about it. MAX must therefore devise a contingent strategy that specifies MAX's move in the initial state, then MAX's moves in the states resulting from every possible MIN response, then MAX's moves in the states resulting from every possible MIN reaction to those moves, and so on. This is quite similar to the AND-OR search method, with MAX acting as OR and MIN acting as AND. An optimal strategy, when playing an infallible opponent, produces results that are at least as good as any other strategy. We'll start by demonstrating how to find the best strategy.

We’ll move to the trivial game in the figure below since even a simple game like tic-tac-toe is too complex for us to
draw the full game tree on one page. MAX’s root node moves are designated by the letters a1, a2, and a3. MIN’s
probable answers to a1 are b1, b2, b3, and so on. This game is over after MAX and MIN each make one move. (In game terms, this tree is one move deep, consisting of two half-moves, each of which is called a ply.) The terminal states in this game have utility values ranging from 2 to 14.
Game’s Utility Function

The optimal strategy can be found from the minimax value of each node n, which we write as MINIMAX(n), given a game tree. The minimax value of a node is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. The minimax value of a terminal state is just its utility. Furthermore, given the option, MAX prefers to move to a state of maximum value, whereas MIN prefers a state of minimum value. So we have:

MINIMAX(s) =
  UTILITY(s)                                          if Terminal-Test(s) is true
  max over a in Actions(s) of MINIMAX(RESULT(s, a))   if Player(s) = MAX
  min over a in Actions(s) of MINIMAX(RESULT(s, a))   if Player(s) = MIN

Optimal Decision Making in Multiplayer Games

Let’s use these definitions to analyze the game tree shown in the figure above. The game’s UTILITY function assigns utility values to the terminal nodes on the bottom level. Because the first MIN node, B, has three successor states with values 3, 12, and 8, its minimax value is 3. The other two MIN nodes have minimax value 2. The root node is a MAX node; its successors have minimax values 3, 2, and 2, so its own minimax value is 3. We can also identify the minimax decision at the root: action a1 is the best option for MAX since it leads to the state with the highest minimax value.

This definition of optimal play for MAX assumes that MIN also plays optimally: it maximizes MAX’s worst-case outcome. What happens if MIN does not play optimally? Then it is easy to show that MAX can do even better. Other strategies may outperform the minimax strategy against suboptimal opponents, but they will necessarily do worse against optimal opponents.
Minimax Algorithm in Game Theory (Alpha-Beta Pruning)
Prerequisites: Minimax Algorithm in Game Theory, Evaluation Function in Game Theory
Alpha-Beta pruning is not actually a new algorithm, but rather an optimization technique for the minimax algorithm.
It reduces the computation time by a huge factor. This allows us to search much faster and even go into deeper
levels in the game tree. It cuts off branches in the game tree which need not be searched because there already
exists a better move available. It is called Alpha-Beta pruning because it passes 2 extra parameters in the minimax
function, namely alpha and beta.

Let’s define the parameters alpha and beta.

Alpha is the best value that the maximizer currently can guarantee at that level or above.
Beta is the best value that the minimizer currently can guarantee at that level or below.

Pseudocode:

function minimax(node, depth, isMaximizingPlayer, alpha, beta):

    if node is a leaf node:
        return value of the node

    if isMaximizingPlayer:
        bestVal = -INFINITY
        for each child of node:
            value = minimax(child, depth+1, false, alpha, beta)
            bestVal = max(bestVal, value)
            alpha = max(alpha, bestVal)
            if beta <= alpha:
                break
        return bestVal
    else:
        bestVal = +INFINITY
        for each child of node:
            value = minimax(child, depth+1, true, alpha, beta)
            bestVal = min(bestVal, value)
            beta = min(beta, bestVal)
            if beta <= alpha:
                break
        return bestVal

// Calling the function for the first time.
minimax(root, 0, true, -INFINITY, +INFINITY)

Let’s make the above algorithm clear with an example.

 The initial call starts from A. The value of alpha here is -INFINITY and the value of beta is +INFINITY. These values are passed down to subsequent nodes in the tree. At A the maximizer must choose the max of B and C, so A calls B first.

 At B, the minimizer must choose the min of D and E, and hence calls D first.

 At D, it looks at its left child, which is a leaf node. This node returns a value of 3. Now the value of alpha at D is max(-INF, 3), which is 3.

 To decide whether it's worth looking at its right node or not, it checks the condition beta <= alpha. This is false since beta = +INF and alpha = 3, so it continues the search.

 D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5), which is 5. Now the value of node D is 5.

 D returns a value of 5 to B. At B, beta = min(+INF, 5), which is 5. The minimizer is now guaranteed a value of 5 or less. B now calls E to see if it can get a lower value than 5.

 At E the values of alpha and beta are not -INF and +INF, but instead -INF and 5 respectively, because the value of beta was changed at B and that is what B passed down to E.

 Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which is 6. Here the condition becomes true: beta is 5 and alpha is 6, so beta <= alpha is true. Hence it breaks, and E returns 6 to B.

 Note how it did not matter what the value of E's right child is. It could have been +INF or -INF; it still wouldn't matter. We never even had to look at it, because the minimizer was guaranteed a value of 5 or less. So as soon as the maximizer saw the 6, it knew the minimizer would never come this way, because it can get a 5 on the left side of B. This way we didn't have to look at that 9, and hence saved computation time.

 E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B is also 5.

So far this is how our game tree looks. The 9 is crossed out because it was never computed.

 B returns 5 to A. At A, alpha = max(-INF, 5), which is 5. Now the maximizer is guaranteed a value of 5 or greater. A now calls C to see if it can get a higher value than 5.

 At C, alpha = 5 and beta = +INF. C calls F.

 At F, alpha = 5 and beta = +INF. F looks at its left child, which is a 1. alpha = max(5, 1), which is still 5.

 F looks at its right child, which is a 2. Hence the best value of this node is 2. Alpha still remains 5.

 F returns a value of 2 to C. At C, beta = min(+INF, 2). The condition beta <= alpha becomes true as beta = 2 and alpha = 5. So it breaks, and it does not even have to compute the entire sub-tree of G.

 The intuition behind this cut-off is that, at C, the minimizer was guaranteed a value of 2 or less. But the maximizer was already guaranteed a value of 5 if it chose B. So why would the maximizer ever choose C and get a value less than 2? Again you can see that it did not matter what those last 2 values were. We also saved a lot of computation by skipping a whole sub-tree.

 C now returns a value of 2 to A. Therefore the best value at A is max(5, 2), which is 5.

 Hence the optimal value that the maximizer can get is 5.

This is how our final game tree looks. As you can see, G has been crossed out as it was never computed.

 Python3

# Python3 program to demonstrate
# working of Alpha-Beta Pruning

# Initial values of Alpha and Beta
MAX, MIN = 1000, -1000

# Returns optimal value for current player
# (Initially called for root and maximizer)
def minimax(depth, nodeIndex, maximizingPlayer, values, alpha, beta):

    # Terminating condition, i.e. leaf node is reached
    if depth == 3:
        return values[nodeIndex]

    if maximizingPlayer:
        best = MIN

        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, nodeIndex * 2 + i,
                          False, values, alpha, beta)
            best = max(best, val)
            alpha = max(alpha, best)

            # Alpha Beta Pruning
            if beta <= alpha:
                break
        return best

    else:
        best = MAX

        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, nodeIndex * 2 + i,
                          True, values, alpha, beta)
            best = min(best, val)
            beta = min(beta, best)

            # Alpha Beta Pruning
            if beta <= alpha:
                break
        return best

# Driver Code
if __name__ == "__main__":
    values = [3, 5, 6, 9, 1, 2, 0, -1]
    print("The optimal value is :", minimax(0, 0, True, values, MIN, MAX))

# This code is contributed by Rituraj Jain

Output:

The optimal value is : 5

Stochastic Games in Artificial Intelligence


Many unforeseeable external occurrences can place us in unforeseen circumstances in real life. Many games, such as dice tossing, have a random element to reflect this unpredictability. These are known as stochastic games. Backgammon is a classic game that mixes skill and luck. The legal moves are determined by rolling dice at the start of each player's turn. White, for example, has rolled a 6–5 and has four alternative moves in the backgammon scenario shown in the figure below.
This is a standard backgammon position. The object of the game is to get all of one’s pieces off the board as quickly
as possible. White moves in a clockwise direction toward 25, while Black moves in a counterclockwise direction
toward 0. A piece can advance to any position unless there are multiple opponent pieces there; if there is a single opponent piece, it is captured and must start over. White has rolled a 6–5 and must pick between four valid moves: (5–10,5–
11), (5–11,19–24), (5–10,10–16), and (5–11,11–16), where the notation (5–11,11–16) denotes moving one piece
from position 5 to 11 and then another from 11 to 16.

[Figure: stochastic game tree for a backgammon position]

White knows his or her own legal moves, but he or she has no idea how Black will roll, and thus has no idea what
Black’s legal moves will be. That means White won’t be able to build a normal game tree-like in chess or tic-tac-toe.
In backgammon, in addition to M A X and M I N nodes, a game tree must include chance nodes. The figure below
depicts chance nodes as circles. The possible dice rolls are indicated by the branches leading from each chance node;
each branch is labelled with the roll and its probability. There are 36 different ways to roll two dice, each equally
likely, yet there are only 21 distinct rolls because a 6–5 is the same as a 5–6. P (1–1) = 1/36 because each of the six
doubles (1–1 through 6–6) has a probability of 1/36. Each of the other 15 rolls has a 1/18 chance of happening.

The following phase is to learn how to make good decisions. Obviously, we want to choose the move that will put us
in the best position. Positions, on the other hand, do not have specific minimum and maximum values. Instead, we
can only compute a position’s anticipated value, which is the average of all potential outcomes of the chance nodes.

As a result, we can generalize the deterministic minimax value to an expected-minimax value for games with chance nodes. Terminal nodes and MAX and MIN nodes (for which the dice roll is known) work exactly as before. For chance nodes we compute the expected value, which is the sum over all outcomes, weighted by the probability of each chance action:

EXPECTIMINIMAX(s) =
  UTILITY(s)                                          if Terminal-Test(s) is true
  max over a of EXPECTIMINIMAX(RESULT(s, a))          if Player(s) = MAX
  min over a of EXPECTIMINIMAX(RESULT(s, a))          if Player(s) = MIN
  sum over r of P(r) * EXPECTIMINIMAX(RESULT(s, r))   if Player(s) = CHANCE

where r is a possible dice roll (or other random event) and RESULT(s, r) denotes the same state as s, with the addition that the result of the dice roll is r.
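
As a rough illustration of how the expected value is folded into minimax, here is a small sketch over a hand-built tree. The tree itself is a made-up example, not the backgammon position from the text: 'max' and 'min' nodes carry a list of children, chance nodes carry (probability, child) pairs, and leaves are plain numbers.

 Python3

def expectiminimax(node):
    if isinstance(node, (int, float)):    # terminal node: its utility
        return node
    kind, children = node
    if kind == 'max':
        return max(expectiminimax(c) for c in children)
    if kind == 'min':
        return min(expectiminimax(c) for c in children)
    # chance node: probability-weighted average over outcomes
    return sum(p * expectiminimax(c) for p, c in children)

tree = ('max', [
    ('chance', [(0.5, ('min', [3, 5])), (0.5, ('min', [1, 8]))]),  # expected 2.0
    ('chance', [(0.9, ('min', [4, 6])), (0.1, ('min', [0, 2]))]),  # expected 3.6
])
print(expectiminimax(tree))   # 3.6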

Constraint Satisfaction Problems (CSP) in Artificial Intelligence


Finding a solution that meets a set of constraints is the goal of constraint satisfaction problems (CSPs), a class of AI problems. The aim is to find values for a group of variables that satisfy a set of restrictions or rules. CSPs are frequently employed in AI for tasks including resource allocation, planning, scheduling, and decision-making.

There are mainly three basic components in the constraint satisfaction problem:

Variables: The things that need to be determined are variables. Variables in a CSP are the objects that must have values assigned to them in order to satisfy a particular set of constraints. Boolean, integer, and categorical variables are just a few examples of the various types of variables. Variables, for instance, could stand in for the puzzle cells that need to be filled with numbers in a sudoku puzzle.

Domains: The range of potential values that a variable can take is represented by its domain. Depending on the problem, a domain may be finite or infinite. For instance, in Sudoku, the set of numbers from 1 to 9 can serve as the domain of a variable representing a puzzle cell.

Constraints: The guidelines that control how variables relate to one another are known as constraints. Constraints in
a CSP define the ranges of possible values for variables. Unary constraints, binary constraints, and higher-order
constraints are only a few examples of the various sorts of constraints. For instance, in a sudoku problem, the
restrictions might be that each row, column, and 3×3 box can only have one instance of each number from 1 to 9.

Constraint Satisfaction Problems (CSP) representation:

 The finite set of variables V1, V2, …, Vn.

 A non-empty domain for every single variable: D1, D2, …, Dn.

 The finite set of constraints C1, C2, …, Cm,

 where each constraint Ci restricts the possible values for variables,


 e.g., V1 ≠ V2
 Each constraint Ci is a pair <scope, relation>
 Example: <(V1, V2), V1 not equal to V2>
 Scope = set of variables that participate in the constraint.
 Relation = list of valid variable-value combinations.
 The relation might be an explicit list of permitted combinations, or an abstract relation that supports membership testing and listing.
Constraint Satisfaction Problems (CSP) algorithms:

 The backtracking algorithm is a depth-first search algorithm that methodically investigates the search space
of potential solutions up until a solution is discovered that satisfies all the restrictions. The method begins by
choosing a variable and giving it a value before repeatedly attempting to give values to the other variables.
The method returns to the prior variable and tries a different value if at any time a variable cannot be given
a value that fulfills the requirements. Once all assignments have been tried or a solution that satisfies all
constraints has been discovered, the algorithm ends.

 The forward-checking algorithm is a variation of the backtracking algorithm that prunes the search space using a form of local consistency. For each unassigned variable, the method keeps a set of remaining values and applies local constraints to eliminate inconsistent values from these sets. After a variable is assigned a value, the algorithm examines the variable's neighbors to see whether any of their remaining values have become inconsistent and removes them from the sets if they have. The algorithm backtracks if, after forward checking, a variable has no values left (a short sketch follows this list).

 Algorithms for propagating constraints are a class that uses local consistency and inference to condense the
search space. These algorithms operate by propagating restrictions between variables and removing
inconsistent values from the variable domains using the information obtained.
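
As mentioned in the list above, here is a hedged sketch of forward checking on a tiny binary not-equal CSP. The map-colouring instance and variable names are illustrative only, not from the text.

 Python3

def solve_fc(variables, domains, neighbors, assignment=None):
    # domains: var -> set of remaining values; neighbors: var -> adjacent vars
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in sorted(domains[var]):
        # Forward checking: prune value from unassigned neighbours' domains
        pruned = [n for n in neighbors[var]
                  if n not in assignment and value in domains[n]]
        for n in pruned:
            domains[n].discard(value)
        if all(domains[n] for n in neighbors[var] if n not in assignment):
            assignment[var] = value
            result = solve_fc(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]
        for n in pruned:                  # undo the pruning on backtrack
            domains[n].add(value)
    return None

# Illustrative instance: colour a path A-B-C with two colours
variables = ['A', 'B', 'C']
domains = {v: {'red', 'green'} for v in variables}
neighbors = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
print(solve_fc(variables, domains, neighbors))
# e.g. {'A': 'green', 'B': 'red', 'C': 'green'}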

Implementation code for Constraint Satisfaction Problems (CSP):

Implement Constraint Satisfaction Problems algorithms with code

 Python3

class CSP:
    def __init__(self, variables, Domains, constraints):
        self.variables = variables
        self.domains = Domains
        self.constraints = constraints
        self.solution = None

    def solve(self):
        assignment = {}
        self.solution = self.backtrack(assignment)
        return self.solution

    def backtrack(self, assignment):
        # All variables assigned: a solution has been found
        if len(assignment) == len(self.variables):
            return assignment
        var = self.select_unassigned_variable(assignment)
        for value in self.order_domain_values(var, assignment):
            if self.is_consistent(var, value, assignment):
                assignment[var] = value
                result = self.backtrack(assignment)
                if result is not None:
                    return result
                # Dead end: undo the assignment and try the next value
                del assignment[var]
        return None

    def select_unassigned_variable(self, assignment):
        # Minimum-remaining-values heuristic: pick the variable
        # with the smallest domain
        unassigned_vars = [var for var in self.variables if var not in assignment]
        return min(unassigned_vars, key=lambda var: len(self.domains[var]))

    def order_domain_values(self, var, assignment):
        return self.domains[var]

    def is_consistent(self, var, value, assignment):
        # The value must differ from every already-assigned constrained variable
        for constraint_var in self.constraints[var]:
            if constraint_var in assignment and assignment[constraint_var] == value:
                return False
        return True
Define the Problem

Here we solve a Sudoku puzzle with the Constraint Satisfaction Problem algorithms.

 Python3

puzzle = [[5, 3, 0, 0, 7, 0, 0, 0, 0],
          [6, 0, 0, 1, 9, 5, 0, 0, 0],
          [0, 9, 8, 0, 0, 0, 0, 6, 0],
          [8, 0, 0, 0, 6, 0, 0, 0, 3],
          [4, 0, 0, 8, 0, 3, 0, 0, 1],
          [7, 0, 0, 0, 2, 0, 0, 0, 6],
          [0, 6, 0, 0, 0, 0, 2, 8, 0],
          [0, 0, 0, 4, 1, 9, 0, 0, 5],
          [0, 0, 0, 0, 8, 0, 0, 0, 0]]

def print_sudoku(puzzle):
    # Print the 9x9 grid with separators between the 3x3 boxes
    for i in range(9):
        if i % 3 == 0 and i != 0:
            print("- - - - - - - - - - - ")
        for j in range(9):
            if j % 3 == 0 and j != 0:
                print(" | ", end="")
            print(puzzle[i][j], end=" ")
        print()

print_sudoku(puzzle)

Output:

5 3 0  | 0 7 0  | 0 0 0
6 0 0  | 1 9 5  | 0 0 0
0 9 8  | 0 0 0  | 0 6 0
- - - - - - - - - - -
8 0 0  | 0 6 0  | 0 0 3
4 0 0  | 8 0 3  | 0 0 1
7 0 0  | 0 2 0  | 0 0 6
- - - - - - - - - - -
0 6 0  | 0 0 0  | 2 8 0
0 0 0  | 4 1 9  | 0 0 5
0 0 0  | 0 8 0  | 0 0 0

Define Variables for the Constraint Satisfaction Problem

 Python3

variables = [(i, j) for i in range(9) for j in range(9)]

print(variables)

Output:

[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8),

(1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (1, 7), (1, 8),

(2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6), (2, 7), (2, 8),

(3, 0), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (3, 6), (3, 7), (3, 8),

(4, 0), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (4, 6), (4, 7), (4, 8),

(5, 0), (5, 1), (5, 2), (5, 3), (5, 4), (5, 5), (5, 6), (5, 7), (5, 8),

(6, 0), (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6), (6, 7), (6, 8),

(7, 0), (7, 1), (7, 2), (7, 3), (7, 4), (7, 5), (7, 6), (7, 7), (7, 8),

(8, 0), (8, 1), (8, 2), (8, 3), (8, 4), (8, 5), (8, 6), (8, 7), (8, 8)]

Define the Domains for Constraint Satisfaction Problem

 Python3

Domains = {var: set(range(1, 10)) if puzzle[var[0]][var[1]] == 0
           else {puzzle[var[0]][var[1]]} for var in variables}

print(Domains)

Output:

{(0, 0): {5},

(0, 1): {3},

(0, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(0, 3): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(0, 4): {7},


(0, 5): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(0, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(0, 7): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(0, 8): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(1, 0): {6},

(1, 1): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(1, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(1, 3): {1},

(1, 4): {9},

(1, 5): {5},

(1, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(1, 7): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(1, 8): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(2, 0): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(2, 1): {9},

(2, 2): {8},

(2, 3): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(2, 4): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(2, 5): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(2, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(2, 7): {6},

(2, 8): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(3, 0): {8},

(3, 1): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(3, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(3, 3): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(3, 4): {6},

(3, 5): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(3, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(3, 7): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(3, 8): {3},

(4, 0): {4},

(4, 1): {1, 2, 3, 4, 5, 6, 7, 8, 9},


(4, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(4, 3): {8},

(4, 4): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(4, 5): {3},

(4, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(4, 7): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(4, 8): {1},

(5, 0): {7},

(5, 1): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(5, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(5, 3): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(5, 4): {2},

(5, 5): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(5, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(5, 7): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(5, 8): {6},

(6, 0): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(6, 1): {6},

(6, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(6, 3): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(6, 4): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(6, 5): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(6, 6): {2},

(6, 7): {8},

(6, 8): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(7, 0): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(7, 1): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(7, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(7, 3): {4},

(7, 4): {1},

(7, 5): {9},

(7, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(7, 7): {1, 2, 3, 4, 5, 6, 7, 8, 9},


(7, 8): {5},

(8, 0): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(8, 1): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(8, 2): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(8, 3): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(8, 4): {8},

(8, 5): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(8, 6): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(8, 7): {1, 2, 3, 4, 5, 6, 7, 8, 9},

(8, 8): {1, 2, 3, 4, 5, 6, 7, 8, 9}}

Define the Constraints for the Constraint Satisfaction Problem

 Python3

def add_constraint(var):
    # Every cell in the same row, the same column and the same
    # 3x3 sub-grid as var must take a different value
    constraints[var] = []
    for i in range(9):
        if i != var[0]:
            constraints[var].append((i, var[1]))
        if i != var[1]:
            constraints[var].append((var[0], i))
    sub_i, sub_j = var[0] // 3, var[1] // 3
    for i in range(sub_i * 3, (sub_i + 1) * 3):
        for j in range(sub_j * 3, (sub_j + 1) * 3):
            if (i, j) != var:
                constraints[var].append((i, j))

constraints = {}
for i in range(9):
    for j in range(9):
        add_constraint((i, j))

print(constraints)
Output:

{(0, 0): [(1, 0), (0, 1), (2, 0), (0, 2), (3, 0), (0, 3), (4, 0), (0, 4), (5, 0), (0, 5), (6, 0), (0, 6), (7, 0), (0, 7), (8, 0), (0, 8), (0, 1),
(0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)], (0, 1): [(0, 0), (1, 1), (2, 1), (0, 2), (3, 1), (0, 3), (4, 1), (0, 4), (5, 1), (0, 5),
(6, 1), (0, 6), (7, 1), (0, 7), (8, 1), (0, 8), (0, 0), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)], (0, 2): [(0, 0), (1, 2), (0, 1),
(2, 2), (3, 2), (0, 3), (4, 2), (0, 4), (5, 2), (0, 5), (6, 2), (0, 6), (7, 2), (0, 7), (8, 2), (0, 8), (0, 0), (0, 1), (1, 0), (1, 1), (1, 2), (2,
0), (2, 1), (2, 2)], (0, 3): [(0, 0), (1, 3), (0, 1), (2, 3), (0, 2), (3, 3), (4, 3), (0, 4), (5, 3), (0, 5), (6, 3), (0, 6), (7, 3), (0, 7), (8,
3), (0, 8), (0, 4), (0, 5), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)], (0, 4): [(0, 0), (1, 4), (0, 1), (2, 4), (0, 2), (3, 4), (0, 3), (4,
4), (5, 4), (0, 5), (6, 4), (0, 6), (7, 4), (0, 7), (8, 4), (0, 8), (0, 3), (0, 5), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)], (0, 5): [(0,
0), (1, 5), (0, 1), (2, 5), (0, 2), (3, 5), (0, 3), (4, 5), (0, 4), (5, 5), (6, 5), (0, 6), (7, 5), (0, 7), (8, 5), (0, 8), (0, 3), (0, 4), (1, 3),
(1, 4), (1, 5), (2, 3), (2, 4), (2, 5)], (0, 6): [(0, 0), (1, 6), (0, 1), (2, 6), (0, 2), (3, 6), (0, 3), (4, 6), (0, 4), (5, 6), (0, 5), (6, 6),
(7, 6), (0, 7), (8, 6), (0, 8), (0, 7), (0, 8), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7), (2, 8)], (0, 7): [(0, 0), (1, 7), (0, 1), (2, 7), (0, 2),
(3, 7), (0, 3), (4, 7), (0, 4), (5, 7), (0, 5), (6, 7), (0, 6), (7, 7), (8, 7), (0, 8), (0, 6), (0, 8), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7), (2,
8)], (0, 8): [(0, 0), (1, 8), (0, 1), (2, 8), (0, 2), (3, 8), (0, 3), (4, 8), (0, 4), (5, 8), (0, 5), (6, 8), (0, 6), (7, 8), (0, 7), (8, 8), (0,
6), (0, 7), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7), (2, 8)],

(1, 0): [(0, 0), (1, 1), (2, 0), (1, 2), (3, 0), (1, 3), (4, 0), (1, 4), (5, 0), (1, 5), (6, 0), (1, 6), (7, 0), (1, 7), (8, 0), (1, 8), (0, 0),
(0, 1), (0, 2), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)], (1, 1): [(0, 1), (1, 0), (2, 1), (1, 2), (3, 1), (1, 3), (4, 1), (1, 4), (5, 1), (1, 5),
(6, 1), (1, 6), (7, 1), (1, 7), (8, 1), (1, 8), (0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)], (1, 2): [(0, 2), (1, 0), (1, 1),
(2, 2), (3, 2), (1, 3), (4, 2), (1, 4), (5, 2), (1, 5), (6, 2), (1, 6), (7, 2), (1, 7), (8, 2), (1, 8), (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2,
0), (2, 1), (2, 2)], (1, 3): [(0, 3), (1, 0), (1, 1), (2, 3), (1, 2), (3, 3), (4, 3), (1, 4), (5, 3), (1, 5), (6, 3), (1, 6), (7, 3), (1, 7), (8,
3), (1, 8), (0, 3), (0, 4), (0, 5), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)], (1, 4): [(0, 4), (1, 0), (1, 1), (2, 4), (1, 2), (3, 4), (1, 3), (4,
4), (5, 4), (1, 5), (6, 4), (1, 6), (7, 4), (1, 7), (8, 4), (1, 8), (0, 3), (0, 4), (0, 5), (1, 3), (1, 5), (2, 3), (2, 4), (2, 5)], (1, 5): [(0,
5), (1, 0), (1, 1), (2, 5), (1, 2), (3, 5), (1, 3), (4, 5), (1, 4), (5, 5), (6, 5), (1, 6), (7, 5), (1, 7), (8, 5), (1, 8), (0, 3), (0, 4), (0, 5),
(1, 3), (1, 4), (2, 3), (2, 4), (2, 5)], (1, 6): [(0, 6), (1, 0), (1, 1), (2, 6), (1, 2), (3, 6), (1, 3), (4, 6), (1, 4), (5, 6), (1, 5), (6, 6),
(7, 6), (1, 7), (8, 6), (1, 8), (0, 6), (0, 7), (0, 8), (1, 7), (1, 8), (2, 6), (2, 7), (2, 8)], (1, 7): [(0, 7), (1, 0), (1, 1), (2, 7), (1, 2),
(3, 7), (1, 3), (4, 7), (1, 4), (5, 7), (1, 5), (6, 7), (1, 6), (7, 7), (8, 7), (1, 8), (0, 6), (0, 7), (0, 8), (1, 6), (1, 8), (2, 6), (2, 7), (2,
8)], (1, 8): [(0, 8), (1, 0), (1, 1), (2, 8), (1, 2), (3, 8), (1, 3), (4, 8), (1, 4), (5, 8), (1, 5), (6, 8), (1, 6), (7, 8), (1, 7), (8, 8), (0,
6), (0, 7), (0, 8), (1, 6), (1, 7), (2, 6), (2, 7), (2, 8)],

(2, 0): [(0, 0), (1, 0), (2, 1), (2, 2), (3, 0), (2, 3), (4, 0), (2, 4), (5, 0), (2, 5), (6, 0), (2, 6), (7, 0), (2, 7), (8, 0), (2, 8), (0, 0),
(0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 1), (2, 2)], (2, 1): [(0, 1), (2, 0), (1, 1), (2, 2), (3, 1), (2, 3), (4, 1), (2, 4), (5, 1), (2, 5),
(6, 1), (2, 6), (7, 1), (2, 7), (8, 1), (2, 8), (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 2)], (2, 2): [(0, 2), (2, 0), (1, 2),
(2, 1), (3, 2), (2, 3), (4, 2), (2, 4), (5, 2), (2, 5), (6, 2), (2, 6), (7, 2), (2, 7), (8, 2), (2, 8), (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1,
2), (2, 0), (2, 1)], (2, 3): [(0, 3), (2, 0), (1, 3), (2, 1), (2, 2), (3, 3), (4, 3), (2, 4), (5, 3), (2, 5), (6, 3), (2, 6), (7, 3), (2, 7), (8,
3), (2, 8), (0, 3), (0, 4), (0, 5), (1, 3), (1, 4), (1, 5), (2, 4), (2, 5)], (2, 4): [(0, 4), (2, 0), (1, 4), (2, 1), (2, 2), (3, 4), (2, 3), (4,
4), (5, 4), (2, 5), (6, 4), (2, 6), (7, 4), (2, 7), (8, 4), (2, 8), (0, 3), (0, 4), (0, 5), (1, 3), (1, 4), (1, 5), (2, 3), (2, 5)], (2, 5): [(0,
5), (2, 0), (1, 5), (2, 1), (2, 2), (3, 5), (2, 3), (4, 5), (2, 4), (5, 5), (6, 5), (2, 6), (7, 5), (2, 7), (8, 5), (2, 8), (0, 3), (0, 4), (0, 5),
(1, 3), (1, 4), (1, 5), (2, 3), (2, 4)], (2, 6): [(0, 6), (2, 0), (1, 6), (2, 1), (2, 2), (3, 6), (2, 3), (4, 6), (2, 4), (5, 6), (2, 5), (6, 6),
(7, 6), (2, 7), (8, 6), (2, 8), (0, 6), (0, 7), (0, 8), (1, 6), (1, 7), (1, 8), (2, 7), (2, 8)], (2, 7): [(0, 7), (2, 0), (1, 7), (2, 1), (2, 2),
(3, 7), (2, 3), (4, 7), (2, 4), (5, 7), (2, 5), (6, 7), (2, 6), (7, 7), (8, 7), (2, 8), (0, 6), (0, 7), (0, 8), (1, 6), (1, 7), (1, 8), (2, 6), (2,
8)], (2, 8): [(0, 8), (2, 0), (1, 8), (2, 1), (2, 2), (3, 8), (2, 3), (4, 8), (2, 4), (5, 8), (2, 5), (6, 8), (2, 6), (7, 8), (2, 7), (8, 8), (0,
6), (0, 7), (0, 8), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7)],

(3, 0): [(0, 0), (1, 0), (3, 1), (2, 0), (3, 2), (3, 3), (4, 0), (3, 4), (5, 0), (3, 5), (6, 0), (3, 6), (7, 0), (3, 7), (8, 0), (3, 8), (3, 1),
(3, 2), (4, 0), (4, 1), (4, 2), (5, 0), (5, 1), (5, 2)], (3, 1): [(0, 1), (3, 0), (1, 1), (2, 1), (3, 2), (3, 3), (4, 1), (3, 4), (5, 1), (3, 5),
(6, 1), (3, 6), (7, 1), (3, 7), (8, 1), (3, 8), (3, 0), (3, 2), (4, 0), (4, 1), (4, 2), (5, 0), (5, 1), (5, 2)], (3, 2): [(0, 2), (3, 0), (1, 2),
(3, 1), (2, 2), (3, 3), (4, 2), (3, 4), (5, 2), (3, 5), (6, 2), (3, 6), (7, 2), (3, 7), (8, 2), (3, 8), (3, 0), (3, 1), (4, 0), (4, 1), (4, 2), (5,
0), (5, 1), (5, 2)], (3, 3): [(0, 3), (3, 0), (1, 3), (3, 1), (2, 3), (3, 2), (4, 3), (3, 4), (5, 3), (3, 5), (6, 3), (3, 6), (7, 3), (3, 7), (8,
3), (3, 8), (3, 4), (3, 5), (4, 3), (4, 4), (4, 5), (5, 3), (5, 4), (5, 5)], (3, 4): [(0, 4), (3, 0), (1, 4), (3, 1), (2, 4), (3, 2), (3, 3), (4,
4), (5, 4), (3, 5), (6, 4), (3, 6), (7, 4), (3, 7), (8, 4), (3, 8), (3, 3), (3, 5), (4, 3), (4, 4), (4, 5), (5, 3), (5, 4), (5, 5)], (3, 5): [(0,
5), (3, 0), (1, 5), (3, 1), (2, 5), (3, 2), (3, 3), (4, 5), (3, 4), (5, 5), (6, 5), (3, 6), (7, 5), (3, 7), (8, 5), (3, 8), (3, 3), (3, 4), (4, 3),
(4, 4), (4, 5), (5, 3), (5, 4), (5, 5)], (3, 6): [(0, 6), (3, 0), (1, 6), (3, 1), (2, 6), (3, 2), (3, 3), (4, 6), (3, 4), (5, 6), (3, 5), (6, 6),
(7, 6), (3, 7), (8, 6), (3, 8), (3, 7), (3, 8), (4, 6), (4, 7), (4, 8), (5, 6), (5, 7), (5, 8)], (3, 7): [(0, 7), (3, 0), (1, 7), (3, 1), (2, 7),
(3, 2), (3, 3), (4, 7), (3, 4), (5, 7), (3, 5), (6, 7), (3, 6), (7, 7), (8, 7), (3, 8), (3, 6), (3, 8), (4, 6), (4, 7), (4, 8), (5, 6), (5, 7), (5,
8)], (3, 8): [(0, 8), (3, 0), (1, 8), (3, 1), (2, 8), (3, 2), (3, 3), (4, 8), (3, 4), (5, 8), (3, 5), (6, 8), (3, 6), (7, 8), (3, 7), (8, 8), (3,
6), (3, 7), (4, 6), (4, 7), (4, 8), (5, 6), (5, 7), (5, 8)],

(4, 0): [(0, 0), (1, 0), (4, 1), (2, 0), (4, 2), (3, 0), (4, 3), (4, 4), (5, 0), (4, 5), (6, 0), (4, 6), (7, 0), (4, 7), (8, 0), (4, 8), (3, 0),
(3, 1), (3, 2), (4, 1), (4, 2), (5, 0), (5, 1), (5, 2)], (4, 1): [(0, 1), (4, 0), (1, 1), (2, 1), (4, 2), (3, 1), (4, 3), (4, 4), (5, 1), (4, 5),
(6, 1), (4, 6), (7, 1), (4, 7), (8, 1), (4, 8), (3, 0), (3, 1), (3, 2), (4, 0), (4, 2), (5, 0), (5, 1), (5, 2)], (4, 2): [(0, 2), (4, 0), (1, 2),
(4, 1), (2, 2), (3, 2), (4, 3), (4, 4), (5, 2), (4, 5), (6, 2), (4, 6), (7, 2), (4, 7), (8, 2), (4, 8), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (5,
0), (5, 1), (5, 2)], (4, 3): [(0, 3), (4, 0), (1, 3), (4, 1), (2, 3), (4, 2), (3, 3), (4, 4), (5, 3), (4, 5), (6, 3), (4, 6), (7, 3), (4, 7), (8,
3), (4, 8), (3, 3), (3, 4), (3, 5), (4, 4), (4, 5), (5, 3), (5, 4), (5, 5)], (4, 4): [(0, 4), (4, 0), (1, 4), (4, 1), (2, 4), (4, 2), (3, 4), (4,
3), (5, 4), (4, 5), (6, 4), (4, 6), (7, 4), (4, 7), (8, 4), (4, 8), (3, 3), (3, 4), (3, 5), (4, 3), (4, 5), (5, 3), (5, 4), (5, 5)], (4, 5): [(0,
5), (4, 0), (1, 5), (4, 1), (2, 5), (4, 2), (3, 5), (4, 3), (4, 4), (5, 5), (6, 5), (4, 6), (7, 5), (4, 7), (8, 5), (4, 8), (3, 3), (3, 4), (3, 5),
(4, 3), (4, 4), (5, 3), (5, 4), (5, 5)], (4, 6): [(0, 6), (4, 0), (1, 6), (4, 1), (2, 6), (4, 2), (3, 6), (4, 3), (4, 4), (5, 6), (4, 5), (6, 6),
(7, 6), (4, 7), (8, 6), (4, 8), (3, 6), (3, 7), (3, 8), (4, 7), (4, 8), (5, 6), (5, 7), (5, 8)], (4, 7): [(0, 7), (4, 0), (1, 7), (4, 1), (2, 7),
(4, 2), (3, 7), (4, 3), (4, 4), (5, 7), (4, 5), (6, 7), (4, 6), (7, 7), (8, 7), (4, 8), (3, 6), (3, 7), (3, 8), (4, 6), (4, 8), (5, 6), (5, 7), (5,
8)], (4, 8): [(0, 8), (4, 0), (1, 8), (4, 1), (2, 8), (4, 2), (3, 8), (4, 3), (4, 4), (5, 8), (4, 5), (6, 8), (4, 6), (7, 8), (4, 7), (8, 8), (3,
6), (3, 7), (3, 8), (4, 6), (4, 7), (5, 6), (5, 7), (5, 8)],

(5, 0): [(0, 0), (1, 0), (5, 1), (2, 0), (5, 2), (3, 0), (5, 3), (4, 0), (5, 4), (5, 5), (6, 0), (5, 6), (7, 0), (5, 7), (8, 0), (5, 8), (3, 0),
(3, 1), (3, 2), (4, 0), (4, 1), (4, 2), (5, 1), (5, 2)], (5, 1): [(0, 1), (5, 0), (1, 1), (2, 1), (5, 2), (3, 1), (5, 3), (4, 1), (5, 4), (5, 5),
(6, 1), (5, 6), (7, 1), (5, 7), (8, 1), (5, 8), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (4, 2), (5, 0), (5, 2)], (5, 2): [(0, 2), (5, 0), (1, 2),
(5, 1), (2, 2), (3, 2), (5, 3), (4, 2), (5, 4), (5, 5), (6, 2), (5, 6), (7, 2), (5, 7), (8, 2), (5, 8), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (4,
2), (5, 0), (5, 1)], (5, 3): [(0, 3), (5, 0), (1, 3), (5, 1), (2, 3), (5, 2), (3, 3), (4, 3), (5, 4), (5, 5), (6, 3), (5, 6), (7, 3), (5, 7), (8,
3), (5, 8), (3, 3), (3, 4), (3, 5), (4, 3), (4, 4), (4, 5), (5, 4), (5, 5)], (5, 4): [(0, 4), (5, 0), (1, 4), (5, 1), (2, 4), (5, 2), (3, 4), (5,
3), (4, 4), (5, 5), (6, 4), (5, 6), (7, 4), (5, 7), (8, 4), (5, 8), (3, 3), (3, 4), (3, 5), (4, 3), (4, 4), (4, 5), (5, 3), (5, 5)], (5, 5): [(0,
5), (5, 0), (1, 5), (5, 1), (2, 5), (5, 2), (3, 5), (5, 3), (4, 5), (5, 4), (6, 5), (5, 6), (7, 5), (5, 7), (8, 5), (5, 8), (3, 3), (3, 4), (3, 5),
(4, 3), (4, 4), (4, 5), (5, 3), (5, 4)], (5, 6): [(0, 6), (5, 0), (1, 6), (5, 1), (2, 6), (5, 2), (3, 6), (5, 3), (4, 6), (5, 4), (5, 5), (6, 6),
(7, 6), (5, 7), (8, 6), (5, 8), (3, 6), (3, 7), (3, 8), (4, 6), (4, 7), (4, 8), (5, 7), (5, 8)], (5, 7): [(0, 7), (5, 0), (1, 7), (5, 1), (2, 7),
(5, 2), (3, 7), (5, 3), (4, 7), (5, 4), (5, 5), (6, 7), (5, 6), (7, 7), (8, 7), (5, 8), (3, 6), (3, 7), (3, 8), (4, 6), (4, 7), (4, 8), (5, 6), (5,
8)], (5, 8): [(0, 8), (5, 0), (1, 8), (5, 1), (2, 8), (5, 2), (3, 8), (5, 3), (4, 8), (5, 4), (5, 5), (6, 8), (5, 6), (7, 8), (5, 7), (8, 8), (3,
6), (3, 7), (3, 8), (4, 6), (4, 7), (4, 8), (5, 6), (5, 7)],

(6, 0): [(0, 0), (1, 0), (6, 1), (2, 0), (6, 2), (3, 0), (6, 3), (4, 0), (6, 4), (5, 0), (6, 5), (6, 6), (7, 0), (6, 7), (8, 0), (6, 8), (6, 1),
(6, 2), (7, 0), (7, 1), (7, 2), (8, 0), (8, 1), (8, 2)], (6, 1): [(0, 1), (6, 0), (1, 1), (2, 1), (6, 2), (3, 1), (6, 3), (4, 1), (6, 4), (5, 1),
(6, 5), (6, 6), (7, 1), (6, 7), (8, 1), (6, 8), (6, 0), (6, 2), (7, 0), (7, 1), (7, 2), (8, 0), (8, 1), (8, 2)], (6, 2): [(0, 2), (6, 0), (1, 2),
(6, 1), (2, 2), (3, 2), (6, 3), (4, 2), (6, 4), (5, 2), (6, 5), (6, 6), (7, 2), (6, 7), (8, 2), (6, 8), (6, 0), (6, 1), (7, 0), (7, 1), (7, 2), (8,
0), (8, 1), (8, 2)], (6, 3): [(0, 3), (6, 0), (1, 3), (6, 1), (2, 3), (6, 2), (3, 3), (4, 3), (6, 4), (5, 3), (6, 5), (6, 6), (7, 3), (6, 7), (8,
3), (6, 8), (6, 4), (6, 5), (7, 3), (7, 4), (7, 5), (8, 3), (8, 4), (8, 5)], (6, 4): [(0, 4), (6, 0), (1, 4), (6, 1), (2, 4), (6, 2), (3, 4), (6,
3), (4, 4), (5, 4), (6, 5), (6, 6), (7, 4), (6, 7), (8, 4), (6, 8), (6, 3), (6, 5), (7, 3), (7, 4), (7, 5), (8, 3), (8, 4), (8, 5)], (6, 5): [(0,
5), (6, 0), (1, 5), (6, 1), (2, 5), (6, 2), (3, 5), (6, 3), (4, 5), (6, 4), (5, 5), (6, 6), (7, 5), (6, 7), (8, 5), (6, 8), (6, 3), (6, 4), (7, 3),
(7, 4), (7, 5), (8, 3), (8, 4), (8, 5)], (6, 6): [(0, 6), (6, 0), (1, 6), (6, 1), (2, 6), (6, 2), (3, 6), (6, 3), (4, 6), (6, 4), (5, 6), (6, 5),
(7, 6), (6, 7), (8, 6), (6, 8), (6, 7), (6, 8), (7, 6), (7, 7), (7, 8), (8, 6), (8, 7), (8, 8)], (6, 7): [(0, 7), (6, 0), (1, 7), (6, 1), (2, 7),
(6, 2), (3, 7), (6, 3), (4, 7), (6, 4), (5, 7), (6, 5), (6, 6), (7, 7), (8, 7), (6, 8), (6, 6), (6, 8), (7, 6), (7, 7), (7, 8), (8, 6), (8, 7), (8,
8)], (6, 8): [(0, 8), (6, 0), (1, 8), (6, 1), (2, 8), (6, 2), (3, 8), (6, 3), (4, 8), (6, 4), (5, 8), (6, 5), (6, 6), (7, 8), (6, 7), (8, 8), (6,
6), (6, 7), (7, 6), (7, 7), (7, 8), (8, 6), (8, 7), (8, 8)],

(7, 0): [(0, 0), (1, 0), (7, 1), (2, 0), (7, 2), (3, 0), (7, 3), (4, 0), (7, 4), (5, 0), (7, 5), (6, 0), (7, 6), (7, 7), (8, 0), (7, 8), (6, 0),
(6, 1), (6, 2), (7, 1), (7, 2), (8, 0), (8, 1), (8, 2)], (7, 1): [(0, 1), (7, 0), (1, 1), (2, 1), (7, 2), (3, 1), (7, 3), (4, 1), (7, 4), (5, 1),
(7, 5), (6, 1), (7, 6), (7, 7), (8, 1), (7, 8), (6, 0), (6, 1), (6, 2), (7, 0), (7, 2), (8, 0), (8, 1), (8, 2)], (7, 2): [(0, 2), (7, 0), (1, 2),
(7, 1), (2, 2), (3, 2), (7, 3), (4, 2), (7, 4), (5, 2), (7, 5), (6, 2), (7, 6), (7, 7), (8, 2), (7, 8), (6, 0), (6, 1), (6, 2), (7, 0), (7, 1), (8,
0), (8, 1), (8, 2)], (7, 3): [(0, 3), (7, 0), (1, 3), (7, 1), (2, 3), (7, 2), (3, 3), (4, 3), (7, 4), (5, 3), (7, 5), (6, 3), (7, 6), (7, 7), (8,
3), (7, 8), (6, 3), (6, 4), (6, 5), (7, 4), (7, 5), (8, 3), (8, 4), (8, 5)], (7, 4): [(0, 4), (7, 0), (1, 4), (7, 1), (2, 4), (7, 2), (3, 4), (7,
3), (4, 4), (5, 4), (7, 5), (6, 4), (7, 6), (7, 7), (8, 4), (7, 8), (6, 3), (6, 4), (6, 5), (7, 3), (7, 5), (8, 3), (8, 4), (8, 5)], (7, 5): [(0,
5), (7, 0), (1, 5), (7, 1), (2, 5), (7, 2), (3, 5), (7, 3), (4, 5), (7, 4), (5, 5), (6, 5), (7, 6), (7, 7), (8, 5), (7, 8), (6, 3), (6, 4), (6, 5),
(7, 3), (7, 4), (8, 3), (8, 4), (8, 5)], (7, 6): [(0, 6), (7, 0), (1, 6), (7, 1), (2, 6), (7, 2), (3, 6), (7, 3), (4, 6), (7, 4), (5, 6), (7, 5),
(6, 6), (7, 7), (8, 6), (7, 8), (6, 6), (6, 7), (6, 8), (7, 7), (7, 8), (8, 6), (8, 7), (8, 8)], (7, 7): [(0, 7), (7, 0), (1, 7), (7, 1), (2, 7),
(7, 2), (3, 7), (7, 3), (4, 7), (7, 4), (5, 7), (7, 5), (6, 7), (7, 6), (8, 7), (7, 8), (6, 6), (6, 7), (6, 8), (7, 6), (7, 8), (8, 6), (8, 7), (8,
8)], (7, 8): [(0, 8), (7, 0), (1, 8), (7, 1), (2, 8), (7, 2), (3, 8), (7, 3), (4, 8), (7, 4), (5, 8), (7, 5), (6, 8), (7, 6), (7, 7), (8, 8), (6,
6), (6, 7), (6, 8), (7, 6), (7, 7), (8, 6), (8, 7), (8, 8)],

(8, 0): [(0, 0), (1, 0), (8, 1), (2, 0), (8, 2), (3, 0), (8, 3), (4, 0), (8, 4), (5, 0), (8, 5), (6, 0), (8, 6), (7, 0), (8, 7), (8, 8), (6, 0),
(6, 1), (6, 2), (7, 0), (7, 1), (7, 2), (8, 1), (8, 2)], (8, 1): [(0, 1), (8, 0), (1, 1), (2, 1), (8, 2), (3, 1), (8, 3), (4, 1), (8, 4), (5, 1),
(8, 5), (6, 1), (8, 6), (7, 1), (8, 7), (8, 8), (6, 0), (6, 1), (6, 2), (7, 0), (7, 1), (7, 2), (8, 0), (8, 2)], (8, 2): [(0, 2), (8, 0), (1, 2),
(8, 1), (2, 2), (3, 2), (8, 3), (4, 2), (8, 4), (5, 2), (8, 5), (6, 2), (8, 6), (7, 2), (8, 7), (8, 8), (6, 0), (6, 1), (6, 2), (7, 0), (7, 1), (7,
2), (8, 0), (8, 1)], (8, 3): [(0, 3), (8, 0), (1, 3), (8, 1), (2, 3), (8, 2), (3, 3), (4, 3), (8, 4), (5, 3), (8, 5), (6, 3), (8, 6), (7, 3), (8,
7), (8, 8), (6, 3), (6, 4), (6, 5), (7, 3), (7, 4), (7, 5), (8, 4), (8, 5)], (8, 4): [(0, 4), (8, 0), (1, 4), (8, 1), (2, 4), (8, 2), (3, 4), (8,
3), (4, 4), (5, 4), (8, 5), (6, 4), (8, 6), (7, 4), (8, 7), (8, 8), (6, 3), (6, 4), (6, 5), (7, 3), (7, 4), (7, 5), (8, 3), (8, 5)], (8, 5): [(0,
5), (8, 0), (1, 5), (8, 1), (2, 5), (8, 2), (3, 5), (8, 3), (4, 5), (8, 4), (5, 5), (6, 5), (8, 6), (7, 5), (8, 7), (8, 8), (6, 3), (6, 4), (6, 5),
(7, 3), (7, 4), (7, 5), (8, 3), (8, 4)], (8, 6): [(0, 6), (8, 0), (1, 6), (8, 1), (2, 6), (8, 2), (3, 6), (8, 3), (4, 6), (8, 4), (5, 6), (8, 5),
(6, 6), (7, 6), (8, 7), (8, 8), (6, 6), (6, 7), (6, 8), (7, 6), (7, 7), (7, 8), (8, 7), (8, 8)], (8, 7): [(0, 7), (8, 0), (1, 7), (8, 1), (2, 7),
(8, 2), (3, 7), (8, 3), (4, 7), (8, 4), (5, 7), (8, 5), (6, 7), (8, 6), (7, 7), (8, 8), (6, 6), (6, 7), (6, 8), (7, 6), (7, 7), (7, 8), (8, 6), (8,
8)], (8, 8): [(0, 8), (8, 0), (1, 8), (8, 1), (2, 8), (8, 2), (3, 8), (8, 3), (4, 8), (8, 4), (5, 8), (8, 5), (6, 8), (8, 6), (7, 8), (8, 7), (6,
6), (6, 7), (6, 8), (7, 6), (7, 7), (7, 8), (8, 6), (8, 7)]}

Find the solution to the above Sudoku problem:

 Python3

csp = CSP(variables, Domains, constraints)
sol = csp.solve()

solution = [[0 for i in range(9)] for i in range(9)]
for i, j in sol:
    solution[i][j] = sol[i, j]

print_sudoku(solution)

Output:

534 |678 |192

672 |195 |348

198 |342 |567

-----------

859 |761 |423

426 |853 |971

713 |924 |856


-----------

961 |537 |284

287 |419 |635

345 |286 |719

Full Code :

 Python3

puzzle = [[5, 3, 0, 0, 7, 0, 0, 0, 0],
          [6, 0, 0, 1, 9, 5, 0, 0, 0],
          [0, 9, 8, 0, 0, 0, 0, 6, 0],
          [8, 0, 0, 0, 6, 0, 0, 0, 3],
          [4, 0, 0, 8, 0, 3, 0, 0, 1],
          [7, 0, 0, 0, 2, 0, 0, 0, 6],
          [0, 6, 0, 0, 0, 0, 2, 8, 0],
          [0, 0, 0, 4, 1, 9, 0, 0, 5],
          [0, 0, 0, 0, 8, 0, 0, 0, 0]]

def print_sudoku(puzzle):
    for i in range(9):
        if i % 3 == 0 and i != 0:
            print("- - - - - - - - - - - ")
        for j in range(9):
            if j % 3 == 0 and j != 0:
                print(" | ", end="")
            print(puzzle[i][j], end=" ")
        print()

print_sudoku(puzzle)

class CSP:
    def __init__(self, variables, Domains, constraints):
        self.variables = variables
        self.domains = Domains
        self.constraints = constraints
        self.solution = None

    def solve(self):
        assignment = {}
        self.solution = self.backtrack(assignment)
        return self.solution

    def backtrack(self, assignment):
        # all variables assigned: a complete, consistent solution
        if len(assignment) == len(self.variables):
            return assignment
        var = self.select_unassigned_variable(assignment)
        for value in self.order_domain_values(var, assignment):
            if self.is_consistent(var, value, assignment):
                assignment[var] = value
                result = self.backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]   # undo the assignment and try the next value
        return None

    def select_unassigned_variable(self, assignment):
        # minimum-remaining-values heuristic: pick the variable with the smallest domain
        unassigned_vars = [var for var in self.variables if var not in assignment]
        return min(unassigned_vars, key=lambda var: len(self.domains[var]))

    def order_domain_values(self, var, assignment):
        return self.domains[var]

    def is_consistent(self, var, value, assignment):
        # no constrained neighbour may already hold the same value
        for constraint_var in self.constraints[var]:
            if constraint_var in assignment and assignment[constraint_var] == value:
                return False
        return True

# Variables
variables = [(i, j) for i in range(9) for j in range(9)]

# Domains
Domains = {var: set(range(1, 10)) if puzzle[var[0]][var[1]] == 0
           else {puzzle[var[0]][var[1]]}
           for var in variables}

# Add constraint
def add_constraint(var):
    constraints[var] = []
    for i in range(9):
        if i != var[0]:
            constraints[var].append((i, var[1]))   # same column
        if i != var[1]:
            constraints[var].append((var[0], i))   # same row
    sub_i, sub_j = var[0] // 3, var[1] // 3
    for i in range(sub_i * 3, (sub_i + 1) * 3):
        for j in range(sub_j * 3, (sub_j + 1) * 3):
            if (i, j) != var:
                constraints[var].append((i, j))    # same 3x3 sub-grid

# Constraints
constraints = {}
for i in range(9):
    for j in range(9):
        add_constraint((i, j))

# Solution
print('*' * 7, 'Solution', '*' * 7)

csp = CSP(variables, Domains, constraints)
sol = csp.solve()

solution = [[0 for i in range(9)] for i in range(9)]
for i, j in sol:
    solution[i][j] = sol[i, j]

print_sudoku(solution)

Output:

530 |070 |000

600 |195 |000

098 |000 |060

-----------

800 |060 |003

400 |803 |001

700 |020 |006

-----------

060 |000 |280

000 |419 |005

000 |080 |000

******* Solution *******

534 |678 |192

672 |195 |348

198 |342 |567

-----------

859 |761 |423

426 |853 |971

713 |924 |856

-----------

961 |537 |284

287 |419 |635

345 |286 |719


Real-world Constraint Satisfaction Problems (CSP):

 Scheduling: A fundamental CSP problem is how to efficiently and effectively schedule resources like
personnel, equipment, and facilities. The constraints in this domain specify the availability and capacity of
each resource, whereas the variables indicate the time slots or resources.

 Vehicle routing: Another example of a CSP problem is the issue of minimizing travel time or distance by
optimizing a fleet of vehicles’ routes. In this domain, the constraints specify each vehicle’s capacity, delivery
locations, and time windows, while the variables indicate the routes taken by the vehicles.

 Assignment: Another typical CSP issue is how to optimally assign tasks or jobs to people or machines. In this field, the variables stand in for the tasks, while the constraints specify the knowledge, capacity, and workload of each person or machine.

 Sudoku: The well-known puzzle game Sudoku can be modeled as a CSP problem, where the variables stand
in for the grid’s cells and the constraints specify the game’s rules, such as prohibiting the repetition of the
same number in a row, column, or area.

 Constraint-based image segmentation: The segmentation of an image into areas with various qualities (such
as color, texture, or shape) can be treated as a CSP issue in computer vision, where the variables represent
the regions and the constraints specify how similar or unlike neighboring regions are to one another.

Constraint Satisfaction Problems (CSP) benefits:

 Conventional representation patterns

 Generic successor and goal functions

 Standard heuristics (no domain-specific expertise required)


UNIT 4: LOGICAL AGENTS
Knowledge based agents in AI

Humans achieve intelligence not through purely reflex mechanisms but through processes of reasoning that operate on internal representations of knowledge. In AI, these techniques for intelligence are embodied in knowledge-based agents.

Knowledge-Based System

 A knowledge-based system is a system that uses artificial intelligence techniques to store and reason with
knowledge. The knowledge is typically represented in the form of rules or facts, which can be used to draw
conclusions or make decisions.

 One of the key benefits of a knowledge-based system is that it can help to automate decision-making
processes. For example, a knowledge-based system could be used to diagnose a medical condition, by
reasoning over a set of rules that describe the symptoms and possible causes of the condition.

 Another benefit of knowledge-based systems is that they can be used to explain their decisions to humans.
This can be useful, for example, in a customer service setting, where a knowledge-based system can help a
human agent understand why a particular decision was made.

 Knowledge-based systems are a type of artificial intelligence and have been used in a variety of applications
including medical diagnosis, expert systems, and decision support systems.

Knowledge-Based System in Artificial Intelligence

 An intelligent agent needs knowledge about the real world to make decisions and reasoning to act
efficiently.

 Knowledge-based agents are those agents who have the capability of maintaining an internal state of
knowledge, reason over that knowledge, update their knowledge after observations and take action. These
agents can represent the world with some formal representation and act intelligently.

Why use a knowledge base?

 Inference over a knowledge base is required for updating an agent's knowledge, so that the agent can learn from experience and take action according to that knowledge.

 Inference means deriving new sentences from old. The inference-based system allows us to add a new
sentence to the knowledge base. A sentence is a proposition about the world. The inference system applies
logical rules to the KB to deduce new information.

 The inference system generates new facts so that an agent can update the KB. An inference system works
mainly in two rules which are given:

 Forward chaining

 Backward chaining

Various levels of knowledge-based agents

A knowledge-based agent can be viewed at different levels which are given below:

1. Knowledge level

The knowledge level is the first level of a knowledge-based agent; at this level, we specify what the agent knows and what the agent's goals are. With these specifications, we can fix its behavior. For example, suppose an automated taxi agent needs to go from station A to station B, and it knows the way from A to B; this belongs to the knowledge level.
2. Logical level

At this level, we consider how the knowledge is represented and stored: sentences are encoded into some logic. At the logical level, an encoding of knowledge into logical sentences occurs, and we can expect the automated taxi agent to reason its way to destination B.

3. Implementation level

This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions according to the logical and knowledge levels. At this level, the automated taxi agent actually executes its knowledge and logic so that it can reach the destination.

Knowledge-based agents have explicit representation of knowledge that can be reasoned. They maintain internal
state of knowledge, reason over it, update it and perform actions accordingly. These agents act intelligently
according to requirements.

Knowledge-based agents describe the current situation in the form of sentences. They have complete knowledge of the current situation of their mini-world and its surroundings. These agents manipulate knowledge to infer new things at the "knowledge level".

A knowledge-based system has the following features:

Knowledge base (KB): It is the key component of a knowledge-based agent. It deals with facts about the world and is a collection of sentences expressed in a knowledge representation language.

Inference Engine (IE): It is the engine of the knowledge-based system, used to infer new knowledge in the system.

Actions performed by an agent

The inference system is used when we want to add new information (sentences) to the knowledge-based system and to query the information already present. This mechanism works through the TELL and ASK operations, which involve inference, i.e., producing new sentences from old ones. When a question is ASKed of the KB, the answer should follow from what has previously been TOLD to the KB. The agent also has a KB, which initially contains some background knowledge. Whenever the agent program is called, it performs some actions.

Actions done by a KB agent:

1. It TELLS the knowledge base what it perceived from the environment.

2. It ASKS the knowledge base what action it should perform, and gets the answer from the knowledge base.

3. It TELLS the knowledge base which action was selected, and then the agent executes that action.

Algorithm :

function KB_AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time

    TELL(KB, MAKE_PERCEPT_SENTENCE(percept, t))
    action = ASK(KB, MAKE_ACTION_QUERY(t))
    TELL(KB, MAKE_ACTION_SENTENCE(action, t))
    t = t + 1
    return action
Given a percept, the agent adds it to the KB, then asks the KB for the best action, and then tells the KB that it has in fact taken that action.
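As an illustration only (not part of the original algorithm), the following minimal Python sketch shows this TELL/ASK control loop; the KnowledgeBase class and the sentence-making helpers are hypothetical placeholders rather than a real inference engine.

 Python3

# A minimal sketch of the KB-AGENT control loop.
# The KnowledgeBase class and its tell/ask methods are hypothetical
# placeholders: ask() performs no real inference here.
class KnowledgeBase:
    def __init__(self, background=()):
        self.sentences = list(background)   # initial background knowledge

    def tell(self, sentence):
        self.sentences.append(sentence)     # TELL: add a sentence to the KB

    def ask(self, query):
        return "NoOp"                       # ASK: placeholder inference step

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"BestAction?({t})"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"

kb = KnowledgeBase()
t = 0

def kb_agent(percept):
    global t
    kb.tell(make_percept_sentence(percept, t))   # report the percept
    action = kb.ask(make_action_query(t))        # ask for the best action
    kb.tell(make_action_sentence(action, t))     # record the chosen action
    t = t + 1
    return action

print(kb_agent("breeze"))   # -> NoOp (no real inference in this sketch)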

Knowledge Based Agents

The behavior of a knowledge-based system can be designed with the following approach:

Declarative approach: Beginning from an empty knowledge base, the agent can be TOLD sentences one after another until it has the knowledge of how to work with its environment. This is known as the declarative approach; the required information is stored in an initially empty knowledge base.

Propositional logic in Artificial intelligence

Propositional logic (PL) is the simplest form of logic where all the statements are made by propositions. A
proposition is a declarative statement which is either true or false. It is a technique of knowledge representation in
logical and mathematical form.

Example:

a) It is Sunday.

b) The Sun rises from the West. (False proposition)

c) 3 + 3 = 7 (False proposition)

d) 5 is a prime number.

Following are some basic facts about propositional logic:


o Propositional logic is also called Boolean logic as it works on 0 and 1.

o In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.

o A proposition can be either true or false, but not both.

o Propositional logic is built from propositions and logical connectives.

o These connectives are also called logical operators.

o The propositions and connectives are the basic elements of the propositional logic.

o Connectives can be said as a logical operator which connects two sentences.

o A proposition formula which is always true is called a tautology; it is also called a valid sentence.

o A proposition formula which is always false is called a contradiction.

o A proposition formula which can take both true and false values is called a contingency.

o Statements which are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", or "What is your name?", are not propositions.

Syntax of propositional logic:

The syntax of propositional logic defines the allowable sentences for the knowledge representation. There are two
types of Propositions:

a. Atomic Propositions

b. Compound propositions

o Atomic Proposition: Atomic propositions are the simple propositions. It consists of a single proposition
symbol. These are the sentences which must be either true or false.

Example:

a) 2 + 2 is 4; it is an atomic proposition as it is a true fact.

b) "The Sun is cold" is also a proposition, as it is a false fact.

o Compound proposition: Compound propositions are constructed by combining simpler or atomic


propositions, using parenthesis and logical connectives.

Example:

1. a) "It is raining today, and street is wet."

2. b) "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:

Logical connectives are used to connect two simpler propositions or representing a sentence logically. We can create
compound propositions with the help of logical connectives. There are mainly five connectives, which are given as
follows:

1. Negation: A sentence such as ¬ P is called negation of P. A literal can be either Positive literal or negative
literal.

2. Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.


Example: Rohan is intelligent and hardworking. With
P = Rohan is intelligent,
Q = Rohan is hardworking, it can be written as P ∧ Q.

3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are the propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.

4. Implication: A sentence such as P → Q, is called an implication. Implications are also known as if-then rules.
It can be represented as
If it is raining, then the street is wet.
Let P= It is raining, and Q= Street is wet, so it is represented as P → Q

5. Biconditional: A sentence such as P⇔ Q is a Biconditional sentence, example If I am breathing, then I am


alive
P= I am breathing, Q= I am alive, it can be represented as P ⇔ Q.
Following is the summarized table for Propositional Logic Connectives:

Connective           Symbol    Example
Negation             ¬         ¬P
Conjunction (AND)    ∧         P ∧ Q
Disjunction (OR)     ∨         P ∨ Q
Implication          →         P → Q
Biconditional        ⇔         P ⇔ Q

Truth Table:

In propositional logic, we need to know the truth values of propositions in all possible scenarios. We can combine all
the possible combination with logical connectives, and the representation of these combinations in a tabular format
is called Truth table. Following are the truth table for all logical connectives:
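These tables can also be generated mechanically; the short Python sketch below (an illustration added here, not part of the source text) enumerates every assignment of P and Q and prints the value of each connective:

 Python3

import itertools

# Print the truth table for the five propositional connectives.
header = ["P", "Q", "~P", "P ^ Q", "P v Q", "P -> Q", "P <-> Q"]
print(" | ".join(header))
for P, Q in itertools.product([True, False], repeat=2):
    values = [P, Q, not P, P and Q, P or Q, (not P) or Q, P == Q]
    print(" | ".join(("T" if v else "F").center(len(h))
                     for v, h in zip(values, header)))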

Truth table with three propositions:


We can build a proposition composed of three propositions P, Q, and R. Its truth table is made up of 8 rows, since with three proposition symbols there are 2³ = 8 combinations of truth values.

Precedence of connectives:

Just like arithmetic operators, there is a precedence order for propositional connectors or logical operators. This
order should be followed while evaluating a propositional problem. Following is the list of the precedence order for
operators:

Precedence          Operators

First precedence    Parenthesis
Second precedence   Negation
Third precedence    Conjunction (AND)
Fourth precedence   Disjunction (OR)
Fifth precedence    Implication
Sixth precedence    Biconditional

Note: For clarity, use parentheses to make the intended interpretation explicit. For example, ¬R ∨ Q is interpreted as (¬R) ∨ Q.

Logical equivalence:

Logical equivalence is one of the features of propositional logic. Two propositions are said to be logically equivalent
if and only if the columns in the truth table are identical to each other.

Let's take the two formulas ¬A ∨ B and A → B. In the truth table below, we can see that the columns for ¬A ∨ B and A → B are identical; hence the two are logically equivalent, which we write as (¬A ∨ B) ⇔ (A → B).
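This equivalence is easy to confirm mechanically. The sketch below (illustrative, not from the source) reads A → B straight off its truth table and compares it with ¬A ∨ B for every assignment:

 Python3

import itertools

# A -> B read directly off the implication truth table.
impl_table = {(True, True): True, (True, False): False,
              (False, True): True, (False, False): True}

# Compare the column of ~A v B with the column of A -> B.
equivalent = all(((not A) or B) == impl_table[(A, B)]
                 for A, B in itertools.product([True, False], repeat=2))
print(equivalent)   # True: ~A v B is logically equivalent to A -> B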
Properties of Operators:

o Commutativity:

o P∧ Q= Q ∧ P, or

o P ∨ Q = Q ∨ P.

o Associativity:

o (P ∧ Q) ∧ R= P ∧ (Q ∧ R),

o (P ∨ Q) ∨ R= P ∨ (Q ∨ R)

o Identity element:

o P ∧ True = P,

o P ∨ True= True.

o Distributive:

o P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).

o P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).

o DE Morgan's Law:

o ¬ (P ∧ Q) = (¬P) ∨ (¬Q)

o ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).

o Double-negation elimination:

o ¬ (¬P) = P.
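Each of these identities can be verified by exhaustive enumeration. As a quick illustrative check (not part of the source text), the sketch below tests De Morgan's laws and one distributive law over all eight assignments of P, Q, R:

 Python3

import itertools

# Verify De Morgan's laws and distributivity over all 2^3 assignments.
ok = True
for P, Q, R in itertools.product([True, False], repeat=3):
    ok &= (not (P and Q)) == ((not P) or (not Q))        # De Morgan's law 1
    ok &= (not (P or Q)) == ((not P) and (not Q))        # De Morgan's law 2
    ok &= (P and (Q or R)) == ((P and Q) or (P and R))   # distributivity
print(ok)   # True: the identities hold in every case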

Limitations of Propositional logic:

o We cannot represent relations like ALL, some, or none with propositional logic. Example:

a. All the girls are intelligent.

b. Some apples are sweet.

o Propositional logic has limited expressive power.

o In propositional logic, we cannot describe statements in terms of their properties or logical relationships.

Statement and proving of a propositional theorem

o To prove a propositional theorem, we typically use deductive reasoning based on logical rules and inference. Propositional logic deals with propositions, which are statements that can be either true or false. Here is a simple example of a propositional theorem and its proof:

Propositional Theorem:

Proposition: P∧(P→Q)→Q

This proposition states that if P is true and P implies Q, then Q must also be true.

Proof:

We can prove the validity of this proposition using truth tables or logical inference rules. Here, we'll use a truth table
to demonstrate the validity of the proposition:
P | Q | P→Q | P∧(P→Q) | (P∧(P→Q))→Q
T | T |  T  |    T    |      T
T | F |  F  |    F    |      T
F | T |  T  |    F    |      T
F | F |  T  |    F    |      T

From the truth table, we observe that the final column is always true, regardless of the truth values of P and Q.
Therefore, the proposition P∧(P→Q)→Q holds true in all cases, and the theorem is proven to be valid.

This simple proof demonstrates the logical inference that if P is true and P implies Q, then Q must also be true, as
asserted by the propositional theorem.

Propositional model checking.

Propositional model checking is a technique used in formal methods and computer science to verify the correctness
of systems or programs with respect to a given specification expressed in propositional logic. It involves
systematically exploring the state space of a system to determine if it satisfies or violates a given set of properties.

Here's how propositional model checking typically works:

1. Model Representation:

 The system or program to be verified is modeled as a finite-state transition system.

 States represent possible configurations of the system, and transitions represent the possible state changes
triggered by events or actions.

 Each state is associated with a set of propositional variables that encode the relevant properties of the
system at that state.

2. Specification Encoding:

 The desired properties or requirements of the system are encoded as formulas in propositional logic.

 These formulas describe the conditions that the system must satisfy or avoid during its execution.

 Common properties include safety properties (e.g., "the system never reaches an unsafe state") and liveness
properties (e.g., "the system eventually reaches a desired state").

3. Model Checking Algorithm:

 The model checking algorithm systematically explores the state space of the system to verify if it satisfies the
specified properties.

 It starts from an initial state and explores the transitions of the system, propagating the truth values of
propositional variables through the state space.

 At each step, the algorithm checks if the current state satisfies or violates the given specification.

 If a violation is found, the algorithm reports a counterexample demonstrating the failure of the property.

4. Tools and Techniques:

 Model checking tools implement various algorithms and optimizations to efficiently explore large state
spaces and verify complex systems.
 Symbolic model checking techniques use efficient data structures and algorithms to represent and
manipulate sets of states and transitions symbolically, rather than enumerating them explicitly.

 Bounded model checking restricts the exploration of the state space to a finite depth or bounded number of
transitions, making it applicable to systems with infinite state spaces.

5. Application Areas:

 Propositional model checking is widely used in the verification of hardware designs, concurrent systems,
distributed protocols, and software systems.

 It helps detect design errors, race conditions, deadlocks, and other subtle bugs early in the development
process.

 Model checking is particularly effective for verifying systems with complex behaviors and intricate
interactions between components.

In summary, propositional model checking provides a rigorous and systematic approach to verifying the correctness
of systems by exhaustively exploring their state space and checking if they satisfy the specified properties expressed
in propositional logic. It is a powerful technique for ensuring the reliability, safety, and correctness of critical systems
in various domains.
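As a toy illustration of the explicit-state exploration described above (the three-state system here is invented for the example and deliberately contains a bug), the sketch below searches the reachable states and checks the safety property "the system never reaches an unsafe state":

 Python3

# Toy explicit-state model checking of a safety property.
# transitions: state -> list of successor states (a hypothetical system).
transitions = {
    "idle":    ["running"],
    "running": ["idle", "error"],   # the bug: running can reach error
    "error":   ["error"],
}
labels = {"idle": {"safe"}, "running": {"safe"}, "error": set()}

def check_safety(initial, prop="safe"):
    # Explore every reachable state; stop at the first violation.
    frontier, visited = [initial], set()
    while frontier:
        state = frontier.pop()
        if state in visited:
            continue
        visited.add(state)
        if prop not in labels[state]:
            return False, state         # counterexample state
        frontier.extend(transitions[state])
    return True, None

print(check_safety("idle"))   # (False, 'error'): the property is violated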

Agents based on propositional logic

Agents based on propositional logic are computational entities that use propositional logic as the basis for reasoning,
decision-making, and action. Propositional logic provides a formal framework for representing and reasoning about
knowledge, beliefs, goals, and actions in an agent's environment. Here's how agents based on propositional logic
work:

Components of Agents Based on Propositional Logic:

1. Knowledge Base (KB):

 The knowledge base represents the agent's current knowledge and beliefs about the world.

 It consists of a set of propositional formulas representing facts, rules, constraints, and assumptions.

 Propositional symbols represent atomic statements or predicates about the state of the world.

2. Inference Engine:

 The inference engine performs logical reasoning operations on the knowledge base to derive new
conclusions or make decisions.

 It employs deductive reasoning rules such as modus ponens, resolution, and semantic entailment to
infer new facts or derive logical consequences from existing knowledge.

3. Belief Revision:

 Belief revision mechanisms allow agents to update their beliefs in response to new information,
observations, or changes in the environment.

 Agents may revise their beliefs by adding new facts, revising existing beliefs, or retracting
inconsistent information.

4. Goal Representation:

 Agents specify their goals, objectives, or desired outcomes using propositional logic formulas.

 Goals represent the agent's intentions or preferences regarding the states of the world that it aims
to achieve.
5. Action Selection:

 Agents select actions based on their current beliefs, goals, and reasoning about the environment.

 Action selection mechanisms evaluate the consequences of candidate actions and choose the most
appropriate action to achieve the agent's goals.

Applications of Agents Based on Propositional Logic:

1. Intelligent Agents:

 Intelligent agents use propositional logic to model knowledge representation, reasoning, and
decision-making in various domains such as robotics, automation, and intelligent systems.

2. Planning and Scheduling:

 Agents based on propositional logic can perform automated planning and scheduling tasks by
representing goals, actions, and constraints as logical formulas and using inference algorithms to
generate action sequences.

3. Expert Systems:

 Expert systems leverage propositional logic to encode domain-specific knowledge and rules, allowing
them to make inferences and provide expert advice or recommendations in fields such as medicine,
finance, and engineering.

4. Multi-Agent Systems:

 Multi-agent systems consist of multiple interacting agents that communicate, cooperate, or


compete to achieve common goals or individual objectives.

 Propositional logic provides a formal framework for representing communication protocols,


coordination mechanisms, and negotiation strategies among agents.

In summary, agents based on propositional logic offer a formal and expressive approach to knowledge
representation, reasoning, and decision-making in artificial intelligence systems. They enable agents to model
complex environments, perform logical inference, and make informed decisions based on their beliefs, goals, and
actions.
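A minimal sketch of how such an agent's ASK operation can work is shown below, using truth-table entailment over a tiny propositional knowledge base; the symbols and sentences are invented for the illustration:

 Python3

import itertools

# A tiny propositional KB; each sentence is a function of a model
# (a dict mapping symbol -> truth value).
symbols = ["P", "Q", "R"]
kb = [
    lambda m: m["P"],                    # P
    lambda m: (not m["P"]) or m["Q"],    # P -> Q
    lambda m: (not m["Q"]) or m["R"],    # Q -> R
]

def entails(kb, query):
    # KB |= query iff the query holds in every model satisfying the KB.
    for values in itertools.product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(sentence(model) for sentence in kb) and not query(model):
            return False
    return True

print(entails(kb, lambda m: m["R"]))   # True: P, P->Q, Q->R entail R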

First-Order Logic in Artificial intelligence

In the topic of propositional logic, we have seen how to represent statements using propositional logic. But unfortunately, in propositional logic we can only represent facts which are either true or false. PL is not sufficient to represent complex sentences or natural-language statements; its expressive power is very limited. Consider the following sentences, which we cannot represent using PL:


o "Some humans are intelligent", or

o "Sachin likes cricket."

To represent the above statements, PL is not sufficient, so we require a more powerful logic, such as first-order logic.

First-Order logic:

o First-order logic is another way of knowledge representation in artificial intelligence. It is an extension to


propositional logic.

o FOL is sufficiently expressive to represent the natural language statements in a concise way.
o First-order logic is also known as predicate logic or first-order predicate logic. First-order logic is a powerful language that expresses information about objects in a natural way and can also express the relationships between those objects.

o First-order logic (like natural language) does not only assume that the world contains facts like propositional
logic but also assumes the following things in the world:

o Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......

o Relations: These can be unary relations such as red, round, is adjacent, or n-ary relations such as the sister of, brother of, has color, comes between.

o Function: Father of, best friend, third inning of, end of, ......

o As a natural language, first-order logic also has two main parts:

a. Syntax

b. Semantics

Syntax of First-Order logic:

The syntax of FOL determines which collection of symbols is a logical expression in first-order logic. The basic
syntactic elements of first-order logic are symbols. We write statements in short-hand notation in FOL.

Basic Elements of First-order logic:

Following are the basic elements of FOL syntax:

Constant 1, 2, A, John, Mumbai, cat,....

Variables x, y, z, a, b,....

Predicates Brother, Father, >,....

Function sqrt, LeftLegOf, ....

Connectives ∧, ∨, ¬, ⇒, ⇔

Equality ==

Quantifier ∀, ∃

Atomic sentences:

o Atomic sentences are the most basic sentences of first-order logic. These sentences are formed from a
predicate symbol followed by a parenthesis with a sequence of terms.

o We can represent atomic sentences as Predicate (term1, term2, ......, term n).

Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).


Chinky is a cat: => cat (Chinky).
Complex Sentences:

o Complex sentences are made by combining atomic sentences using connectives.

First-order logic statements can be divided into two parts:

o Subject: Subject is the main part of the statement.

o Predicate: A predicate can be defined as a relation, which binds two atoms together in a statement.

Consider the statement: "x is an integer.", it consists of two parts, the first part x is the subject of the statement and
second part "is an integer," is known as a predicate.

Quantifiers in First-order logic:

o A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.

o These are the symbols that permit to determine or identify the range and scope of the variable in the logical
expression. There are two types of quantifier:

a. Universal Quantifier, (for all, everyone, everything)

b. Existential quantifier, (for some, at least one).

Universal Quantifier:

Universal quantifier is a symbol of logical representation, which specifies that the statement within its range is true
for everything or every instance of a particular thing.

The Universal quantifier is represented by a symbol ∀, which resembles an inverted A.

Note: In universal quantifier we use implication "→".

If x is a variable, then ∀x is read as:

o For all x

o For each x

o For every x.

Example:

All men drink coffee.

Let a variable x refer to a man, so all x can be represented in the UOD as below:

∀x man(x) → drink(x, coffee).

It will be read as: For all x, if x is a man, then x drinks coffee.

Existential Quantifier:

Existential quantifiers are the type of quantifiers, which express that the statement within its scope is true for at
least one instance of something.

It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.

Note: In Existential quantifier we always use AND or Conjunction symbol (∧).

If x is a variable, then existential quantifier will be ∃x or ∃(x). And it will be read as:

o There exists a 'x.'

o For some 'x.'

o For at least one 'x.'

Example:

Some boys are intelligent.

∃x boys(x) ∧ intelligent(x)

It will be read as: There exists some x such that x is a boy and x is intelligent.

Points to remember:

o The main connective for universal quantifier ∀ is implication →.

o The main connective for existential quantifier ∃ is and ∧.

Properties of Quantifiers:

o In universal quantifier, ∀x∀y is similar to ∀y∀x.

o In Existential quantifier, ∃x∃y is similar to ∃y∃x.

o ∃x∀y is not similar to ∀y∃x.

Some Examples of FOL using quantifier:

1. All birds fly.
In this question the predicate is "fly(bird)."
Since all birds fly, it will be represented as follows:
∀x bird(x) → fly(x).


2. Every man respects his parent.
In this question, the predicate is "respects(x, y)," where x = man and y = parent.
Since this holds for every man, we will use ∀, and it will be represented as follows:
∀x man(x) → respects(x, parent).

3. Some boys play cricket.
In this question, the predicate is "play(x, y)," where x = boys and y = game. Since there are some boys, we will use ∃, and (following the rule that the existential quantifier pairs with ∧) it will be represented as:
∃x boys(x) ∧ play(x, cricket).
4. Not all students like both Mathematics and Science.
In this question, the predicate is "like(x, y)," where x= student, and y= subject.
Since there are not all students, so we will use ∀ with negation, so following representation for this:
¬∀ (x) [ student(x) → like(x, Mathematics) ∧ like(x, Science)].

5. Only one student failed in Mathematics.
In this question, the predicate is "failed(x, y)," where x = student and y = subject.
Since there is only one student who failed in Mathematics, we will use the following representation:
∃(x) [ student(x) ∧ failed(x, Mathematics) ∧ ∀(y) [¬(x == y) ∧ student(y) → ¬failed(y, Mathematics)] ].
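Over a finite universe of discourse, such quantified formulas can be evaluated directly. The sketch below (with an invented mini-domain) checks the universal "all men drink coffee" and existential "some boys are intelligent" representations, mirroring the ∀-with-→ and ∃-with-∧ rules above:

 Python3

# Evaluate quantified formulas over a small, invented universe of discourse.
domain = ["Rohan", "Ajay", "Chinky"]
man = {"Rohan", "Ajay"}
drinks_coffee = {"Rohan", "Ajay"}     # every man drinks coffee in this UOD
boy = {"Rohan", "Ajay"}
intelligent = {"Ajay"}

# For all x: man(x) -> drink(x, coffee)  (universal quantifier uses ->)
print(all((x not in man) or (x in drinks_coffee) for x in domain))  # True

# Exists x: boy(x) ^ intelligent(x)      (existential quantifier uses ^)
print(any((x in boy) and (x in intelligent) for x in domain))       # True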

Free and Bound Variables:

The quantifiers interact with variables which appear in a suitable way. There are two types of variables in First-order
logic which are given below:

Free Variable: A variable is said to be a free variable in a formula if it occurs outside the scope of the quantifier.

Example: ∀x ∃(y)[P (x, y, z)], where z is a free variable.

Bound Variable: A variable is said to be a bound variable in a formula if it occurs within the scope of the quantifier.

Example: ∀x ∀y [A(x) ∧ B(y)], here x and y are the bound variables.

What is knowledge representation?

Humans are best at understanding, reasoning, and interpreting knowledge. A human knows things (knowledge) and, based on that knowledge, performs various actions in the real world. How machines do all of these things falls under knowledge representation and reasoning. Hence, we can describe knowledge representation as follows:


o Knowledge representation and reasoning (KR, KRR) is the part of artificial intelligence concerned with how AI agents think and how thinking contributes to their intelligent behavior.

o It is responsible for representing information about the real world so that a computer can understand it and utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.

o It is also a way which describes how we can represent knowledge in artificial intelligence. Knowledge
representation is not just storing data into some database, but it also enables an intelligent machine to learn
from that knowledge and experiences so that it can behave intelligently like a human.

What to Represent:

Following are the kind of knowledge which needs to be represented in AI systems:

o Object: All the facts about objects in our world domain. E.g., guitars contain strings; trumpets are brass instruments.

o Events: Events are the actions which occur in our world.

o Performance: It describes behavior which involves knowledge about how to do things.

o Meta-knowledge: It is knowledge about what we know.

o Facts: Facts are the truths about the real world and what we represent.
o Knowledge-Base: The central component of a knowledge-based agent is the knowledge base, represented as KB. The knowledge base is a group of sentences (here, "sentence" is used as a technical term; it is not identical to a sentence in English).

Knowledge: Knowledge is awareness or familiarity gained by experiences of facts, data, and situations. Following are
the types of knowledge in artificial intelligence:

Types of knowledge

Following are the various types of knowledge:


1. Declarative Knowledge:

o Declarative knowledge is to know about something.

o It includes concepts, facts, and objects.

o It is also called descriptive knowledge and is expressed in declarative sentences.

o It is simpler than procedural knowledge.

2. Procedural Knowledge

o It is also known as imperative knowledge.

o Procedural knowledge is a type of knowledge which is responsible for knowing how to do something.

o It can be directly applied to any task.

o It includes rules, strategies, procedures, agendas, etc.

o Procedural knowledge depends on the task on which it can be applied.

3. Meta-knowledge:

o Knowledge about the other types of knowledge is called Meta-knowledge.


4. Heuristic knowledge:

o Heuristic knowledge represents the knowledge of experts in a field or subject.

o Heuristic knowledge consists of rules of thumb based on previous experiences and awareness of approaches, which are likely to work but are not guaranteed to.

5. Structural knowledge:

o Structural knowledge is basic knowledge used in problem-solving.

o It describes relationships between various concepts such as kind of, part of, and grouping of something.

o It describes the relationship that exists between concepts or objects.

The relation between knowledge and intelligence:

Knowledge of the real world plays a vital role in intelligence, and the same holds for creating artificial intelligence. Knowledge plays an important role in demonstrating intelligent behavior in AI agents. An agent can only act accurately on some input when it has knowledge or experience about that input.

Suppose you meet a person who is speaking a language you do not know; how will you be able to act on that? The same applies to the intelligent behavior of agents.

As we can see in the diagram below, there is one decision maker which acts by sensing the environment and using knowledge. If the knowledge part is not present, it cannot display intelligent behavior.

AI knowledge cycle:

An Artificial intelligence system has the following components for displaying intelligent behavior:

o Perception

o Learning

o Knowledge Representation and Reasoning

o Planning

o Execution

The above diagram shows how an AI system can interact with the real world and which components help it to show intelligence. The AI system has a perception component by which it retrieves information from its environment; this can be visual, audio, or another form of sensory input. The learning component is responsible for learning from the data captured by the perception component. The main components in the complete cycle are knowledge representation and reasoning; these two components are involved in showing human-like intelligence in machines. They are independent of each other but also coupled together. Planning and execution depend on the analysis of knowledge representation and reasoning.

Approaches to knowledge representation:

There are mainly four approaches to knowledge representation, which are given below:

1. Simple relational knowledge:

o It is the simplest way of storing facts using the relational method; each fact about a set of objects is set out systematically in columns.

o This approach of knowledge representation is famous in database systems where the relationship between
different entities is represented.

o This approach has little opportunity for inference.

Example: The following is the simple relational knowledge representation.

Player Weight Age

Player1 65 23

Player2 58 18

Player3 75 24

2. Inheritable knowledge:

o In the inheritable knowledge approach, all data must be stored into a hierarchy of classes.

o All classes should be arranged in a generalized form or a hierarchal manner.


o In this approach, we apply inheritance property.

o Elements inherit values from other members of a class.

o This approach contains inheritable knowledge which shows a relation between instance and class, and it is
called instance relation.

o Every individual frame can represent the collection of attributes and its value.

o In this approach, objects and values are represented in Boxed nodes.

o We use Arrows which point from objects to their values.

o Example:

3. Inferential knowledge:

o Inferential knowledge approach represents knowledge in the form of formal logics.

o This approach can be used to derive more facts.

o It guarantees correctness.

o Example: Let's suppose there are two statements:

a. Marcus is a man

b. All men are mortal


Then it can be represented as:

man(Marcus)
∀x man(x) → mortal(x)

4. Procedural knowledge:

o Procedural knowledge approach uses small programs and codes which describes how to do specific things,
and how to proceed.

o In this approach, one important rule is used which is If-Then rule.


o In this knowledge, we can use various coding languages such as LISP language and Prolog language.

o We can easily represent heuristic or domain-specific knowledge using this approach.

o However, not all cases can be represented in this approach.

Requirements for knowledge Representation system:

A good knowledge representation system must possess the following properties.

1. Representational Accuracy: the KR system should have the ability to represent all kinds of required knowledge.

2. Inferential Adequacy: the KR system should be able to manipulate the representational structures to produce new knowledge corresponding to existing structures.

3. Inferential Efficiency: the ability to direct the inference mechanism in the most productive directions by storing appropriate guides.

4. Acquisitional Efficiency: the ability to acquire new knowledge easily using automatic methods.

Inferences in first-order logic

In first-order logic, inferences refer to the process of deriving new sentences (conclusions) from existing sentences
(premises) based on the rules of inference. First-order logic allows for quantification over individual objects and
predicates, making it more expressive than propositional logic.

Here are some common types of inferences in first-order logic:

1. Modus Ponens:

 Modus Ponens is a valid form of inference that applies to conditional sentences.

 It states that if we have a conditional sentence of the form P→Q and we also have P as a premise, then we
can infer Q as a conclusion.

Example: - Premises: P→Q, P - Conclusion: Q

2. Modus Tollens:

 Modus Tollens is another valid form of inference that applies to conditional sentences.

 It states that if we have a conditional sentence of the form P→Q and we know that Q is false, then we can
infer that P must also be false.

Example: - Premises: P→Q, ¬Q - Conclusion: ¬P

3. Universal Instantiation:

 Universal instantiation allows us to instantiate universally quantified sentences.

 If we have a universally quantified sentence ∀x P(x), we can instantiate it with any specific individual, leading to a sentence P(a) where a is a specific object.

Example: - Premise: ∀x P(x) - Conclusion: P(a) (where a is a specific object)


4. Existential Generalization:

 Existential generalization allows us to introduce existential quantifiers.

 If we have a sentence P(a) where a is a specific object, we can generalize it to an existential quantification
∃xP(x).

Example: - Premise: P(a) - Conclusion: ∃x P(x)

5. Universal Generalization:

 Universal generalization allows us to introduce universal quantifiers.

 If we have a sentence P(a) where a is an arbitrary object, we can generalize it to a universally quantified
sentence ∀xP(x).

Example: - Premise: P(a) - Conclusion: ∀xP(x)

These are just a few examples of the types of inferences that can be made in first-order logic. In practice, various
rules of inference and techniques are used to reason about relationships and derive conclusions from premises in a
logical and systematic manner.

Forward Chaining and backward chaining in AI

In artificial intelligence, forward and backward chaining are important topics; but before understanding forward and backward chaining, let us first see where these two terms come from.

Inference engine:

The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to the knowledge base to infer new information from known facts. The first inference engines were part of expert systems. An inference engine commonly proceeds in two modes, which are:

a. Forward chaining

b. Backward chaining

Horn Clause and Definite clause:

Horn clauses and definite clauses are forms of sentences which enable a knowledge base to use a more restricted and efficient inference algorithm. Logical inference algorithms use forward and backward chaining approaches, which require the KB to be in the form of first-order definite clauses.

Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite
clause or strict horn clause.

Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as horn clause.
Hence all the definite clauses are horn clauses.

Example: (¬ p V ¬ q V k). It has only one positive literal k.

It is equivalent to p ∧ q → k.

A. Forward Chaining

Forward chaining is also known as forward deduction or the forward reasoning method when using an inference engine. Forward chaining is a form of reasoning which starts with atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of Forward-Chaining:


o It is a bottom-up approach, as it moves from the bottom (facts) to the top (goal).

o It is a process of making a conclusion based on known facts or data, by starting from the initial state and
reaches the goal state.

o The forward-chaining approach is also called data-driven, as we reach the goal using the available data.

o The forward-chaining approach is commonly used in expert systems such as CLIPS, and in business and production rule systems.

Consider the following famous example which we will use in both approaches:

Example:

"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America,
has some missiles, and all the missiles were sold to it by Robert, who is an American citizen."

Prove that "Robert is criminal."

To solve the above problem, first, we will convert all the above facts into first-order definite clauses, and then we
will use a forward-chaining algorithm to reach the goal.

Facts Conversion into FOL:

o It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)

o Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). It can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)

o All of the missiles were sold to country A by Robert:
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)

o Missiles are weapons.


Missile(p) → Weapons (p) .......(5)

o Enemy of America is known as hostile.


Enemy(p, America) →Hostile(p) ........(6)

o Country A is an enemy of America.


Enemy (A, America) .........(7)

o Robert is American
American(Robert). ..........(8)

Forward chaining proof:

Step-1:

In the first step we will start with the known facts and will choose the sentences which do not have implications,
such as: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts will be represented as
below.
Step-2:

At the second step, we will consider those facts which can be inferred from the available facts, i.e., rules whose premises are satisfied.

The premises of Rule-(1) are not yet satisfied, so it will not be added in the first iteration.

Rules (2) and (3) are already added.

Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the conjunction of Rules (2) and (3).

Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from Rule-(7).

Step-3:

At step-3, we can check that Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement.

Hence it is proved that Robert is a criminal using the forward-chaining approach.
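The same data-driven idea can be sketched in code. Below, the crime example is flattened into propositional symbols (an illustrative encoding, not the first-order algorithm itself): rules whose premises are all known fire repeatedly, adding their conclusions, until nothing changes.

 Python3

# Propositional forward chaining over definite clauses.
# Each rule is (set of premises, conclusion); the crime example is
# flattened into propositional symbols for illustration.
rules = [
    ({"Missile_T1"}, "Weapon_T1"),
    ({"Missile_T1", "Owns_A_T1"}, "Sells_Robert_T1_A"),
    ({"Enemy_A_America"}, "Hostile_A"),
    ({"American_Robert", "Weapon_T1", "Sells_Robert_T1_A", "Hostile_A"},
     "Criminal_Robert"),
]
facts = {"American_Robert", "Enemy_A_America", "Owns_A_T1", "Missile_T1"}

def forward_chain(rules, facts, goal):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # fire the rule (Modus Ponens) if all premises are known
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return goal in facts

print(forward_chain(rules, set(facts), "Criminal_Robert"))   # True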

B. Backward Chaining:

Backward-chaining is also known as a backward deduction or backward reasoning method when using an inference
engine. A backward chaining algorithm is a form of reasoning, which starts with the goal and works backward,
chaining through rules to find known facts that support the goal.

Properties of backward chaining:

o It is known as a top-down approach.

o Backward-chaining is based on modus ponens inference rule.


o In backward chaining, the goal is broken into sub-goals to prove the facts true.

o It is called a goal-driven approach, as a list of goals decides which rules are selected and used.

o The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines,
proof assistants, and various AI applications.

o The backward-chaining method mostly uses a depth-first search strategy for proof.

Example:

In backward chaining, we will use the same example as above and rewrite all the rules.

o American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)

o Owns(A, T1) ........(2)

o Missile(T1) .......(3)

o ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)

o Missile(p) → Weapon(p) .......(5)

o Enemy(p, America) → Hostile(p) ........(6)

o Enemy(A, America) .........(7)

o American(Robert). ..........(8)

Backward-Chaining proof:

In backward chaining, we will start with our goal predicate, which is Criminal(Robert), and then work backward through the rules.

Step-1:

At the first step, we take the goal fact, and from the goal fact we infer other facts, which we finally prove true. Our goal fact is "Robert is a criminal," so the predicate for it is Criminal(Robert).

Step-2:

At the second step, we will infer other facts from the goal fact which satisfy the rules. As we can see in Rule-(1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}. So we will add all the conjunctive facts below the first level and replace p with Robert.

Here we can see that American(Robert) is a fact, so it is proved here.



Step-3:

At step-3, we will extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule-(5). Weapon(q) is also true with the substitution of the constant T1 for q.

Step-4:

At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4) with the substitution of A in place of r. So these two statements are proved here.

Step-5:

At step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule-(6). And hence all the statements are proved true using backward chaining.

Difference between backward chaining and forward chaining

Following is the difference between the forward chaining and backward chaining:

o Forward chaining, as the name suggests, starts from the known facts and moves forward by applying inference rules to extract more data, continuing until it reaches the goal, whereas backward chaining starts from the goal and moves backward using inference rules to determine the facts that satisfy the goal.

o Forward chaining is called a data-driven inference technique, whereas backward chaining is called a goal-
driven inference technique.

o Forward chaining is known as the bottom-up approach, whereas backward chaining is known as a top-down approach.

o Forward chaining uses a breadth-first search strategy, whereas backward chaining uses a depth-first search strategy.

o Forward and backward chaining both apply the modus ponens inference rule.

o Forward chaining can be used for tasks such as planning, design process monitoring, diagnosis, and
classification, whereas backward chaining can be used for classification and diagnosis tasks.

o Forward chaining can be like an exhaustive search, whereas backward chaining tries to avoid the
unnecessary path of reasoning.

o In forward-chaining there can be various ASK questions from the knowledge base, whereas in backward
chaining there can be fewer ASK questions.

o Forward chaining is slow, as it checks all the rules, whereas backward chaining is fast, as it checks only the few required rules.
The same differences, summarized point by point:

1. Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the required facts that support the goal.

2. Forward chaining is a bottom-up approach. Backward chaining is a top-down approach.

3. Forward chaining is known as a data-driven inference technique, as we reach the goal using the available data. Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.

4. Forward chaining reasoning applies a breadth-first search strategy. Backward chaining reasoning applies a depth-first search strategy.

5. Forward chaining tests all the available rules. Backward chaining tests only the few required rules.

6. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. Backward chaining is suitable for diagnostic, prescription, and debugging applications.

7. Forward chaining can generate an infinite number of possible conclusions. Backward chaining generates a finite number of possible conclusions.

8. Forward chaining operates in the forward direction. Backward chaining operates in the backward direction.

9. Forward chaining is aimed at any conclusion that follows. Backward chaining is aimed only at the required data.
UNIT 5: KNOWLEDGE REPRESENTATION AND PLANNING.
Introduction to Ontologies

OWL is built on RDFS and helps us to define ontologies.


Ontologies are formal definitions of vocabularies that allow us to define difficult or complex structures and new
relationships between vocabulary terms and members of classes that we define. Ontologies generally describe
specific domains such as scientific research areas.

Example: an ontology depicting the Movie domain (diagram not reproduced here).

Components:

1. Individuals –
Individuals are also known as instances of objects or concepts. They may or may not be present in an ontology. Individuals represent the atomic level of an ontology.

For example, in the above ontology of movie, individuals can be a film (Titanic), a director (James Cameron), an actor
(Leonardo DiCaprio).

2. Classes –
Sets or collections of various objects are termed classes.

For example, in the above ontology representing movie, movie genre (e.g. Thriller, Drama), types of person (Actor or
Director) are classes.

3. Attributes –
Properties that objects may possess.

For example, a movie is described by the set of ‘parts’ it contains like Script, Director, Actors.

4. Relations –
Ways in which concepts are related to one another.

For example, in the movie ontology above, a movie has to have a script and actors in it.
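To make these components concrete, the sketch below encodes a fragment of the movie ontology as RDF triples, assuming the Python rdflib package (version 6+); the namespace URI and the property names (directedBy, hasActor, genre) are hypothetical:

```python
# A small RDFS sketch of the movie ontology: classes, individuals,
# an attribute (a literal), and relations, all expressed as triples.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/movie#")   # hypothetical namespace
g = Graph()

# Classes: Movie, Person, and the subclasses Actor and Director.
for cls in (EX.Movie, EX.Person, EX.Actor, EX.Director):
    g.add((cls, RDF.type, RDFS.Class))
g.add((EX.Actor, RDFS.subClassOf, EX.Person))
g.add((EX.Director, RDFS.subClassOf, EX.Person))

# Individuals: instances of those classes.
g.add((EX.Titanic, RDF.type, EX.Movie))
g.add((EX.JamesCameron, RDF.type, EX.Director))
g.add((EX.LeonardoDiCaprio, RDF.type, EX.Actor))

# Relations between individuals, plus an attribute with a literal value.
g.add((EX.Titanic, EX.directedBy, EX.JamesCameron))
g.add((EX.Titanic, EX.hasActor, EX.LeonardoDiCaprio))
g.add((EX.Titanic, EX.genre, Literal("Drama")))

print(g.serialize(format="turtle"))
```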
Different Ontology Languages:

 CycL – It was developed for the Cyc project and is based on First Order Predicate Calculus.

 Rule Interchange Format (RIF) – It is the language used for combining ontologies and rules.

 Open Biomedical Ontologies (OBO) – It is used for various biological and biomedical ontologies.

 Web Ontology Language (OWL) – It was developed for using ontologies over the World Wide Web (WWW).

Mental objects and modal logic

In modal logic, mental objects can be represented and reasoned about using modal operators to capture the
modalities of belief, knowledge, possibility, necessity, and others. Modal logic extends classical propositional logic by
introducing modal operators that express various kinds of modalities.

Here's how modal logic can be used to reason about mental objects:

1. Belief and Knowledge:

Modal logic can represent beliefs and knowledge by introducing modal operators such as B for belief and K for
knowledge. For example:

 Bp represents the belief that proposition p is true.

 Kp represents the knowledge that proposition p is true.

Agents reason about beliefs and knowledge using inference rules that capture the logical relationships between
propositions and the agent's beliefs or knowledge.

2. Possibility and Necessity:

Modal logic also deals with notions of possibility and necessity using modal operators such as ◊ for possibility and □ for necessity. For example:

 ◊p represents the possibility that proposition p is true.

 □p represents the necessity that proposition p is true.

These modal operators allow reasoning about the possible worlds in which propositions hold true and the necessary
conditions under which they hold.

3. Epistemic Logic:

Epistemic logic is a branch of modal logic that specifically deals with knowledge and belief. It provides formal
languages and semantics for reasoning about knowledge and belief in multi-agent systems. Epistemic logic allows
agents to reason about what they know, what others know, and what others believe about what they know.

4. Deontic Logic:

Deontic logic is another branch of modal logic that deals with normative concepts such as obligation, permission,
and prohibition. It allows agents to reason about what is obligatory, permissible, or forbidden given certain norms or
rules.

5. Temporal Logic:

Temporal logic is a modal logic that deals with time and temporal relationships. It allows agents to reason about
temporal properties such as past, present, and future, as well as temporal relations such as before, after, and during.
6. Intuitionistic Logic:

Intuitionistic logic is a non-classical logic, closely related to modal logic, that replaces classical truth with the notion of constructive truth. It allows agents to reason about constructive proofs and constructive truth values, as well as the absence of proofs for certain propositions.

In summary, modal logic provides a powerful formal framework for reasoning about mental objects and modalities
such as belief, knowledge, possibility, necessity, obligation, permission, and temporal relationships. It allows agents
to reason about the world, their beliefs, and the beliefs of others in a systematic and rigorous manner.

Reasoning in Artificial intelligence

In previous topics, we have learned various ways of knowledge representation in artificial intelligence. Now we will
learn the various ways to reason on this knowledge using different logical schemes.

Reasoning:

Reasoning is the mental process of deriving logical conclusions and making predictions from available knowledge, facts, and beliefs. Or we can say, "Reasoning is a way to infer facts from existing data." It is a general process of thinking rationally to find valid conclusions.

In artificial intelligence, reasoning is essential so that the machine can think rationally, as a human brain does, and perform like a human.

Types of Reasoning

In artificial intelligence, reasoning can be divided into the following categories:

o Deductive reasoning

o Inductive reasoning

o Abductive reasoning

o Common Sense Reasoning

o Monotonic Reasoning

o Non-monotonic Reasoning

Note: Inductive and deductive reasoning are general forms of logical inference.

1. Deductive reasoning:

Deductive reasoning is deducing new information from logically related known information. It is a form of valid reasoning, which means the argument's conclusion must be true when the premises are true.

Deductive reasoning in AI operates over formal rules and facts, such as those of propositional logic. It is sometimes referred to as top-down reasoning, in contrast to inductive reasoning.

In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.

Deductive reasoning mostly proceeds from general premises to a specific conclusion, as in the example below.

Example:

Premise-1: All humans eat veggies.

Premise-2: Suresh is human.

Conclusion: Suresh eats veggies.


The example above illustrates the general process of deductive reasoning.

2. Inductive Reasoning:

Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts by the process of generalization. It starts with a series of specific facts or data and reaches a general statement or conclusion.

Inductive reasoning is also known as cause-effect reasoning or bottom-up reasoning.

In inductive reasoning, we use historical data or various premises to generate a generic rule, for which premises
support the conclusion.

In inductive reasoning, premises provide probable supports to the conclusion, so the truth of premises does not
guarantee the truth of the conclusion.

Example:

Premise: All of the pigeons we have seen in the zoo are white.

Conclusion: Therefore, we can expect all the pigeons to be white.

3. Abductive reasoning:

Abductive reasoning is a form of logical reasoning which starts with one or more observations and then seeks the most likely explanation or conclusion for those observations.

Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning, the premises do not
guarantee the conclusion.


Example:

Implication: Cricket ground is wet if it is raining

Axiom: Cricket ground is wet.

Conclusion: It is raining.

4. Common Sense Reasoning

Common sense reasoning is an informal form of reasoning, which can be gained through experiences.

Common sense reasoning simulates the human ability to make presumptions about events which occur every day.
It relies on good judgment rather than exact logic and operates on heuristic knowledge and heuristic rules.

Example:

1. One person can be at only one place at a time.

2. If I put my hand in a fire, then it will burn.

The above two statements are the examples of common sense reasoning which a human mind can easily understand
and assume.

5. Monotonic Reasoning:

In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add other information to the existing information in our knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived.

To solve monotonic problems, we can derive the valid conclusion from the available facts only, and it will not be
affected by new facts.

Monotonic reasoning is not useful for real-time systems because, in real time, facts change, so we cannot use monotonic reasoning there.

Monotonic reasoning is used in conventional reasoning systems, and a logic-based system is monotonic.

Any theorem proving is an example of monotonic reasoning.

Example:

o Earth revolves around the Sun.

It is a true fact, and it cannot be changed even if we add another sentence to the knowledge base, such as "The moon revolves around the earth" or "The earth is not round."

Advantages of Monotonic Reasoning:

o In monotonic reasoning, each old proof will always remain valid.

o If we deduce some facts from the available facts, they will remain valid forever.

Disadvantages of Monotonic Reasoning:

o We cannot represent real-world scenarios using monotonic reasoning.

o Hypothetical knowledge cannot be expressed with monotonic reasoning, which means facts must be definitely true.

o Since we can only derive conclusions from old proofs, new knowledge from the real world cannot be added.

6. Non-monotonic Reasoning

In Non-monotonic reasoning, some conclusions may be invalidated if we add some more information to our
knowledge base.

Logic will be said as non-monotonic if some conclusions can be invalidated by adding more knowledge into our
knowledge base.

Non-monotonic reasoning deals with incomplete and uncertain models.

"Human perceptions for various things in daily life, "is a general example of non-monotonic reasoning.

Example: Suppose the knowledge base contains the following knowledge:
o Birds can fly

o Penguins cannot fly

o Pitty is a bird

So from the above sentences, we can conclude that Pitty can fly.

However, if we add another sentence to the knowledge base, "Pitty is a penguin", we conclude "Pitty cannot fly", which invalidates the above conclusion.

Advantages of Non-monotonic reasoning:

o For real-world systems such as robot navigation, we can use non-monotonic reasoning.

o In Non-monotonic reasoning, we can choose probabilistic facts or can make assumptions.

Disadvantages of Non-monotonic Reasoning:

o In non-monotonic reasoning, the old facts may be invalidated by adding new sentences.

o It cannot be used for theorem proving.

Classical planning

Classical planning is a branch of artificial intelligence (AI) that deals with the problem of generating a sequence of
actions to achieve a desired goal in a deterministic, fully observable environment. It involves the formalization of
planning problems, the representation of states and actions, and the development of algorithms to search for plans
that lead from an initial state to a goal state.

Here are the key components and concepts of classical planning:

Components of Classical Planning:

1. State Space: The state space represents all possible states of the environment. Each state describes the
configuration of the world at a particular point in time.

2. Actions: Actions are atomic operations that can be performed to change the state of the environment.
Actions have preconditions (conditions that must be true for the action to be applicable) and effects
(changes that occur in the state after the action is executed).

3. Initial State: The initial state represents the starting configuration of the environment.

4. Goal State: The goal state specifies the desired configuration that the planner aims to achieve.

Concepts in Classical Planning:

1. Planning Problem: A planning problem consists of an initial state, a set of actions, and a goal state. The goal
of the planner is to find a sequence of actions (a plan) that transforms the initial state into the goal state.

2. Plan: A plan is a sequence of actions that, when executed in the initial state, lead to the achievement of the
goal state.

3. Search Algorithms: Classical planning algorithms use search techniques to explore the space of possible
plans and find a solution that satisfies the goal. Common search algorithms include breadth-first search,
depth-first search, A* search, and heuristic search algorithms.

4. Heuristics: Heuristics are domain-specific knowledge or rules that guide the search process by providing
estimates of the distance from the current state to the goal state. Heuristics help focus the search on
promising areas of the state space, leading to more efficient planning algorithms.
5. Plan Representation: Plans can be represented using formal languages such as STRIPS (Stanford Research
Institute Problem Solver), PDDL (Planning Domain Definition Language), or other domain-specific languages.
These representations specify the sequence of actions and their parameters needed to achieve the goal.
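The concepts above can be made concrete with a minimal Python sketch of a STRIPS-style planner: states are sets of facts, actions carry preconditions plus add and delete lists, and breadth-first search returns a plan. The toy door-and-key domain and all names are illustrative assumptions:

```python
# A toy STRIPS-style planner using breadth-first search over fact sets.

from collections import deque

# Each action: (name, preconditions, add_list, delete_list)
ACTIONS = [
    ('pick(key)',    {'at(door)', 'key_on_floor'}, {'has(key)'},   {'key_on_floor'}),
    ('unlock(door)', {'at(door)', 'has(key)'},     {'door_open'},  set()),
    ('go(inside)',   {'door_open', 'at(door)'},    {'at(inside)'}, {'at(door)'}),
]

def successors(state):
    """Yield (action name, next state) for every applicable action."""
    for name, pre, add, dele in ACTIONS:
        if pre <= state:                              # preconditions hold
            yield name, frozenset((state - dele) | add)

def plan(initial, goal):
    """Breadth-first search from the initial state to any state containing the goal."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(plan({'at(door)', 'key_on_floor'}, {'at(inside)'}))
# ['pick(key)', 'unlock(door)', 'go(inside)']
```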

Applications of Classical Planning:

1. Robotics: Classical planning is used in robotics for task planning and motion planning. Robots can use
planning algorithms to generate sequences of actions to navigate through environments, manipulate
objects, and perform tasks autonomously.

2. Autonomous Agents: Autonomous agents in video games, virtual environments, and intelligent systems use
planning techniques to make decisions and achieve objectives in dynamic and uncertain environments.

3. Resource Allocation: Classical planning techniques are used in resource allocation problems, scheduling
tasks, and optimizing workflows in various domains such as manufacturing, logistics, and project
management.

4. Natural Language Understanding: Classical planning is also applied in natural language understanding
systems to generate coherent plans or responses based on user queries or commands.

Overall, classical planning provides a formal and systematic approach to solving complex problems by generating
sequences of actions to achieve desired goals in deterministic environments. It forms the basis for many applications
in artificial intelligence, robotics, and decision-making systems.

Algorithms for classical planning

Classical planning algorithms are designed to find a sequence of actions (a plan) that transforms an initial state of the
world into a desired goal state, given a set of actions and their effects. These algorithms explore the state space of
possible plans to find a solution that satisfies the goal. Here are some common algorithms used for classical
planning:

1. STRIPS Algorithm:

 STRIPS (Stanford Research Institute Problem Solver) is one of the earliest and most well-known algorithms
for classical planning.

 It uses a forward state-space search approach to explore the space of possible plans.

 STRIPS represents the state space using first-order logic and uses heuristic search algorithms such as A* to
find optimal plans efficiently.

2. Graph Plan:

 Graph Plan is a planning algorithm based on constructing a graph representation of the planning problem.

 It represents actions and their preconditions and effects as nodes and edges in a planning graph.

 Graph Plan uses graph-based algorithms to search for a valid plan by exploring the planning graph and
constructing a solution.

3. Planning Domain Definition Language (PDDL):

 PDDL is a declarative language for describing planning problems and domains.

 It provides a standardized format for specifying the initial state, actions, goals, and constraints of a planning
problem.

 Many planning algorithms use PDDL as input and output formats for planning problems, allowing
interoperability between different planners and tools.
4. Heuristic Search Algorithms:

 Many classical planning algorithms use heuristic search techniques to guide the search process towards
promising areas of the state space.

 A* search, in particular, is a popular heuristic search algorithm used in classical planning.

 A* uses a heuristic function to estimate the cost of reaching the goal from a given state and combines this
estimate with the actual cost of reaching that state to guide the search towards the most promising states
first.

5. Fast Downward:

 Fast Downward is a state-of-the-art classical planning system that combines several advanced techniques for
efficient planning.

 It uses various heuristic search algorithms, including pattern databases and symbolic search, to solve
planning problems quickly and effectively.

 Fast Downward is highly optimized and capable of handling large and complex planning domains efficiently.

6. Monte Carlo Tree Search (MCTS):

 Monte Carlo Tree Search is a search algorithm commonly used in planning and decision-making problems.

 MCTS is particularly effective in domains with large branching factors and uncertainty.

 It constructs a search tree by repeatedly simulating possible future states and actions and selecting the most
promising branches based on statistical sampling.

These are just a few examples of classical planning algorithms used to solve planning problems in various domains.
The choice of algorithm depends on factors such as the complexity of the planning problem, the size of the state
space, and the available computational resources.

Heuristics for planning

Heuristics play a crucial role in classical planning by guiding the search process towards promising areas of the state
space and helping to efficiently find solutions to planning problems. Here are some common heuristics used in
classical planning:

1. Relaxed Planning Graph Heuristic:

 The relaxed planning graph heuristic is based on the construction and analysis of a simplified version of the
planning graph.

 It relaxes the constraints of the original planning problem by removing delete effects from actions and
ignoring mutual exclusions between actions.

 The relaxed planning graph heuristic estimates the number of actions needed to achieve the goal in the
relaxed planning graph, providing an admissible heuristic for classical planning algorithms like A*.

2. Additive Heuristics:

 Additive heuristics compute estimates of the distance from the current state to the goal by summing the
estimated costs of achieving individual subgoals or substates.

 Additive heuristics are often derived from the decomposition of the planning problem into subproblems or
subgoals, allowing for more informed estimates of the remaining work to reach the goal.
3. Pattern Database (PDB) Heuristic:

 Pattern databases are precomputed databases of heuristic values for states or substates of the planning
problem.

 The pattern database heuristic provides an estimate of the cost to reach the goal state from a given state by
looking up heuristic values in the database.

 Pattern databases can be constructed using various pattern extraction techniques and can significantly
improve the efficiency of heuristic search algorithms.

4. Critical Path Heuristic:

 The critical path heuristic identifies the sequence of actions or states that are most critical for achieving the
goal.

 It focuses the search on the most essential parts of the state space that must be traversed to reach the goal,
potentially pruning unproductive branches of the search tree.

5. MaxSAT Heuristic:

 MaxSAT-based heuristics formulate the planning problem as a MaxSAT (Maximum Satisfiability) problem and
use SAT solvers to compute an admissible heuristic estimate of the remaining work to achieve the goal.

 MaxSAT heuristics can be effective for planning problems with complex constraints and non-binary state
variables.

6. Domain-Specific Heuristics:

 Domain-specific heuristics leverage domain knowledge and problem structure to derive more informed
estimates of the distance to the goal.

 These heuristics may exploit specific features of the planning domain, such as state variables, action costs,
goal structure, and problem constraints, to guide the search effectively.

7. Relaxed Plan Length Heuristic:

 The relaxed plan length heuristic estimates the number of actions required to achieve the goal under relaxed
problem constraints.

 It provides a lower bound on the actual plan length needed to achieve the goal and can be used as a
heuristic for guiding search algorithms towards more promising regions of the state space.
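As a concrete illustration of delete relaxation, the following Python sketch computes an additive-style heuristic (often written h_add): delete lists are ignored, and each fact's estimated cost is the cheapest achieving action's cost plus the summed costs of that action's preconditions. The three-action domain is an illustrative assumption:

```python
# A sketch of the additive heuristic h_add under delete relaxation.

import math

ACTIONS = [  # (name, preconditions, add_list, cost); deletes are ignored
    ('a1', {'p'},      {'q'},      1),
    ('a2', {'q'},      {'r'},      1),
    ('a3', {'p', 'r'}, {'goal_f'}, 1),
]

def h_add(state, goal):
    """Estimated total cost of reaching every goal fact from `state`."""
    cost = {f: 0 for f in state}              # facts already true cost 0
    changed = True
    while changed:                            # relax until a fixed point
        changed = False
        for name, pre, add, c in ACTIONS:
            if all(p in cost for p in pre):
                pre_cost = sum(cost[p] for p in pre) + c
                for f in add:
                    if pre_cost < cost.get(f, math.inf):
                        cost[f] = pre_cost
                        changed = True
    return sum(cost.get(f, math.inf) for f in goal)

print(h_add({'p'}, {'goal_f'}))   # 3: r costs 2 (a1 then a2), plus a3's own cost of 1
```

Because deletes are ignored and shared preconditions are counted once per subgoal, h_add can overestimate; the related h_max variant (taking the maximum instead of the sum) is admissible but weaker.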

These are just a few examples of heuristics used in classical planning. Effective heuristics can significantly improve
the performance of planning algorithms by guiding the search towards relevant parts of the state space and reducing
the computational effort required to find solutions. The choice of heuristic depends on the characteristics of the
planning problem and the available domain knowledge.

Hierarchical planning

Hierarchical planning in artificial intelligence involves organizing planning tasks into a hierarchy of levels, where each
level represents a different level of abstraction or granularity. This hierarchical structure allows for more efficient
planning by decomposing complex tasks into simpler subtasks and coordinating their execution to achieve higher-
level goals. Here's an overview of hierarchical planning in AI:

Key Components of Hierarchical Planning:

1. Abstraction Levels: Hierarchical planning involves organizing planning tasks into multiple abstraction levels,
ranging from high-level goals to low-level actions.
2. Task Decomposition: At each level of the hierarchy, planning tasks are decomposed into subtasks or actions
that contribute to achieving higher-level objectives.

3. Inter-Level Coordination: Coordination mechanisms are used to manage interactions between different
levels of the hierarchy, ensuring that lower-level plans contribute to the achievement of higher-level goals.

4. Plan Refinement: Hierarchical planning involves refining high-level plans into more detailed and executable
plans at lower levels of the hierarchy.

Hierarchical Task Networks (HTNs):

Hierarchical Task Networks (HTNs) are a formal framework for representing and reasoning about hierarchical
planning problems. In HTNs:

 Tasks are organized into a hierarchy, with higher-level tasks decomposed into lower-level subtasks.

 Each task has preconditions that must be satisfied before it can be executed and postconditions that are
achieved upon completion.

 Constraints and ordering constraints can be specified to control the execution of tasks within the hierarchy.
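A minimal Python sketch of HTN-style decomposition is shown below: compound tasks expand through methods into subtasks until only primitive actions remain. The travel domain and the method table are illustrative assumptions (a real HTN planner would also check preconditions and choose among alternative methods):

```python
# Toy HTN-style task decomposition.

METHODS = {  # compound task -> ordered list of subtasks (one method each here)
    'travel':     ['get_ticket', 'board', 'ride', 'alight'],
    'get_ticket': ['find_machine', 'pay'],
}
PRIMITIVE = {'find_machine', 'pay', 'board', 'ride', 'alight'}

def decompose(task):
    """Recursively expand a task into a flat sequence of primitive actions."""
    if task in PRIMITIVE:
        return [task]
    plan = []
    for sub in METHODS[task]:
        plan.extend(decompose(sub))
    return plan

print(decompose('travel'))
# ['find_machine', 'pay', 'board', 'ride', 'alight']
```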

Hierarchical Planning Algorithms:

Several algorithms are used for hierarchical planning, including:

1. Decomposition-based Approaches: These algorithms decompose high-level goals into subgoals and
recursively decompose them into lower-level tasks until primitive actions are reached.

2. Partial Order Planning: Partial order planning algorithms construct plans as partially ordered sets of actions,
allowing for flexible ordering of actions to achieve goals.

3. Hierarchical Task Decomposition: This approach decomposes planning tasks into a hierarchy of subtasks and
uses heuristics to guide the search for feasible plans.

Applications of Hierarchical Planning:

1. Robotics: Hierarchical planning is used in robotics for task planning, motion planning, and task execution.
Robots can use hierarchical planning to plan and execute complex tasks such as navigation, manipulation,
and assembly.

2. Autonomous Agents: Autonomous agents in video games, virtual environments, and intelligent systems use
hierarchical planning to make decisions and achieve objectives in complex and dynamic environments.

3. Resource Allocation: Hierarchical planning techniques are used in resource allocation problems, scheduling
tasks, and optimizing workflows in various domains such as manufacturing, logistics, and project
management.

4. Multi-Agent Systems: Hierarchical planning is used in multi-agent systems to coordinate the actions of
multiple agents and achieve collective goals or objectives.

In summary, hierarchical planning in AI provides a flexible and scalable approach to solving complex planning
problems by organizing tasks into a structured hierarchy and coordinating their execution to achieve goals
efficiently. It allows for the decomposition of complex tasks into simpler subtasks and provides mechanisms for
coordinating and refining plans at different levels of abstraction.

Non-deterministic domains

Non-deterministic domains in artificial intelligence refer to environments or systems where the outcome of actions
or events is uncertain or probabilistic. In such domains, the same action taken in the same state may lead to
different outcomes with certain probabilities. Modeling and reasoning in non-deterministic domains require
techniques that can handle uncertainty and probabilistic outcomes effectively. Here are some key aspects of non-
deterministic domains:

1. Uncertainty:

 Non-deterministic domains often involve uncertainty about the outcome of actions or events due to factors
such as incomplete information, noisy sensors, or stochastic processes.

 Uncertainty can arise in various forms, including probabilistic transitions between states, uncertain
observations, and unknown initial conditions.

2. Probabilistic Transitions:

 In non-deterministic domains, the transition from one state to another following an action is probabilistic
rather than deterministic.

 Each action may have multiple possible outcomes, each associated with a probability distribution over
possible successor states.

 Agents must reason about these probabilistic transitions when planning and decision-making.

3. Markov Decision Processes (MDPs):

 Markov Decision Processes (MDPs) are a mathematical framework for modeling decision-making in non-
deterministic domains with probabilistic transitions.

 In MDPs, the environment is modeled as a set of states, actions, transition probabilities, rewards, and a
discount factor.

 Agents make decisions based on the current state and select actions to maximize expected cumulative
rewards over time.

4. Partially Observable Markov Decision Processes (POMDPs):

 Partially Observable Markov Decision Processes (POMDPs) extend MDPs to handle environments where the
agent's observations are partial or incomplete.

 In POMDPs, the agent maintains a belief state representing the probability distribution over possible states
given the history of observations and actions.

 Planning and decision-making in POMDPs involve reasoning about the belief state and selecting actions to
maximize expected cumulative rewards.

5. Uncertainty Representation:

 Uncertainty in non-deterministic domains can be represented using probability distributions, belief states,
Bayesian networks, or other probabilistic models.

 Agents use probabilistic reasoning techniques such as Bayesian inference, Monte Carlo methods, and
particle filters to update beliefs and make decisions under uncertainty.

6. Stochastic Processes:

 Stochastic processes are used to model the evolution of non-deterministic systems over time.

 Examples of stochastic processes include random walks, Markov chains, and continuous-time stochastic
processes.

 Agents reason about stochastic processes to predict future states, estimate probabilities, and plan actions
accordingly.
7. Applications:

 Non-deterministic domains arise in various real-world applications, including robotics, autonomous systems,
decision support systems, finance, healthcare, and natural language processing.

 Examples include robot navigation in dynamic environments, autonomous vehicle control, medical diagnosis
under uncertainty, and probabilistic language understanding.

In summary, non-deterministic domains pose challenges for modeling, planning, and decision-making in artificial
intelligence. Techniques such as MDPs, POMDPs, and probabilistic reasoning are essential for addressing uncertainty
and probabilistic outcomes effectively in these domains.

Time-schedule and resources analysis

Time, schedule, and resource analysis in artificial intelligence (AI) involves the use of algorithms and techniques to
manage and optimize the allocation of time, scheduling of tasks, and utilization of resources in various applications.
Here's an overview of how AI contributes to time, schedule, and resource analysis:

1. Task Scheduling and Optimization:

AI techniques such as constraint satisfaction, heuristic search, genetic algorithms, and simulated annealing are used
to schedule tasks efficiently while considering constraints such as deadlines, dependencies, and resource availability.

 Constraint Satisfaction: Constraint satisfaction techniques ensure that tasks are scheduled in a way that
satisfies all constraints and requirements.

 Heuristic Search: Heuristic search algorithms explore the space of possible schedules to find solutions that
optimize specific criteria, such as minimizing completion time or maximizing resource utilization.

 Genetic Algorithms: Genetic algorithms can be used to evolve schedules over multiple iterations, allowing
for exploration of a wide range of possible solutions.

 Simulated Annealing: Simulated annealing is a probabilistic optimization technique that finds near-optimal
schedules by simulating the annealing process in metallurgy, gradually decreasing the likelihood of accepting
worse solutions.
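As a concrete illustration of the last technique, the following Python sketch uses simulated annealing to search for a task order that minimizes total weighted completion time; the task durations, weights, and cooling schedule are illustrative assumptions:

```python
# Simulated annealing for a toy single-machine scheduling problem.

import math
import random

tasks = {'A': (3, 2), 'B': (1, 5), 'C': (4, 1), 'D': (2, 4)}  # name: (duration, weight)

def cost(order):
    """Total weighted completion time of the schedule."""
    t = total = 0
    for name in order:
        duration, weight = tasks[name]
        t += duration
        total += weight * t
    return total

def anneal(order, temp=10.0, cooling=0.995, steps=5000):
    random.seed(0)                            # fixed seed for a repeatable demo
    current = list(order)
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        cand = list(current)
        cand[i], cand[j] = cand[j], cand[i]   # propose swapping two tasks
        delta = cost(cand) - cost(current)
        # always accept improvements; accept worse moves with prob e^(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
        temp *= cooling                       # gradually lower the temperature
    return current

schedule = anneal(list(tasks))
print(schedule, cost(schedule))   # the optimal order here is ['B', 'D', 'A', 'C'], cost 39
```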

2. Resource Allocation and Management:

AI plays a crucial role in optimizing the allocation and management of resources such as manpower, equipment, and
facilities in various domains.

 Resource Allocation Algorithms: Resource allocation algorithms aim to distribute resources effectively to
maximize productivity and minimize costs.

 Multi-Agent Systems: Multi-agent systems employ AI techniques for distributed resource allocation, where
autonomous agents negotiate and coordinate resource usage based on local and global objectives.

 Dynamic Resource Management: AI enables dynamic resource management systems that can adapt to
changing demands, allocate resources in real-time, and optimize resource utilization based on evolving
priorities and constraints.

3. Predictive Analytics and Forecasting:

AI-powered predictive analytics and forecasting techniques analyze historical data and patterns to predict future
trends, demand, and resource requirements.

 Time Series Analysis: Time series analysis methods, including ARIMA (AutoRegressive Integrated Moving Average) models, exponential smoothing, and machine learning algorithms, are used to forecast future resource demands and schedule tasks accordingly (a minimal smoothing sketch follows this list).
 Predictive Maintenance: AI models predict equipment failures and maintenance needs based on sensor
data, enabling proactive scheduling of maintenance activities to minimize downtime and optimize resource
usage.
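As a concrete illustration of the time-series techniques above, here is a minimal Python sketch of simple exponential smoothing for a one-step-ahead demand forecast; the demand series and smoothing factor are illustrative assumptions:

```python
# Simple exponential smoothing: blend each new observation with the old level.

def exponential_smoothing(series, alpha=0.5):
    """Return the smoothed level, used as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

demand = [100, 104, 99, 107, 111, 108]   # e.g. weekly resource demand
print(round(exponential_smoothing(demand), 1))   # 107.7
```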

4. Project Management and Optimization:

AI supports project management by automating repetitive tasks, optimizing resource allocation, and improving
decision-making processes.

 Project Scheduling Software: AI-powered project management tools help create, optimize, and visualize
project schedules, dependencies, and critical paths.

 Risk Analysis and Mitigation: AI techniques analyze project risks, identify potential bottlenecks, and suggest
mitigation strategies to ensure projects stay on schedule and within budget.

 Optimization Algorithms: Optimization algorithms, such as linear programming, integer programming, and
dynamic programming, are used to optimize project schedules and resource allocation under various
constraints and objectives.

5. Real-Time Decision-Making:

AI enables real-time decision-making by analyzing streaming data, detecting anomalies, and dynamically adjusting
schedules and resource allocation in response to changing conditions.

 Stream Processing and Complex Event Processing: Stream processing techniques analyze real-time data
streams to detect patterns, trends, and anomalies that may impact schedules or resource availability.

 Reinforcement Learning: Reinforcement learning algorithms enable agents to learn optimal scheduling and
resource allocation policies through trial and error, adapting to changing environments and objectives over
time.

In summary, AI techniques empower organizations to analyze, optimize, and manage time, schedules, and resources
effectively across various domains, leading to improved efficiency, productivity, and decision-making.
