Artificial Intelligence Note
The Turing Test approach has led to the development of a wide range of AI
technologies, including chatbots, virtual assistants, and recommendation
engines.
DEFINING THE SCOPE AND VIEW OF ARTIFICIAL INTELLIGENCE
Thinking rationally: The “laws of thought” approach
For example, if we know that Samson is a man and that all men are good,
we can conclude that Samson is a good man. This approach of using logical
notations and inferences is appealing because it mirrors how our own
minds work.
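The syllogism above can be sketched as a tiny forward-chaining inference loop. The fact and rule encodings and the `infer` helper below are illustrative assumptions, not a full logic engine:

```python
# A minimal sketch of the "laws of thought" approach: encode facts and a
# rule, then derive a new fact by repeated logical inference.

facts = {("man", "Samson")}   # Samson is a man
rules = [("man", "good")]     # all men are good

def infer(facts, rules):
    """Apply each rule (if X is A then X is B) until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(infer(facts, rules))  # now includes ("good", "Samson")
```

Real logic-based systems use far richer notations (first-order logic, unification), but the loop captures the core idea of deriving conclusions mechanically from premises.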
Some tasks are performed matter-of-factly and routinely by people and some other animals.
Other tasks cannot be done by all people and can only be performed by skilled specialists.
TYPICAL AI PROBLEMS
However, when we look at what computer systems have been able to achieve
to date, we see that their achievements include performing sophisticated
tasks like medical diagnosis, symbolic integration, mathematical problem
solving, and playing chess.
On the other hand it has proved to be very hard to make computer systems
perform many routine tasks that all humans and a lot of animals can do.
Examples of such tasks include navigating our way without running into
things, catching prey and avoiding predators.
Robotics
FAMOUS AI SYSTEMS
IBM Deep Blue, a chess-playing supercomputer that defeated international
grandmaster Garry Kasparov.
Internet Agents: The explosive growth of the internet has also led to
growing interest in internet agents to monitor users' tasks, seek needed
information, and to learn which information is most useful.
For example, when a child is born, they don't know anything about the
world. However, over time, they learn and pick up human behavior from
their surroundings. If they perform an undesirable action, they are
punished. Likewise, if they do something good, they are rewarded.
Cognitive AI similarly aims for systems to learn from their environment.
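The reward-and-punishment idea can be sketched as a toy value-update loop. The actions, rewards, and learning rate below are made-up assumptions for illustration; real learning systems are far richer:

```python
# A toy sketch of learning from rewards and punishments: the agent keeps an
# estimate of each action's value and nudges it toward the feedback received.
import random

values = {"good_action": 0.0, "bad_action": 0.0}      # learned estimates
rewards = {"good_action": +1.0, "bad_action": -1.0}   # environment feedback
alpha = 0.5                                           # learning rate

random.seed(0)
for _ in range(20):
    action = random.choice(list(values))              # try both actions
    reward = rewards[action]                          # reward or punishment
    values[action] += alpha * (reward - values[action])  # move estimate toward reward

best = max(values, key=values.get)
print(best)  # the agent learns to prefer the rewarded action
```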
APPROACHES TO ARTIFICIAL INTELLIGENCE
Strong AI, also known as artificial general intelligence or AGI, is
capable of behaving and performing actions in the same ways human beings
can. AGI mimics human intelligence, and is able to solve problems and
learn new skills in ways similar to our own.
The more an AI system approaches the abilities of a human being, with all
the intelligence, emotion, and broad applicability of knowledge, the more
‘strong’ the AI system is considered. Strong AI maintains that suitably
programmed machines are capable of cognitive mental states.
Search engines: Google and other search engines are also examples of weak
AI. When you type in your question, the algorithm gets to work to run
that question through its vast database to classify it and come back with
answers.
PRACTICAL EXAMPLES OF WEAK/NARROW AI
Autonomous vehicles: AI that allows vehicles to operate without a human
driver is weak AI. The challenge, since this AI doesn’t possess full
cognitive abilities like a human brain, is to program and train the AI
regarding any potential road hazard or situation the vehicle might
encounter.
Robots: Currently, robots don’t have a mind of their own. Drones and
manufacturing robots operate with narrow AI and are able to complete only
the specific tasks they are programmed for.
LIMITATIONS OF AI
Today’s successful AI systems operate in well-defined domains and employ
narrow, specialized knowledge. Common sense knowledge is needed to
function in complex, open-ended worlds. Such a system also needs to
understand unconstrained natural language. However, these capabilities
are not yet fully present in today’s intelligent systems.
Exercise: List five tasks that computers are unlikely to be able to do in the next five years.
Robotics: We have been able to make vehicles that are mostly autonomous.
A robot is a programmable autonomous machine capable of sensing its
environment, carrying out computations to make decisions, and performing
actions in the real world. The term robotics describes the field of study
focused on developing robots and automation.
Modern cars are equipped with a wide range of sensors that gather data
from the vehicle’s surroundings and internal systems. These sensors
provide crucial information to the car’s AI system, enabling it to make
informed decisions.
INTRODUCTION TO AGENT
An agent perceives its environment through sensors.
The complete set of inputs at a given time is called a percept.
The current percept can influence the actions of an agent.
The agent can change the environment through actuators.
Actuators in a car are responsible for translating the decisions made by
the AI system into physical actions that affect the car’s movement and
behavior.
A sensor is a device which detects changes in the environment and sends
the information to other electronic devices. An agent observes its
environment through sensors.
Agent Behavior: This refers to the actions and responses an agent
exhibits in its environment, based on the information it receives through
its sensors.
An operation involving an effector is called an action. Actions can be
grouped into action sequences. The agent can have goals which it tries to
achieve.
EXAMPLES OF AGENTS
Human Agent: A human agent refers to a human being who interacts with
their environment. Humans possess sensory organs such as eyes for visual
perception and ears for auditory input, which serve as sensors.
Additionally, hands, legs, and vocal tract function as actuators,
enabling humans to manipulate objects, move around, and communicate.
Human agents rely on their senses to perceive the environment and use
their limbs and voice to effect changes in the world.
Some examples of robots are Xavier from CMU, COG from MIT, and Aibo from SONY.
Software Agent: A software agent is a program that operates in a digital
environment. Sensors for software agents can include keyboard input,
mouse movements, or even data read from files or network streams. These
agents use these inputs to make decisions and perform actions. The
outputs are usually displayed on a screen, sent over a network, or stored
in files.
Gaming agents: These are agents that are designed to play games, either
against human opponents or other agents. Examples of gaming agents
include chess-playing agents and poker-playing agents.
Fraud detection agents: These are agents that are designed to detect
fraudulent behavior in financial transactions. They can analyze patterns
of behavior to identify suspicious activity and alert authorities.
Examples of fraud detection agents include those used by banks and credit
card companies.
AGENT PERFORMANCE
An agent function implements a mapping that connects sequences of
percepts to corresponding actions.
The ideal mapping specifies which actions an agent ought to take at any
point in time.
Perfect rationality assumes that the rational agent knows everything and
will take the action that maximizes its utility. Human beings do not
satisfy this definition of rationality.
Rational Action is the action that maximizes the expected value of the
performance measure given the percept sequence to date.
RATIONALITY
1.Performance Measure
The performance measure evaluates how well the agent is achieving its
goals. It can be defined in various ways depending on the application.
For instance, in a self-driving car, the performance measure could
include safety, speed, and passenger comfort.
2. Rationality
Rationality is not just about achieving the best possible outcome but
also about acting optimally given the information and computational
resources available. An agent is considered rational if it does the
“right thing,” given what it knows.
3. Autonomy
A rational agent should operate autonomously to a certain extent, making
decisions and taking actions without human intervention. This involves
learning from the environment and updating its knowledge base.
AGENT ENVIRONMENT
Chess – the board is fully observable, and so are the opponent’s moves.
Autonomous vehicle – the environment is partially observable because
what’s around the corner is not known.
In an episodic environment, the action taken in one state has no effect
on the next state. Real-life example: a support bot (agent) answers one
question, then another, and so on. Each question and answer is a single
episode.
For example, customers can use a utility-based agent to search for flight
tickets with minimum traveling time, irrespective of the price.
CLASSES OF INTELLIGENT AGENT
Table-Driven Agent : The table driven agent program is invoked for each
new percept and returns an action each time using a table that contains
the appropriate actions for every possible percept sequence.
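The table-driven idea can be sketched in a few lines. The percepts, actions, and table entries below are illustrative assumptions (a trivial vacuum-world style example), not a standard implementation:

```python
# A minimal sketch of a table-driven agent: the table maps every percept
# sequence seen so far to an action.

table = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
    ("clean", "clean"): "move",
}

percept_history = []

def table_driven_agent(percept):
    """Append the new percept and look up the action for the whole sequence."""
    percept_history.append(percept)
    return table.get(tuple(percept_history), "no-op")

print(table_driven_agent("clean"))   # move
print(table_driven_agent("dirty"))   # suck
```

The sketch also shows the approach's weakness: the table must contain an entry for every possible percept sequence, which grows explosively.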
What is needed is the ability to react quickly to the present. So the
agent uses a minimal internal state representation and complements it at
each time step with fresh sensor input.
When you add a second level of behavior called wander, the robot will
wander about in random directions. This behavior “subsumes” or overrides
the lower-level “avoid collision” behavior. But critically, if an object
appears in its path, “avoid collision” kicks in again and the robot will
back off for a moment.
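The layered behavior described above can be sketched as follows. The sensor flag, behavior names, and actions are illustrative assumptions, not a real robot controller:

```python
# A rough sketch of the subsumption idea: higher-level behaviors run by
# default, but a lower-level safety behavior takes over when triggered.
import random

def avoid_collision(obstacle_near):
    """Lowest-level behavior: back off when an object is in the path."""
    return "back off" if obstacle_near else None

def wander():
    """Second-level behavior: move in a random direction."""
    return random.choice(["forward", "left", "right"])

def control(obstacle_near):
    # The safety behavior pre-empts wandering whenever it fires.
    action = avoid_collision(obstacle_near)
    return action if action is not None else wander()

print(control(obstacle_near=True))   # back off
print(control(obstacle_near=False))  # some wandering move
```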
The agents follow predefined rules and often employ search and planning
algorithms to determine the most efficient path towards their goals.
Goals alone are not really enough to generate high-quality behavior. For
example, there are many action sequences that will get the taxi to its
destination, thereby achieving the goal, but some are quicker, safer,
more reliable, or cheaper than others.
Utility-based agents act based not only on goals but also on the best way
to achieve them. They choose actions with the highest expected utility,
which measures how good the outcomes are.
The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
UTILITY BASED AGENT ARCHITECTURES
Utility-based agents evaluate different courses of action based on a
utility function. This function assigns a specific numerical value to
each possible outcome, representing how desirable that outcome is for
the agent.
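A utility-based choice can be sketched with the flight-ticket example above, where the agent minimizes travelling time. The flight names and durations are made-up illustrative data:

```python
# A small sketch of a utility-based agent: map each outcome to a real
# number and pick the action with the highest utility.

flights = {
    "flight_A": {"travel_hours": 9.0},
    "flight_B": {"travel_hours": 6.5},
    "flight_C": {"travel_hours": 11.0},
}

def utility(flight):
    """Shorter trips get higher utility, irrespective of price."""
    return -flights[flight]["travel_hours"]

best = max(flights, key=utility)
print(best)  # flight_B, the shortest trip
```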
Goal State: The ideal configuration denoting where the agent may end the
search. It is a partial description of the solution.
Intermediate states: These are states in between initial states and goal
states.
Actions and Transitions: These cause transitions from one state to
another, called a successor state. Actions or transitions are also known
as operators. In a pathfinding problem, operators correspond to possible
movements like moving up, down, left, or right.
What is the optimal solution? Obviously the shortest path from the
initial state to the goal state is the best one: it requires the fewest
operations of all possible solution paths. The solution paths form a tree
structure where each node is a state, so searching is nothing but
exploring the tree from the root.
SEARCH PROBLEM
In depth first search, newly explored nodes are added to the beginning of
your Open list. In breadth first search, newly explored nodes are added
to the end of your Open list.
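This difference can be shown with a single generic search loop over a tiny hand-made graph (an illustrative assumption). Only the insertion point into the open list changes:

```python
# Depth-first pushes new nodes at the front of the open list;
# breadth-first appends them at the end. Everything else is identical.
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}

def search(start, goal, depth_first):
    open_list = deque([start])
    visited = {start}
    order = []                      # order in which nodes are expanded
    while open_list:
        node = open_list.popleft()
        order.append(node)
        if node == goal:
            return order
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                if depth_first:
                    open_list.appendleft(nbr)   # front of the open list
                else:
                    open_list.append(nbr)       # end of the open list
    return order

print(search("A", "E", depth_first=True))   # dives down one branch first
print(search("A", "E", depth_first=False))  # expands level by level
```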
DEPTH FIRST SEARCH
This is an algorithm for searching tree or graph data structures. The
algorithm starts at the root node and explores straight down into the
tree as deep as it can go, before backing up and trying different paths.
Unlike breadth-first search, it does not guarantee finding the shortest
path to the goal.
Like tree, we begin with the given source (in tree, we begin with root)
and traverse vertices level by level using a queue data structure. The
only catch here is that, unlike trees, graphs may contain cycles, so we
may come to the same node again. To avoid processing a node more than
once, we use a boolean visited array.
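The queue-plus-visited-array scheme above can be sketched directly. The graph is an illustrative assumption, with a cycle included to show why the visited check matters:

```python
# A minimal sketch of BFS on a graph using a queue and a boolean visited
# array, so each vertex is processed at most once despite the cycle.
from collections import deque

graph = {0: [1, 2], 1: [2], 2: [0, 3], 3: []}  # note the 2 -> 0 cycle

def bfs(source):
    visited = [False] * len(graph)   # boolean visited array
    visited[source] = True
    queue = deque([source])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if not visited[nbr]:     # skip nodes seen before (handles cycles)
                visited[nbr] = True
                queue.append(nbr)
    return order

print(bfs(0))  # visits each vertex exactly once, level by level
```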
ARCHITECTURE OF THE BFS ALGORITHM
BFS is a simple strategy in which the root node is expanded first, then
all the successors of the root node are expanded next, then their
successors, etc.
WHAT IS INFORMED SEARCH?
Traditional programming is used for tasks with clear rules and defined
inputs and outputs. AI is used in situations where it would be too
complex or impractical to write traditional code, like natural language
processing, image recognition, and predictive modeling.
Greedy Search: The idea is to expand the node with the smallest estimated
cost to reach the goal. Greedy algorithms often perform very well. They
tend to find good solutions quickly, although not always optimal ones.
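Greedy best-first search can be sketched as follows. The graph and the estimated costs to the goal are made-up illustrative values:

```python
# A sketch of greedy best-first search: always expand the frontier node
# with the smallest estimated cost to reach the goal.
import heapq

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 7, "A": 2, "B": 4, "G": 0}   # estimated cost to the goal

def greedy_search(start, goal):
    frontier = [(h[start], start)]
    visited = set()
    expanded = []
    while frontier:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        expanded.append(node)
        if node == goal:
            return expanded
        for nbr in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr))
    return expanded

print(greedy_search("S", "G"))  # expands whichever node looks closest first
```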
The previous greedy search considered only the estimated distance of a
node from the goal. A* also accounts for the cost of reaching the current
node from the starting node, combined with the estimated cost of reaching
the goal from the current node. So, the evaluation function becomes:
f(n) = g(n) + h(n)
where:
f(n): estimated total cost of the cheapest path from start to goal passing through node n
g(n): cost of the path from the start node to the current node n
h(n): heuristic estimate of the cost from node n to the goal
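A compact A* sketch on a small grid follows, assuming unit move costs and the Manhattan distance as h(n). The grid, coordinates, and obstacle layout are illustrative:

```python
# A* on a grid: expand the node with the smallest f(n) = g(n) + h(n).
import heapq

def a_star(grid, start, goal):
    """Return the length of the shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]   # (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:  # up/down/left/right
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap, (ng + h((r, c)), ng, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # length of the shortest detour
```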
HILL CLIMBING
Its primary goal is to find the best solution within a given search space
by iteratively improving the current solution.
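The iterative-improvement loop can be sketched on a one-dimensional problem. The objective function and step size are illustrative assumptions:

```python
# A small sketch of hill climbing: repeatedly move to the best neighbouring
# solution until no neighbour improves on the current one.

def objective(x):
    return -(x - 3) ** 2 + 9   # a single peak at x = 3

def hill_climb(x, step=1):
    while True:
        neighbours = [x - step, x + step]
        best = max(neighbours, key=objective)
        if objective(best) <= objective(x):
            return x           # no neighbour is better: a maximum is reached
        x = best

print(hill_climb(0))  # climbs to 3
```

On this single-peaked function the local maximum is also the global one; on rugged search spaces, plain hill climbing can get stuck at a local maximum.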