AI Test1 Notes

Q. What is Artificial Intelligence?

A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence.

1. Agents and Environments: An agent is essentially anything that can perceive its surroundings through sensors and take actions based on that perception through actuators.

The environment is what the agent interacts with and perceives.

2. Coupling Between Them: The effectiveness of an agent's actions depends on its understanding of the environment and how well it can manipulate it.

3. Rational Agent

A rational agent is one that strives to make the best decisions possible
given its perceptions of the environment and its goals. In essence, a
rational agent behaves optimally, aiming to achieve its objectives
efficiently.

4. Environment Complexity:

Some environments are more challenging than others, requiring agents to adapt and make more sophisticated decisions to achieve their goals.

The nature of the environment directly impacts how well an agent can perform.

Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

The autonomy of an agent is the extent to which its behaviour is determined by its own experience, rather than by the knowledge built in by its designer.

PEAS
PEAS: Performance measure, Environment, Actuators,
Sensors
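
For illustration, a PEAS description of a hypothetical automated-taxi agent can be written down as a small Python dictionary; the specific entries below are assumptions chosen for the example, not part of these notes.

# Hypothetical PEAS description of an automated-taxi agent (entries are illustrative).
taxi_peas = {
    "Performance measure": ["safety", "reach destination", "minimize trip time and cost", "obey traffic laws"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors": ["cameras", "GPS", "speedometer", "odometer", "sonar"],
}

for component, items in taxi_peas.items():
    print(component + ": " + ", ".join(items))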

Environment types
Fully observable (vs. partially observable)

Deterministic (vs. stochastic)

Episodic (vs. sequential)

Static (vs. dynamic)

Discrete (vs. continuous)

Single agent (vs. multiagent)
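
As an illustration, two common example tasks can be classified along these dimensions as follows; the classifications are standard textbook-style assumptions, not taken from these notes.

# Illustrative environment-type classification for two example tasks (assumed values).
environment_types = {
    "Chess with a clock": {
        "observable": "fully", "deterministic": True,  # treating the opponent as part of the environment
        "episodic": False, "static": "semi", "discrete": True, "single_agent": False,
    },
    "Taxi driving": {
        "observable": "partially", "deterministic": False,
        "episodic": False, "static": False, "discrete": False, "single_agent": False,
    },
}

for task, properties in environment_types.items():
    print(task, properties)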

The structure of agents

• This program's main job is to take in information (percepts) from its environment and decide what actions to take based on that information.

Agent = Architecture + Program

Table-driven agent

• A table-driven agent is a type of artificial intelligence agent that makes decisions based on a pre-defined table of rules or values.

• Instead of relying on complex algorithms or learning processes, a table-driven agent looks up actions in a table based on the current state of the environment.

• This approach allows the agent to make decisions quickly and efficiently by referencing stored mappings between percept sequences and actions.
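
A minimal runnable sketch of a table-driven agent in Python, assuming a toy two-square vacuum world; the table entries are illustrative and a real table would have to cover every possible percept sequence.

# Table-driven agent sketch for a toy two-square vacuum world (illustrative entries only).
percepts = []  # the percept sequence observed so far

# The table maps an entire percept sequence (as a tuple) to an action.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    # Append the new percept and look up the full sequence in the table.
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("B", "Dirty")))  # Suck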


Agents are grouped into four types based on their degree of perceived intelligence and capability.

Simple reflex agents

Model based Reflex agents (with state)

Goal-based agents

Utility-based agents

Simple reflex agents

• These agents select actions on the basis of the current percept, ignoring the rest of the percept history.

• The environment should be fully observable. E.g., Tic-tac-toe.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)   # generates an abstracted description of the current state from the percept
  rule ← RULE-MATCH(state, rules)    # returns the first rule in the set of rules that matches the given state description
  action ← rule.ACTION

  return action
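
A small runnable Python counterpart of this pseudocode, assuming the two-square vacuum world; the rules and the interpret step are illustrative assumptions.

# Simple reflex agent sketch for a toy two-square vacuum world (illustrative rules).
def interpret_input(percept):
    # Here the percept is already an abstract (location, status) pair.
    return percept

rules = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rules.get(state, "NoOp")  # rule matching by direct lookup

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right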

Model-based reflex agents

• Works in a partially observable environment (the agent does not have complete information about the environment).

• The agent should maintain an internal state that captures the unobserved aspects of the current environment, based on its percept history.
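
A hedged Python sketch of the internal-state idea, again in the toy vacuum world; the update rule and the choice of actions are assumptions made for illustration.

# Model-based reflex agent sketch: remembers which squares are believed clean,
# even though each percept only reports the square the agent is currently in.
state = {"A": "Unknown", "B": "Unknown"}  # internal model of the world

def update_state(state, percept):
    location, status = percept
    state[location] = status  # fold the new percept into the model
    return state

def model_based_reflex_agent(percept):
    global state
    state = update_state(state, percept)
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Use the internal model to decide whether unfinished work may remain elsewhere.
    other = "B" if location == "A" else "A"
    if state[other] != "Clean":
        return "Right" if location == "A" else "Left"
    return "NoOp"

print(model_based_reflex_agent(("A", "Clean")))  # Right, since B may still be dirty
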
Utility-based agents

• A utility function maps a state onto a real number that describes how efficiently each action achieves the goal.

• It describes the associated degree of “happiness”, “goodness”, or “success”.

• It is useful when there are multiple possible alternatives and the agent has to choose which one to perform as the best action.
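
A minimal sketch of utility-based action selection; the candidate actions, the transition model, and the utility values are made-up assumptions for illustration.

# Utility-based action selection sketch (transition model and utility are assumed).
def result(state, action):
    # Hypothetical transition model: moving changes a 1-D position by +/- 1.
    return state + (1 if action == "Right" else -1)

def utility(state):
    # Made-up utility: states closer to position 5 are "better".
    return -abs(5 - state)

def utility_based_agent(state, actions=("Left", "Right")):
    # Choose the action whose resulting state has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent(2))  # Right, because moving right gets closer to 5
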
A learning agent can learn from its past experiences, i.e., it has learning capabilities.

It starts acting with basic knowledge and is then able to act and adapt automatically.

A learning agent can be divided into four conceptual components:

1. The learning element, responsible for making improvements by learning from the environment.

2. The performance element, responsible for selecting external actions.

3. The critic, which provides feedback on how well the agent is doing with respect to a fixed performance standard.

4. The problem generator, responsible for suggesting actions that lead to new and informative experiences.

Problem‐solving agents

• A problem-solving agent is an intelligent agent that operates by identifying and executing solutions to specific problems or tasks.

• These agents:
analyze the current state of their environment,
determine the desired goal (outcome), and
generate and execute a sequence of actions to achieve that goal.

The process of looking for a sequence of actions that reaches the goal is called search.

A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
Well-defined problems and solutions

A problem can be defined formally by five components:

1. The initial state that the agent starts in.

2. A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions is applicable in s.

3. A description of what each action does; this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s.

Together, the initial state, actions, and transition model implicitly define
the state space of the problem—the set of all states reachable from the
initial state by any sequence of actions.

The state space forms a directed network or graph in which the nodes are
states and the links between nodes are actions.

A path in the state space is a sequence of states connected by a sequence of actions.

4. The path cost function, which assigns a numeric cost to each path.

5. The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
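
A compact Python sketch of these components, using a hypothetical two-square vacuum world as the concrete problem; the state encoding and the unit step cost are assumptions made for illustration.

# Problem formulation sketch: initial state, ACTIONS(s), RESULT(s, a),
# a goal test, and an assumed unit step cost, on a toy vacuum world.
class VacuumProblem:
    def __init__(self):
        # A state is (agent location, status of square A, status of square B).
        self.initial_state = ("A", "Dirty", "Dirty")

    def actions(self, s):
        return ["Left", "Right", "Suck"]  # every action is applicable in every state here

    def result(self, s, a):
        loc, a_status, b_status = s
        if a == "Left":
            return ("A", a_status, b_status)
        if a == "Right":
            return ("B", a_status, b_status)
        # "Suck" cleans the current square.
        if loc == "A":
            return (loc, "Clean", b_status)
        return (loc, a_status, "Clean")

    def goal_test(self, s):
        return s[1] == "Clean" and s[2] == "Clean"

    def step_cost(self, s, a):
        return 1  # assumed unit cost; a path cost is the sum of its step costs

problem = VacuumProblem()
s = problem.result(problem.initial_state, "Suck")
print(s, problem.goal_test(s))  # ('A', 'Clean', 'Dirty') False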

Measuring problem-solving performance

• Completeness: Is the algorithm guaranteed to find a solution when there is one?

• Optimality: Does the strategy find the optimal solution?

• Time complexity: How long does it take to find a solution?

• Space complexity: How much memory is needed to perform the search?

b, the branching factor or maximum number of successors of any node.

d, the depth of the shallowest goal node (i.e., the number of steps along
the path from the root).

m, the maximum length of any path in the state space.

UNINFORMED SEARCH STRATEGIES(Blind Search)

• The term means that the strategies have no additional information about states beyond that provided in the problem definition.

• All they can do is generate successors and distinguish a goal state from
a non-goal state.

• All search strategies are distinguished by the order in which nodes are
expanded.

Breadth-first search

• Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.

• A FIFO queue is used for the frontier.

• Its time complexity is O(b^d) and its space complexity is also O(b^d).
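
A minimal breadth-first search sketch in Python over the problem interface assumed above (FIFO frontier, goal test applied when a node is generated); this is an illustrative implementation, not code from these notes.

from collections import deque

def breadth_first_search(problem):
    # Returns a list of actions from the initial state to a goal, or None.
    node = (problem.initial_state, [])  # (state, actions taken so far)
    if problem.goal_test(node[0]):
        return node[1]
    frontier = deque([node])  # FIFO queue
    explored = set()
    while frontier:
        state, path = frontier.popleft()
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and all(child != s for s, _ in frontier):
                if problem.goal_test(child):  # goal test on generation
                    return path + [action]
                frontier.append((child, path + [action]))
    return None

# With the toy problem sketched earlier:
# print(breadth_first_search(VacuumProblem()))  # a shortest solution, e.g. ['Right', 'Suck', 'Suck']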


Uniform-cost search
• Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n).

• This is done by storing the frontier as a priority queue ordered by g.

Two differences from breadth-first search:

1. The goal test is applied to a node when it is selected for expansion rather than when it is first generated.

2. A test is added in case a better path is found to a node currently on the frontier.

Uniform-cost search is guided by path costs rather than depths, so its complexity is not easily characterized in terms of b and d.
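
A hedged uniform-cost search sketch using a priority queue ordered by g(n); it assumes the problem object exposes a step_cost(s, a) method as in the earlier sketch, and it applies the goal test at expansion time.

import heapq

def uniform_cost_search(problem):
    # Expand the frontier node with the lowest path cost g(n).
    frontier = [(0, problem.initial_state, [])]  # priority queue ordered by g
    best_g = {problem.initial_state: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):  # goal test when the node is selected for expansion
            return path
        if g > best_g.get(state, float("inf")):
            continue  # stale queue entry: a cheaper path to this state was found later
        for action in problem.actions(state):
            child = problem.result(state, action)
            new_g = g + problem.step_cost(state, action)
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g  # keep only the cheapest known path to each state
                heapq.heappush(frontier, (new_g, child, path + [action]))
    return None
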
Depth-first search

Depth-first search always expands the deepest node in the current frontier of the search tree.

The search proceeds immediately to the deepest level of the search tree,
where the nodes have no successors.

As those nodes are expanded, they are dropped from the frontier, so then
the search “backs up” to the next deepest node that still has unexplored
successors.
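
An illustrative recursive depth-first search sketch over the same assumed problem interface; avoiding repeated states along the current path is an assumption of this sketch rather than something stated in the notes.

def depth_first_search(problem, state=None, path=None, visited=None):
    # Expand the deepest node first, backing up when a branch has no unexplored successors.
    if state is None:
        state, path, visited = problem.initial_state, [], set()
    if problem.goal_test(state):
        return path
    visited = visited | {state}  # states on the current path, to avoid looping
    for action in problem.actions(state):
        child = problem.result(state, action)
        if child in visited:
            continue
        solution = depth_first_search(problem, child, path + [action], visited)
        if solution is not None:
            return solution
    return None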

Depth-limited search

Its time complexity is O(b^l) and its space complexity is O(bl), where l is the depth limit.

Depth-first search can be viewed as a special case of depth-limited search with l = ∞.
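
A short depth-limited search sketch with limit l; returning the string "cutoff" to signal that the limit was hit is a convention assumed here for illustration.

def depth_limited_search(problem, limit):
    # Depth-first search that stops expanding below the depth limit.
    def recurse(state, path, limit):
        if problem.goal_test(state):
            return path
        if limit == 0:
            return "cutoff"  # the limit was reached before a goal was found
        cutoff_occurred = False
        for action in problem.actions(state):
            child = problem.result(state, action)
            outcome = recurse(child, path + [action], limit - 1)
            if outcome == "cutoff":
                cutoff_occurred = True
            elif outcome is not None:
                return outcome
        return "cutoff" if cutoff_occurred else None
    return recurse(problem.initial_state, [], limit)

# print(depth_limited_search(VacuumProblem(), 3))  # e.g. a three-action solution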
