
1908602-COMPUTATIONAL INTELLIGENCE

UNIT-I
INTRODUCTION
ARTIFICIAL INTELLIGENCE

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

AI programming focuses on cognitive skills that include the following:


 Learning. This aspect of AI programming focuses on acquiring data and
creating rules for how to turn it into actionable information. The rules, which
are called algorithms, provide computing devices with step-by-step instructions
for how to complete a specific task.
 Reasoning. This aspect of AI programming focuses on choosing the right
algorithm to reach a desired outcome.
 Self-correction. This aspect of AI programming is designed to continually
fine-tune algorithms and ensure they provide the most accurate results possible.
 Creativity. This aspect of AI uses neural networks, rules-based systems,
statistical methods and other AI techniques to generate new images, new text,
new music and new ideas.
PROBLEM FORMULATION:
Problem formulation in artificial intelligence (AI) involves defining and
structuring a task or challenge in a way that an AI system can understand and
address it. It is a crucial step in the AI development process, as it sets the
foundation for creating a solution. Here are the key components of problem
formulation in AI:
**Define the Problem:**
- Clearly articulate the problem you want the AI system to solve. This
involves understanding the nature of the problem, its scope, and the specific
goals you want to achieve.
**Input and Output Specification:**
- Specify the input data that the AI system will receive and the desired output
it should produce. This involves defining the features, variables, or
characteristics of the problem that the AI model needs to consider.
**Formalize the Task:**
- Mathematically or logically formalize the problem to make it suitable for
computational methods. This step often involves defining the problem as a
function or set of functions that map inputs to outputs.
**Define Constraints:**
- Identify any constraints or limitations that the AI system must adhere to.
Constraints can include resource limitations, time constraints, ethical
considerations, or specific requirements for the solution.
**Define Success Criteria:**
- Establish metrics or criteria to evaluate the performance of the AI system.
This helps in determining when the system is providing satisfactory solutions.
**Scope and Abstraction:**
- Define the scope of the problem and decide on the appropriate level of
abstraction. This involves deciding which details are relevant for the AI model
and which can be abstracted or simplified.
**Domain Knowledge:**
- Integrate domain knowledge into the problem formulation. Understanding
the context of the problem helps in making informed decisions about how to
approach and solve it.
**Identify Decision-Making Components:**
- If the problem involves decision-making, identify the decision points, the actions available at each point, and the criteria used to choose among them.

PROBLEM DEFINITION:
Problem definition in artificial intelligence (AI) is the process of clearly and
precisely defining a task or challenge that an AI system is intended to address.
This step is crucial for the successful development and deployment of AI
solutions. Here is a more detailed breakdown of the components involved in
problem definition:
**Understanding the Problem:**
- Begin by gaining a thorough understanding of the problem domain. This
involves identifying the key issues, challenges, and goals associated with the
problem. Engage with domain experts and stakeholders to gather insights.
**Problem Scope:**
- Clearly define the boundaries and limitations of the problem. Determine
what aspects will be included and excluded from the problem-solving process.
This helps in managing expectations and focusing efforts on a specific area.
**Stakeholder Requirements:**
- Identify the needs and requirements of the stakeholders who will benefit
from or be affected by the AI solution. Understanding their perspectives is
crucial for tailoring the solution to meet their expectations.
**Data Requirements:**
- Determine the type and quality of data needed for solving the problem.
Define the sources of data, data formats, and any preprocessing steps required to
make the data suitable for AI models.
**Formalization of the Problem:**
- Express the problem in a formal or mathematical framework. This involves
defining the inputs, outputs, and relationships between them. Formulating the
problem in a structured manner makes it amenable to computational
approaches.
**Success Criteria:**
- Establish clear criteria for evaluating the success of the AI solution. This
could involve defining performance metrics, accuracy thresholds, or other
relevant measures based on the goals of the project.
**Constraints:**
- Identify any constraints that need to be considered during the development
of the AI system. This could include resource constraints, ethical
considerations, legal requirements, or specific technical limitations.
**Risk Analysis:**
- Conduct a risk analysis to identify potential challenges, uncertainties, or
obstacles that may arise during the project. Understanding and addressing risks
early in the process can contribute to more effective problem-solving.
**Iterative Refinement:**
- Problem definition is often an iterative process. Refine and adjust the
problem definition as more information becomes available or as the project
progresses.
**Communication:**
- Clearly communicate the problem definition to all stakeholders, including
developers, domain experts, and end-users. Effective communication ensures
that everyone involved has a shared understanding of the problem and its
requirements.
By thoroughly defining the problem, AI practitioners can pave the way for the
development of effective and targeted solutions that align with the goals and
expectations of the stakeholders.
PRODUCTION SYSTEM:
Production system or production rule system is a computer program typically
used to provide some form of artificial intelligence, which consists primarily of
a set of rules about behavior but it also includes the mechanism necessary to
follow those rules as the system responds to states of the world.
Components of Production System

 Global Database: The global database is the central data structure used
by the production system in Artificial Intelligence.
 Set of Production Rules: The production rules operate on the global
database. Each rule usually has a precondition that is either satisfied or
not by the global database. If the precondition is satisfied, the rule can
be applied; applying the rule changes the database.
 A Control System: The control system then chooses which applicable
rule should be applied and ceases computation when a termination
condition on the database is satisfied. If multiple rules are to fire at the
same time, the control system resolves the conflicts.
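The three components above can be illustrated with a minimal sketch. The facts and rules here are invented for the example, not part of any real system:

```python
# A minimal production-system sketch. The rules and facts here are
# invented for illustration; a real system would be far richer.

def run_production_system(database, rules, goal):
    """Apply production rules to the global database until the goal
    fact appears or no applicable rule remains."""
    database = set(database)                   # the global database of facts
    while goal not in database:
        fired = False
        for precondition, conclusion in rules:
            # A rule is applicable when its precondition is satisfied by
            # the database and applying it would actually change the database.
            if precondition <= database and conclusion not in database:
                database.add(conclusion)       # applying the rule changes the database
                fired = True
                break                          # control strategy: fire the first applicable rule
        if not fired:
            return None                        # termination: no rule applies, goal unreachable
    return database

# Example: from the fact "has_fur", infer "mammal" and then "animal".
rules = [({"has_fur"}, "mammal"), ({"mammal"}, "animal")]
print(run_production_system({"has_fur"}, rules, "animal"))
```

Here the `break` implements a trivial conflict-resolution strategy (the first applicable rule wins); a real control system would resolve conflicts more carefully.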

Features of Production system:


1. Simplicity: The structure of each rule in a production system is uniform, as all rules use the "IF-THEN" structure. This uniformity keeps knowledge representation simple and improves the readability of production rules.

2. Modularity: The production rules encode the available knowledge in discrete pieces. Information can be treated as a collection of independent facts which may be added to or deleted from the system with essentially no deleterious side effects.

3. Modifiability: This means the facility for modifying rules. It allows production rules to be developed in a skeletal form first and then refined to suit a specific application.

4. Knowledge-intensive: The knowledge base of the production system stores pure knowledge; it contains no control or programming information. Each production rule is normally written as an English-like sentence, and the problem of semantics is solved by the very structure of the representation.

CONTROL STRATEGIES:

A control strategy in an Artificial Intelligence scenario is a technique that tells us which rule to apply next while searching for the solution of a problem within the problem space. It helps us decide which rule to apply next without getting stuck at any point. These decisions determine how we approach the problem, how quickly it is solved, and even whether it is solved at all.
A control strategy helps to find the solution when more than one rule (or too few rules) applies at a given point in the problem space. A good control strategy has two main characteristics:
Control strategy should cause motion: Each rule applied should cause motion, that is, a change of state. If a strategy causes no motion, the search never moves from the initial state and the problem is never solved.

Control strategy should be systematic: Although the strategy should cause motion, if it does not follow some systematic approach we are likely to revisit the same state many times before reaching the solution, which increases the number of steps. Taking care of only the first characteristic, we may repeat particular useless sequences of operators several times. Being systematic implies a need for global motion (over the course of several steps) as well as local motion (over the course of a single step).

GAME PLAYING:
MINI-MAX ALGORITHM
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.
Step 2: Now we find the utility values for the maximizer. Its initial value is -∞, so we compare each terminal value with the maximizer's initial value and determine the higher node values, finding the maximum among them all.
o For node D max(-1, -∞) => max(-1, 4) = 4
o For Node E max(2, -∞) => max(2, 6)= 6
o For Node F max(-3, -∞) => max(-3,-5) = -3
o For node G max(0, -∞) = max(0, 7) = 7

Step 3: In the next step it is the minimizer's turn, so it compares all node values with +∞ and finds the third-layer node values.
o For node B = min(4, 6) = 4
o For node C = min(-3, 7) = -3

Step 4: Now it is the maximizer's turn; it again chooses the maximum of all node values, giving the value of the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.

o For node A = max(4, -3) = 4
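The computation in Steps 1-4 can be expressed as a short recursive sketch, with the tree encoded as nested lists and the terminal utilities taken from the example:

```python
# A recursive minimax sketch for the tree in the steps above.
# The tree is encoded as nested lists; integers are terminal utilities.

def minimax(node, maximizing):
    if isinstance(node, int):                  # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Terminal values from the example: D=(-1, 4), E=(2, 6), F=(-3, -5), G=(0, 7)
tree = [[[-1, 4], [2, 6]],                     # B = min(max(-1, 4), max(2, 6)) = 4
        [[-3, -5], [0, 7]]]                    # C = min(max(-3, -5), max(0, 7)) = -3
print(minimax(tree, True))                     # A = max(4, -3) = 4
```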

ALPHA-BETA PRUNING:

o Alpha-beta pruning is a modified version of the minimax algorithm: an optimization technique for minimax.
o As we saw with the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning (also known as the alpha-beta algorithm).
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only tree leaves but entire sub-trees.
o The two parameters can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any point along the path of the maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the path of the minimizer. The initial value of beta is +∞.
o Alpha-beta pruning returns the same move as the standard minimax algorithm, but it removes all the nodes that do not affect the final decision and only make the algorithm slow. Pruning these nodes makes the algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is:

α >= β
Key points about alpha-beta pruning:
o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, node values are passed to upper nodes, not the alpha and beta values.
o Alpha and beta values are passed only to child nodes.

Working of Alpha-Beta Pruning:


Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: The Max player starts, making the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as it is Min's turn: β = +∞ is compared with the available successor node value, min(∞, 3) = 3, so at node B now α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn and the value of alpha changes. The current value of alpha is compared with 5: max(-∞, 5) = 5, so at node E α = 5 and β = 3. Now α >= β, so the right successor of E is pruned; the algorithm does not traverse it, and the value at node E is 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A the value of alpha changes: the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, 0: max(3, 0) = 3; and then with the right child, 1: max(3, 1) = 3. α remains 3, but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; the value of beta changes: min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, node G, is pruned, and the algorithm does not compute that entire sub-tree.
Step 8: C now returns the value 1 to A, where the best value is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never computed. The optimal value for the maximizer in this example is 3.
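The trace above can be reproduced in code. In this sketch the leaf values for the pruned branches (the right child of E and both children of G) are hypothetical, since the algorithm never examines them:

```python
# Alpha-beta sketch reproducing the trace above. Leaf values for the
# pruned branches (right child of E, both children of G) are hypothetical,
# since the algorithm never examines them.

visited = []                                   # terminal nodes actually evaluated

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):                  # terminal state
        visited.append(node)
        return node
    value = float('-inf') if maximizing else float('inf')
    for child in node:
        v = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            value = max(value, v)
            alpha = max(alpha, value)          # Max updates only alpha
        else:
            value = min(value, v)
            beta = min(beta, value)            # Min updates only beta
        if alpha >= beta:                      # the pruning condition
            break
    return value

# D=(2, 3), E=(5, 9), F=(0, 1), G=(7, 8): 9, 7 and 8 are never visited
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 8]]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # 3
print(visited)                                 # [2, 3, 5, 0, 1]
```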

Move Ordering in Alpha-Beta pruning:


The effectiveness of alpha-beta pruning is highly dependent on the order in
which each node is examined. Move order is an important aspect of alpha-beta
pruning.

It can be of two types:

o Worst ordering: In some cases the alpha-beta pruning algorithm prunes none of the leaves of the tree and works exactly like the minimax algorithm. It then consumes even more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. In this case the best move occurs on the right side of the tree. The time complexity for such an order is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. Since we apply DFS, it searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity for ideal ordering is O(b^(m/2)).

Rules to find good ordering:


Following are some rules to find good ordering in alpha-beta pruning:

o Take the best move from the shallowest node.
o Order the nodes in the tree such that the best nodes are checked first.
o Use domain knowledge while finding the best move. For example, in chess try the order: captures first, then threats, then forward moves, then backward moves.
o We can bookkeep the states, as there is a possibility that states may
repeat.

Water Jug problem


A Water Jug Problem: You are given two jugs, a 4-gallon one and a 3-gallon
one, a pump which has unlimited water which you can use to fill the jug, and the
ground on which water may be poured. Neither jug has any measuring markings
on it. How can you get exactly 2 gallons of water in the 4-gallon jug?
Here the initial state is (0, 0). The goal state is (2, n) for any value of n.
State Space Representation: we will represent a state of the problem as a tuple
(x, y) where x represents the amount of water in the 4-gallon jug and y
represents the amount of water in the 3-gallon jug. Note that 0 ≤ x ≤ 4, and 0 ≤ y
≤ 3.
To solve this we have to make some assumptions not mentioned in the problem.
They are:
 We can fill a jug from the pump.
 We can pour water out of a jug to the ground.
 We can pour water from one jug to another.
 There is no measuring device available.
Operators — we must define a set of operators that will take us from one state to
another.
There are several sequences of operations that will solve the problem. One possible solution is:
(0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
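The operators and a systematic control strategy can be combined into a short solver. This sketch uses breadth-first search over the state space; the operator encoding is one reasonable choice, not the only one:

```python
# A breadth-first search over the state space (x, y), 0 <= x <= 4 and
# 0 <= y <= 3. BFS returns one shortest operator sequence.

from collections import deque

def successors(x, y):
    return [
        (4, y), (x, 3),                        # fill a jug from the pump
        (0, y), (x, 0),                        # empty a jug onto the ground
        (min(4, x + y), max(0, y - (4 - x))),  # pour the 3-gal jug into the 4-gal jug
        (max(0, x - (3 - y)), min(3, x + y)),  # pour the 4-gal jug into the 3-gal jug
    ]

def solve(start=(0, 0)):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == 2:                      # goal: 2 gallons in the 4-gal jug
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parents:             # avoid revisiting states (be systematic)
                parents[nxt] = state
                frontier.append(nxt)

print(solve())   # a shortest 7-state solution, such as
                 # [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
```

The `parents` dictionary doubles as a visited set, which is exactly the "systematic" property a good control strategy needs: no state is ever expanded twice.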

HILL CLIMBING:

Hill climbing is a simple optimization algorithm used in Artificial Intelligence


(AI) to find the best possible solution for a given problem. It belongs to the
family of local search algorithms and is often used in optimization problems
where the goal is to find the best solution from a set of possible solutions.
 In Hill Climbing, the algorithm starts with an initial solution and then
iteratively makes small changes to it in order to improve the solution.
These changes are based on a heuristic function that evaluates the quality of
the solution. The algorithm continues to make these small changes until it
reaches a local maximum, meaning that no further improvement can be
made with the current set of moves.
State Space Diagram – Hill Climbing in Artificial Intelligence

 Local Maxima/Minima: A local maximum is a state that is better than its

neighbouring states; however, it is not the best possible state, as there

exists a state where the objective function value is higher

 Global Maxima/Minima: This is the best possible state in the state

diagram; here the value of the objective function is highest

 Current State: Current State is the state where the agent is present

currently

 Flat Local Maximum: This region is depicted by a straight line where all

neighbouring states have the same value, so every node in the region is a

local maximum.


Types of Hill Climbing

1. Simple Hill Climbing:

Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbour node state at a time and selects the first one that improves the current cost, setting it as the current state. It checks only one successor state; if that state is better than the current state it moves, otherwise it stays in the same state. This algorithm has the following features:

o Less time-consuming
o Less optimal solution, and the solution is not guaranteed

Algorithm for Simple Hill Climbing:

o Step 1: Evaluate the initial state, if it is goal state then return success and
Stop.
o Step 2: Loop Until a solution is found or there is no new operator left to
apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check new state:

a. If it is goal state, then return success and quit.


b. Else if it is better than the current state then assign new state as a
current state.
c. Else, if it is not better than the current state, then return to step 2.
o Step 5: Exit.
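The steps above can be sketched on a one-dimensional objective (an assumed toy example, not from the notes): accept the first neighbour that improves on the current state.

```python
# Simple hill climbing on a one-dimensional objective (an assumed toy
# example): accept the FIRST neighbour that improves on the current state.

def simple_hill_climbing(objective, start, step=1):
    current = start
    while True:
        for neighbour in (current - step, current + step):
            if objective(neighbour) > objective(current):
                current = neighbour            # take the first better successor
                break
        else:
            return current                     # no better neighbour: local maximum

f = lambda x: -(x - 3) ** 2                    # a single peak at x = 3
print(simple_hill_climbing(f, 0))              # 3
```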

2. Steepest-Ascent hill climbing:

The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighbouring nodes of the current state and selects the neighbour node that is closest to the goal state. This algorithm consumes more time, as it searches multiple neighbours.

Algorithm for Steepest-Ascent hill climbing:

o Step 1: Evaluate the initial state, if it is goal state then return success and
stop, else make current state as initial state.
o Step 2: Loop until a solution is found or the current state does not
change.

a. Let SUCC be a state such that any successor of the current state
will be better than it.
b. For each operator that applies to the current state:

a. Apply the new operator and generate a new state.


b. Evaluate the new state.
c. If it is goal state, then return it and quit, else compare it to
the SUCC.
d. If it is better than SUCC, then set new state as SUCC.
e. If the SUCC is better than the current state, then set current
state to SUCC.
o Step 3: Exit
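The steepest-ascent variant can be sketched on the same assumed toy objective: examine all neighbours, pick the best one as SUCC, and stop when SUCC is no better than the current state.

```python
# Steepest-ascent variant on an assumed toy objective: examine ALL
# neighbours and move to the best one (SUCC), stopping when none improves.

def steepest_ascent(objective, start, step=1):
    current = start
    while True:
        succ = max((current - step, current + step), key=objective)
        if objective(succ) <= objective(current):
            return current                     # SUCC is no better: stop
        current = succ                         # set current state to SUCC

f = lambda x: -(x - 3) ** 2                    # a single peak at x = 3
print(steepest_ascent(f, 10))                  # 3
```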

3. Simulated Annealing:
 Simulated annealing is a probabilistic variation of Hill Climbing
that allows the algorithm to occasionally accept worse moves in
order to avoid getting stuck in local maxima.
 Simulated Annealing is a probabilistic optimization algorithm that
simulates the metallurgical annealing process in order to discover
the best solution in a given search area by accepting less-than-
ideal solutions with a predetermined probability.
 Simulated annealing seeks the global optimum in a given search
space by accepting poorer answers with a predetermined
probability. This allows it to bypass local optimum conditions.
 Simulated annealing explores the search space and avoids local
optimum by employing a probabilistic method to accept a worse
solution with a given probability. As the algorithm advances, the
likelihood of accepting an inferior answer diminishes.
 Simulated annealing has a chance of escaping the local optimum
and locating the global optimum.
 When the temperature hits a predetermined level or the maximum
number of repetitions, simulated annealing comes to an end.
 Simulated annealing is more efficient at locating the global
optimum than Hill Climbing, particularly for complicated
situations with numerous local optima. Simulated annealing is
slower than Hill Climbing.
 The beginning temperature, cooling schedule, and acceptance
probability function are only a few of the tuning factors for
Simulated Annealing.
 Several fields, including logistics, scheduling, and circuit design,
use simulated annealing.
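A minimal sketch of simulated annealing follows. The starting temperature, cooling schedule and neighbourhood here are assumed tuning choices, not from the notes:

```python
# A minimal simulated-annealing sketch. The starting temperature, cooling
# schedule and neighbourhood are assumed tuning choices for illustration.

import math
import random

def simulated_annealing(objective, start, t_start=10.0, cooling=0.95, t_min=1e-3):
    random.seed(0)                             # reproducible run
    current, t = start, t_start
    while t > t_min:
        neighbour = current + random.choice([-1, 1])
        delta = objective(neighbour) - objective(current)
        # Always accept improvements; accept worse moves with
        # probability exp(delta / T), which shrinks as T cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = neighbour
        t *= cooling                           # cooling schedule
    return current

f = lambda x: -(x - 3) ** 2                    # global maximum at x = 3
print(simulated_annealing(f, 20))
```

Early on, when T is high, even sharply worse moves have a reasonable acceptance probability, which lets the search escape local maxima; as T cools toward `t_min`, the behaviour approaches plain hill climbing.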

Problems in Hill Climbing Algorithm


Here we discuss the problems in the hill-climbing algorithm:
1. Local Maximum
The algorithm terminates when the current node is a local maximum, as it is better than its neighbours. However, there exists a global maximum where the objective function value is higher.
Solution: Backtracking can mitigate the problem of local maxima: the algorithm starts exploring alternate paths when it encounters a local maximum.
2. Ridge
A ridge occurs when there are multiple peaks that all have the same value; in other words, there are multiple local maxima with the same value as the global maximum.

Solution: The ridge obstacle can be overcome by moving in several directions at the same time.
3. Plateau
A plateau is a region where all the neighbouring nodes have the same value of the objective function, so the algorithm finds it hard to select an appropriate direction.

Solution: The plateau obstacle can be overcome by making a big jump from the current state, which will land you in a non-plateau region.
WHAT IS AN EXPERT SYSTEM?

An expert system is a computer program that is designed to solve complex problems and to provide decision-making ability like a human expert. It performs this by extracting knowledge from its knowledge base, using reasoning and inference rules according to the user's queries.
The expert system is a part of AI, and the first ES was developed in the year 1970, one of the first successful applications of artificial intelligence. It solves the most complex issues, as an expert would, by extracting the knowledge stored in its knowledge base. The system helps in decision making for complex problems using both facts and heuristics, like a human expert. It is called an expert system because it contains the expert knowledge of a specific domain and can solve complex problems of that particular domain. These systems are designed for a specific domain, such as medicine or science.

The performance of an expert system is based on the expert's knowledge stored in its knowledge base: the more knowledge stored in the KB, the more the system improves its performance. A common example of an ES is the suggestion of spelling corrections while typing in the Google search box.

Below is the block diagram that represents the working of an expert system:

Components of Expert System


An expert system mainly consists of three components:

o User Interface
o Inference Engine
o Knowledge Base
1. User Interface

With the help of a user interface, the expert system interacts with the user, takes
queries as an input in a readable format, and passes it to the inference engine.
After getting the response from the inference engine, it displays the output to
the user. In other words, it is an interface that helps a non-expert user to
communicate with the expert system to find a solution.

2. Inference Engine(Rules of Engine)

o The inference engine is known as the brain of the expert system, as it is the main processing unit of the system. It applies inference rules to the knowledge base to derive a conclusion or deduce new information. It helps in deriving an error-free solution to the queries asked by the user.
o With the help of the inference engine, the system extracts the knowledge from the knowledge base.
o There are two types of inference engine:
o Deterministic inference engine: The conclusions drawn from this type of inference engine are assumed to be true. It is based on facts and rules.
o Probabilistic inference engine: This type of inference engine allows uncertainty in its conclusions, which are based on probability.

Inference engine uses the below modes to derive the solutions:

o Forward Chaining: It starts from the known facts and rules, and applies
the inference rules to add their conclusion to the known facts.
o Backward Chaining: It is a backward reasoning method that starts from
the goal and works backward to prove the known facts.
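Forward chaining, as described above, can be sketched in a few lines; the "frog" rules below are a made-up illustration:

```python
# A forward-chaining sketch: start from known facts and apply rules until
# no new conclusion can be added. The "frog" rules are a made-up example.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)          # add the rule's conclusion to the facts
                changed = True
    return facts

rules = [({"croaks", "eats_flies"}, "frog"), ({"frog"}, "green")]
print(forward_chain({"croaks", "eats_flies"}, rules))   # adds 'frog', then 'green'
```

Backward chaining would instead start from the goal (say "green"), look for a rule whose conclusion matches it, and then recursively try to establish that rule's premises.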

3. Knowledge Base

o The knowledge base is a type of storage that stores knowledge acquired from different experts of the particular domain. It is considered a large store of knowledge. The larger the knowledge base, the more precise the expert system will be.
o It is similar to a database that contains information and rules of a
particular domain or subject.
o One can also view the knowledge base as a collection of objects and their attributes. For example, a lion is an object, and its attributes are that it is a mammal, it is not a domestic animal, etc.

RULES OF INFERENCE IN ARTIFICIAL INTELLIGENCE


Inference:

In artificial intelligence, we need intelligent computers that can create new logic from old logic or from evidence; generating conclusions from evidence and facts is termed inference.

Inference rules:

Inference rules are the templates for generating valid arguments. They are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal.

In inference rules, the implication among all the connectives plays an important
role. Following are some terminologies related to inference rules:

o Implication: It is one of the logical connectives and can be represented as P → Q. It is a Boolean expression.
o Converse: The converse of an implication swaps the right-hand side proposition with the left-hand side and vice versa. It can be written as Q → P.
o Contrapositive: The negation of the converse is termed the contrapositive, and it can be represented as ¬Q → ¬P.
o Inverse: The negation of an implication is called the inverse. It can be represented as ¬P → ¬Q.
Types of Inference rules:
1. Modus Ponens:
The Modus Ponens rule is one of the most important rules of inference. It states that if P and P → Q are true, then we can infer that Q is true. It can be represented as:

P, P → Q ∴ Q

2. Modus Tollens:
The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true. It can be represented as:

P → Q, ¬Q ∴ ¬P
3. Disjunctive Syllogism:
The Disjunctive Syllogism rule states that if P ∨ Q is true and ¬P is true, then Q will be true. It can be represented as:

P ∨ Q, ¬P ∴ Q

4. Addition:
The Addition rule is one of the common inference rules. It states that if P is true, then P ∨ Q will be true. It can be represented as:

P ∴ P ∨ Q
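The validity of these rules can be machine-checked by enumerating all truth assignments: a rule is valid when its conclusion is true in every model of its premises.

```python
# Machine-checking the inference rules by enumerating truth assignments:
# a rule is valid when its conclusion is true in every model of its premises.

from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus Ponens:          P, P -> Q      therefore Q
print(valid([lambda p, q: p, implies], lambda p, q: q))                  # True
# Modus Tollens:         P -> Q, not Q  therefore not P
print(valid([implies, lambda p, q: not q], lambda p, q: not p))          # True
# Disjunctive Syllogism: P or Q, not P  therefore Q
print(valid([lambda p, q: p or q, lambda p, q: not p], lambda p, q: q))  # True
# Addition:              P              therefore P or Q
print(valid([lambda p, q: p], lambda p, q: p or q))                      # True
```

The same checker rejects invalid argument forms: for example, affirming the consequent (from P → Q and Q, conclude P) fails the test.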
